<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Chuck Ha</title>
    <description>The latest articles on DEV Community by Chuck Ha (@chuck_ha).</description>
    <link>https://dev.to/chuck_ha</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F77874%2F7bb4d021-a2f1-4f01-be94-9ef683f024b2.jpg</url>
      <title>DEV Community: Chuck Ha</title>
      <link>https://dev.to/chuck_ha</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chuck_ha"/>
    <language>en</language>
    <item>
      <title>Data types in Kubernetes: PriorityQueue</title>
      <dc:creator>Chuck Ha</dc:creator>
      <pubDate>Mon, 22 Apr 2019 14:57:32 +0000</pubDate>
      <link>https://dev.to/chuck_ha/data-types-in-kubernetes-priorityqueue-38d2</link>
      <guid>https://dev.to/chuck_ha/data-types-in-kubernetes-priorityqueue-38d2</guid>
      <description>

&lt;p&gt;Kubernetes Version: &lt;code&gt;v1.13.2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The Kubernetes scheduler (&lt;a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/"&gt;kube-scheduler&lt;/a&gt;) is the component of Kubernetes that is responsible for assigning pods to nodes. This is called scheduling and thus the name kube-scheduler. The kube-scheduler has a feature called &lt;a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/"&gt;pod priority and preemption&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Pod priority and preemption allows pods to be assigned a priority value in the pod spec. During pod scheduling, Kubernetes will take the priority into account. Pods with a higher priority will be scheduled ahead of lower priority pods. Additionally, lower priority pods can be evicted in favor of a higher priority pod in low resource situations.&lt;/p&gt;

&lt;p&gt;First, let's look at how the kube-scheduler works. The kube-scheduler &lt;a href="https://github.com/kubernetes/kubernetes/blob/v1.13.2/pkg/scheduler/scheduler.go#L509"&gt;has access to a queue of pods&lt;/a&gt; that need to be scheduled. Whenever a pod is created or modified, the pod is added to the kube-scheduler's pod queue. The kube-scheduler &lt;a href="https://github.com/kubernetes/kubernetes/blob/v1.13.2/pkg/scheduler/factory/factory.go#L113-L117"&gt;waits for pods to exist on the pod queue&lt;/a&gt;, dequeues a pod and schedules it.&lt;/p&gt;

&lt;p&gt;The kube-scheduler's pod queue can either return pods in the order in which the pods joined the queue or based on the pod's priority. In the case of first-in-first-out, if 5 pods are created and need to be scheduled, the kube-scheduler may schedule the first 4 and then run out of resources for the last pod. If that last pod was the most important pod to schedule then you are out of luck. You'll have to delete some pods until you have enough resources to allow the scheduler to schedule it.&lt;/p&gt;

&lt;p&gt;If the kube-scheduler's pod queue is ordered by priority, you can assign each pod a priority. If the same 5 pods are created but each has a different priority, the highest-priority pod will be scheduled first. And if that pod arrives after the other 4 have already been scheduled and no resources remain, the kube-scheduler will preempt (evict) lower-priority pods until a node has enough room to schedule the highest-priority pod.&lt;/p&gt;

&lt;p&gt;The kube-scheduler is able to swap out the implementation of the pod queue because it is abstracted behind an &lt;a href="https://github.com/kubernetes/kubernetes/blob/v1.13.2/pkg/scheduler/internal/queue/scheduling_queue.go#L54"&gt;interface&lt;/a&gt;. If pod priority is disabled, the kube-scheduler uses a plain first-in-first-out queue as the data structure to satisfy the scheduling queue interface. However, in the default case, where pod priority is enabled, the kube-scheduler uses a priority queue to implement the scheduling queue.&lt;/p&gt;

&lt;p&gt;A priority queue has the same interface as a regular queue:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="x"&gt; &lt;/span&gt;&lt;span class="n"&gt;Queue&lt;/span&gt;&lt;span class="x"&gt; &lt;/span&gt;&lt;span class="k"&gt;interface&lt;/span&gt;&lt;span class="x"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="x"&gt;

    &lt;/span&gt;&lt;span class="c"&gt;// Put an item on the queue&lt;/span&gt;&lt;span class="x"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;Enqueue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;interface&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt;&lt;span class="x"&gt;

    &lt;/span&gt;&lt;span class="c"&gt;// Remove an item from the queue&lt;/span&gt;&lt;span class="x"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;Dequeue&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="x"&gt; &lt;/span&gt;&lt;span class="k"&gt;interface&lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;&lt;span class="x"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="x"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The difference is entirely in the underlying implementation.&lt;/p&gt;

&lt;p&gt;In Go it's common to see a &lt;a href="https://github.com/kubernetes/kubernetes/blob/v1.13.2/staging/src/k8s.io/client-go/tools/cache/fifo.go#L98"&gt;first-in-first-out queue implemented with a slice&lt;/a&gt; or a channel as the underlying data structure.&lt;/p&gt;

&lt;p&gt;If the queue is a priority queue rather than first-in-first-out, a heap can be used as the underlying data structure, which is the standard way to implement a priority queue.&lt;/p&gt;

&lt;p&gt;Go has a built-in heap interface defined in the &lt;a href="https://golang.org/pkg/container/heap/"&gt;&lt;code&gt;container/heap&lt;/code&gt;&lt;/a&gt; package. You implement a handful of methods (&lt;code&gt;Len&lt;/code&gt;, &lt;code&gt;Less&lt;/code&gt;, &lt;code&gt;Swap&lt;/code&gt;, &lt;code&gt;Push&lt;/code&gt; and &lt;code&gt;Pop&lt;/code&gt;) and get the common heap operations implemented efficiently by the Go library authors.&lt;/p&gt;

&lt;p&gt;We can see the two queue implementations in the kube-scheduler's code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes/kubernetes/blob/v1.13.2/pkg/scheduler/internal/queue/scheduling_queue.go#L91"&gt;A first-in-first-out queue&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes/kubernetes/blob/v1.13.2/pkg/scheduler/internal/queue/scheduling_queue.go#L195"&gt;A priority queue&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To recap from an outside-in perspective:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The kube-scheduler depends on a SchedulingQueue to get the next pod to schedule. &lt;/li&gt;
&lt;li&gt;The SchedulingQueue can be implemented in a variety of ways, but by default (when pod priority is enabled) it uses a priority queue. &lt;/li&gt;
&lt;li&gt;The priority queue uses a heap to keep a priority-based order of the pods it knows about. This means that when the kube-scheduler gets the next pod to schedule, the pod with the highest priority is returned first.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Please give me feedback if you feel the desire! I'm always looking to improve my technical writing skill.&lt;/p&gt;


</description>
      <category>kubernetes</category>
      <category>datatypes</category>
      <category>go</category>
      <category>queue</category>
    </item>
    <item>
      <title>An Error Wrapping Strategy for Go</title>
      <dc:creator>Chuck Ha</dc:creator>
      <pubDate>Tue, 20 Nov 2018 21:07:39 +0000</pubDate>
      <link>https://dev.to/chuck_ha/an-error-wrapping-strategy-for-go-14i1</link>
      <guid>https://dev.to/chuck_ha/an-error-wrapping-strategy-for-go-14i1</guid>
<description>&lt;p&gt;Does this output look familiar?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;error running program: error parsing yaml: error opening file: error no such file "somefile.yaml"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above is a pseudo-stack trace that you've likely encountered. It's better than getting only &lt;code&gt;error no such file "somefile.yaml"&lt;/code&gt;, since it at least gives some context about where the error came from. But it's not great: it's hard to trace back to the code, and it isn't pretty to read. The code that generated this error might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"error running program: %v&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;parseYAML&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"somefile.yaml"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"error parsing yaml: %v&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;parseYAML&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"error opening file: %v"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://play.golang.org/p/YAclBp1ebMQ" rel="noopener noreferrer"&gt;Link to the go playground with the code above&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This method is prone to programmer error, and without file names and line numbers it can still be a pain to trace through the code. The improvement to make here is to record the stack trace!&lt;/p&gt;

&lt;p&gt;The code above should look more like this after this improvement:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"encountered an error: %v&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;stackerr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ok&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;StackTrace&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="n"&gt;ok&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;stackerr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;StackTrace&lt;/span&gt;&lt;span class="p"&gt;()))&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;parseYAML&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"somefile.yaml"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;// A local error type that holds a stack trace&lt;/span&gt;
&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;Error&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;Err&lt;/span&gt;   &lt;span class="kt"&gt;error&lt;/span&gt;
    &lt;span class="n"&gt;Stack&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Err&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;StackTrace&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Stack&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;StackTrace&lt;/span&gt; &lt;span class="k"&gt;interface&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;StackTrace&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;parseYAML&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c"&gt;// keep the original error that was found&lt;/span&gt;
            &lt;span class="n"&gt;Err&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="c"&gt;// Modify it with the stack trace&lt;/span&gt;
            &lt;span class="n"&gt;Stack&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;debug&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://play.golang.org/p/Rg66y0QBCc8" rel="noopener noreferrer"&gt;Link to the go playground with the code above&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the output specifies exactly what error occurred and where it occurred.&lt;/p&gt;

&lt;p&gt;The rules for this refactoring are pretty straightforward:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Wrap errors at the edge of your program.&lt;/li&gt;
&lt;li&gt;Print the stack at the top level. &lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Rule 1
&lt;/h3&gt;

&lt;p&gt;Technically, this rule should be "wrap any error that doesn't already have a stack trace". But since there is no guarantee that libraries add stack traces to the errors they return, it is easier to assume none of them do and to always add one yourself. The standard library is no exception to this rule.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rule 2
&lt;/h3&gt;

&lt;p&gt;Printing or logging the stack is very important, especially when the system or environment that generated it is inaccessible. In the open source world it's very common to get an issue where the submitter copies and pastes the command they ran and the error they encountered. If your error doesn't have a stack trace, good luck getting someone to asynchronously rerun your program, or a custom binary, or the same command with a special flag, under the same circumstances. Some people will, and that's great, but it would be fantastic if this information were available on every issue.&lt;/p&gt;

&lt;p&gt;If you're looking for a library that does this for you, look no further than &lt;a href="https://github.com/pkg/errors" rel="noopener noreferrer"&gt;https://github.com/pkg/errors&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>go</category>
      <category>errors</category>
    </item>
    <item>
      <title>Forcing a kubernetes version during build</title>
      <dc:creator>Chuck Ha</dc:creator>
      <pubDate>Mon, 19 Nov 2018 21:06:09 +0000</pubDate>
      <link>https://dev.to/chuck_ha/forcing-a-kubernetes-version-during-build-5dfi</link>
      <guid>https://dev.to/chuck_ha/forcing-a-kubernetes-version-during-build-5dfi</guid>
      <description>&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Date: November, 19th, 2018
Kubernetes: https://github.com/kubernetes/kubernetes/commit/8848740f6d0f84c2c4c5165736e12425551a6207
The current release is just before v1.13.0 so the newest tag is v1.14.0-alpha.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When a &lt;a href="https://github.com/kubernetes/kubernetes/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; binary is built with &lt;a href="https://bazel.build/" rel="noopener noreferrer"&gt;bazel&lt;/a&gt; it will set a default version on the binary for you. That is, when you run &lt;code&gt;kubectl version&lt;/code&gt; the version is populated automatically.&lt;/p&gt;

&lt;p&gt;However, sometimes you want a custom version, probably for testing purposes. For example, if you are testing &lt;a href="https://github.com/kubernetes/kubernetes/tree/master/cmd/kubeadm" rel="noopener noreferrer"&gt;kubeadm&lt;/a&gt;'s upgrade feature it helps to force a version to avoid some annoying behavior.&lt;/p&gt;

&lt;p&gt;Bazel calculates this version through a script that is defined by a flag called &lt;code&gt;--workspace_status_command&lt;/code&gt;. That argument is specified in the &lt;a href="https://github.com/kubernetes/kubernetes/blob/8848740f6d0f84c2c4c5165736e12425551a6207/build/root/.bazelrc" rel="noopener noreferrer"&gt;&lt;code&gt;.bazelrc&lt;/code&gt; file&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;print-workspace-status.sh&lt;/code&gt; calls &lt;a href="https://github.com/kubernetes/kubernetes/blob/8848740f6d0f84c2c4c5165736e12425551a6207/hack/lib/version.sh#L34" rel="noopener noreferrer"&gt;a bash function&lt;/a&gt; and prints some version information in a very specific format that bazel uses.&lt;/p&gt;

&lt;p&gt;If you want to customize this, you have two options.&lt;/p&gt;

&lt;h4&gt;
  
  
  Option 1
&lt;/h4&gt;

&lt;p&gt;Define a custom &lt;code&gt;--workspace_status_command&lt;/code&gt; script that emits whatever versions you want.&lt;/p&gt;

&lt;p&gt;Create this file as &lt;code&gt;workspace-status.sh&lt;/code&gt; and make it executable with &lt;code&gt;chmod +x workspace-status.sh&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
gitCommit &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;git rev-parse &lt;span class="s2"&gt;"HEAD^{commit}"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;
gitTreeState clean
gitVersion v2.0.0
gitMajor 2
gitMinor 0
buildDate &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SOURCE_DATE_EPOCH&lt;/span&gt;:+&lt;span class="s2"&gt;"--date=@&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SOURCE_DATE_EPOCH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;-u&lt;/span&gt; +&lt;span class="s1"&gt;'%Y-%m-%dT%H:%M:%SZ'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run &lt;code&gt;bazel build --workspace_status_command=./workspace-status.sh //cmd/kubeadm&lt;/code&gt; and you can see it works when you run &lt;code&gt;kubeadm version&lt;/code&gt; and get this output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubeadm version
kubeadm version: &amp;amp;version.Info{Major:"2", Minor:"0", GitVersion:"v2.0.0", GitCommit:"679d4397cfdb386ebd3ae4bcb9972273b3f75ca3", GitTreeState:"clean", BuildDate:"2018-11-19T20:43:30Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"darwin/amd64"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Option 2
&lt;/h4&gt;

&lt;p&gt;You can define an environment variable, &lt;code&gt;KUBE_GIT_VERSION_FILE&lt;/code&gt;, that defines a file in which the versions are already specified.&lt;/p&gt;

&lt;p&gt;Create a file called &lt;code&gt;version.txt&lt;/code&gt; and put the following contents in it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;KUBE_GIT_COMMIT=abcd
KUBE_GIT_TREE_STATE="clean"
KUBE_GIT_VERSION="v2.0.3"
KUBE_GIT_MAJOR=2
KUBE_GIT_MINOR=0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now point the environment variable at that file with &lt;code&gt;export KUBE_GIT_VERSION_FILE=version.txt&lt;/code&gt; and run &lt;code&gt;bazel build //cmd/kubeadm&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Then when you run &lt;code&gt;kubeadm version&lt;/code&gt; you will see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubeadm version
kubeadm version: &amp;amp;version.Info{Major:"2", Minor:"0", GitVersion:"v2.0.3", GitCommit:"abcd", GitTreeState:"clean", BuildDate:"2018-11-19T20:58:25Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"darwin/amd64"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>kubernetes</category>
      <category>versions</category>
      <category>bazel</category>
      <category>bash</category>
    </item>
    <item>
      <title>Anki 2.1.x Add-ons</title>
      <dc:creator>Chuck Ha</dc:creator>
      <pubDate>Sat, 03 Nov 2018 01:38:25 +0000</pubDate>
      <link>https://dev.to/chuck_ha/anki-21x-add-ons-176c</link>
      <guid>https://dev.to/chuck_ha/anki-21x-add-ons-176c</guid>
      <description>&lt;p&gt;Installing Anki add-ons is not too bad, but it's slightly different between versions 2.0.x and 2.1.x. &lt;/p&gt;

&lt;p&gt;For background, I started here with this &lt;a href="http://massimmersionapproach.com/table-of-contents/anki/low-key-anki/low-key-anki-summary-and-installation/" rel="noopener noreferrer"&gt;Low Key Anki&lt;/a&gt; guide and wanted to install one of the provided plugins (ResetEZ.py) but it didn't work out of the box.&lt;/p&gt;

&lt;p&gt;On OS X, add-ons live in &lt;code&gt;$HOME/Library/Application Support/Anki2/addons21&lt;/code&gt;. Make a directory; I named mine &lt;code&gt;resetez&lt;/code&gt;. Inside it, add a file named &lt;code&gt;__init__.py&lt;/code&gt; and paste in the contents of &lt;code&gt;ResetEZ.py&lt;/code&gt;, found via the link above (maybe a page or two ahead). Read the instructions and configure. Restart Anki and you should see a new menu item under Tools!&lt;/p&gt;

</description>
      <category>configuration</category>
      <category>anki</category>
      <category>study</category>
    </item>
    <item>
      <title>Customized Kubespray Deployment</title>
      <dc:creator>Chuck Ha</dc:creator>
      <pubDate>Wed, 24 Oct 2018 15:23:06 +0000</pubDate>
      <link>https://dev.to/chuck_ha/customized-kubespray-deployment-4e71</link>
      <guid>https://dev.to/chuck_ha/customized-kubespray-deployment-4e71</guid>
      <description>&lt;h1&gt;
  
  
  Versions
&lt;/h1&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;kubespray&lt;/th&gt;
&lt;th&gt;ansible&lt;/th&gt;
&lt;th&gt;python&lt;/th&gt;
&lt;th&gt;terraform&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/bartlaarhoven/kubespray"&gt;this fork&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;2.7.0&lt;/td&gt;
&lt;td&gt;3.6.1&lt;/td&gt;
&lt;td&gt;0.11.8&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;⚠️ Warning! Use this fork: &lt;a href="https://github.com/kubernetes-incubator/kubespray/pull/3486"&gt;https://github.com/kubernetes-incubator/kubespray/pull/3486&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Motivation
&lt;/h1&gt;

&lt;p&gt;This post documents creating a Kubernetes cluster from nothing using kubespray on AWS, with Ubuntu images, behind a bastion host. I could not find documentation that put all of this together, so I wanted to write it down for myself for the next time I need to do this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Steps
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Clone kubespray and set up some default files&lt;/li&gt;
&lt;/ol&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/bartlaarhoven/kubespray
&lt;span class="nb"&gt;cd &lt;/span&gt;kubespray
virtualenv ks &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; ks/bin/activate
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-Rp&lt;/span&gt; inventory/sample/ inventory/mycluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create an IAM user with admin privileges in some account (TODO probably scope this down?)&lt;/li&gt;
&lt;li&gt;Create an EC2 key pair&lt;/li&gt;
&lt;li&gt;Copy the &lt;a href="https://github.com/kubernetes-incubator/kubespray/blob/7e84de2ae116f624b570eadc28022e924bd273bc/contrib/terraform/aws/credentials.tfvars.example"&gt;terraform environment file&lt;/a&gt; to &lt;code&gt;credentials.tfvars&lt;/code&gt; and modify it with the user's key and secret along with the ssh key pair name and the region you'd like the infrastructure to exist in.&lt;/li&gt;
&lt;li&gt;Customize the &lt;a href="https://github.com/kubernetes-incubator/kubespray/blob/7e84de2ae116f624b570eadc28022e924bd273bc/contrib/terraform/aws/terraform.tfvars"&gt;terraform file&lt;/a&gt; with the architecture you'd like; I used 1 master, 1 worker, 1 etcd and left bastions as default. Also modify the inventory file to be &lt;code&gt;../../../inventory/mycluster/hosts.ini&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Modify the &lt;a href="https://github.com/kubernetes-incubator/kubespray/blob/7e84de2ae116f624b570eadc28022e924bd273bc/contrib/terraform/aws/variables.tf#L23"&gt;variables.tf&lt;/a&gt; to be
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;    &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_ami"&lt;/span&gt; &lt;span class="s2"&gt;"distro"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;most_recent&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

      &lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"name"&lt;/span&gt;

        &lt;span class="nx"&gt;values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;

      &lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"virtualization-type"&lt;/span&gt;

        &lt;span class="nx"&gt;values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"hvm"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;

      &lt;span class="nx"&gt;owners&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"099720109477"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Run terraform
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    terraform apply &lt;span class="nt"&gt;--var-file&lt;/span&gt; credentials.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Modify the &lt;a href="https://github.com/kubernetes-incubator/kubespray/blob/7e84de2ae116f624b570eadc28022e924bd273bc/ansible.cfg"&gt;ansible.cfg&lt;/a&gt; file to use a bastion host by changing the &lt;code&gt;ssh_args&lt;/code&gt; value to
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ssh_args = -F ssh-bastion.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Modify the hosts.ini file. Use the internal DNS names as the &lt;code&gt;ansible_host&lt;/code&gt; for each of the instances in the private subnet and set &lt;code&gt;ansible_user&lt;/code&gt; to &lt;code&gt;ubuntu&lt;/code&gt;, for example:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    kubernetes-devtest-master0 ansible_host=ip-10-250-205-127.us-west-2.compute.internal ansible_user=ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Modify the bastion lines to use the &lt;em&gt;public&lt;/em&gt; DNS names as the &lt;code&gt;ansible_host&lt;/code&gt; and set &lt;code&gt;ansible_user&lt;/code&gt; to &lt;code&gt;ubuntu&lt;/code&gt;, for example:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    bastion-0 ansible_host=ec2-22-222-22-22.us-east-2.compute.amazonaws.com ansible_user=ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Run ansible-playbook
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ansible-playbook -i ./inventory/mycluster/hosts.ini ./cluster.yml -b --become-user=root --flush-cache
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you'd like more logs, add &lt;code&gt;-v&lt;/code&gt; or &lt;code&gt;-vv&lt;/code&gt;, up to &lt;code&gt;-vvvvv&lt;/code&gt;. I also like to pipe this to tee and write the logs to disk for inspection later in case of failure.&lt;/p&gt;
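&lt;p&gt;The tee pattern looks like this; sketched here with a stand-in function in place of the real ansible-playbook run (redirect stderr into the pipe as well if you want warnings saved too):&lt;/p&gt;

```shell
# Stand-in for the real ansible-playbook invocation above
run_playbook() {
  printf 'PLAY [k8s-cluster] *****\n'
  printf 'TASK [kubernetes/preinstall] *****\n'
}

# tee prints the output to the terminal and also writes a copy to install.log
run_playbook | tee install.log

# the saved copy can be inspected after a failure
grep PLAY install.log
```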

&lt;p&gt;Enjoy your new cluster~!&lt;/p&gt;

&lt;p&gt;P.S. if anyone knows how to get the code as part of the item so the bulleted stuff works please comment, I'd love to fix the numbering.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>kubespray</category>
    </item>
    <item>
      <title>Running dev.to in a container Part 2</title>
      <dc:creator>Chuck Ha</dc:creator>
      <pubDate>Wed, 15 Aug 2018 04:12:51 +0000</pubDate>
      <link>https://dev.to/chuck_ha/running-devto-in-a-container-part-2-l77</link>
      <guid>https://dev.to/chuck_ha/running-devto-in-a-container-part-2-l77</guid>
      <description>

&lt;p&gt;In my last post I covered building a docker image, but it was so rushed I won't even link it here.&lt;/p&gt;

&lt;p&gt;This article aims to improve on that mess of a Dockerfile. &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/chuckha/dev.to/commit/db3dc2fefd7f3adba16ad1532c78049d09be4336"&gt;commit&lt;/a&gt; can be found on my fork of the dev.to repo.&lt;/p&gt;

&lt;p&gt;In order to get this version of dev.to running locally, follow these steps (no cloning necessary):&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Setup a data directory for postgres&lt;/span&gt;
&lt;span class="nb"&gt;mkdir &lt;/span&gt;data

&lt;span class="c"&gt;# Create the network that is shared between containers&lt;/span&gt;
docker network create devto

&lt;span class="c"&gt;# Run the database&lt;/span&gt;
docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--network&lt;/span&gt; devto &lt;span class="nt"&gt;--name&lt;/span&gt; db &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;devto &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;devto &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;PracticalDeveloper_development &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="k"&gt;$(&lt;/span&gt;pwd&lt;span class="k"&gt;)&lt;/span&gt;/data:/var/lib/postgresql/data postgres:10
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;and then in another terminal follow these steps for the application server:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Setup the database and ensure dependencies are met&lt;/span&gt;
docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--network&lt;/span&gt; devto &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;CONNECT_TIMEOUT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;30 &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DATABASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;postgresql://devto:devto@db:5432/PracticalDeveloper_development chuckdha/dev.to:latest bin/setup

&lt;span class="c"&gt;# Run the webserver&lt;/span&gt;
docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--network&lt;/span&gt; devto &lt;span class="nt"&gt;-p&lt;/span&gt; 3000:3000 &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;CONNECT_TIMEOUT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;30 &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DATABASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;postgresql://devto:devto@db:5432/PracticalDeveloper_development chuckdha/dev.to:latest bin/rails s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Then visit &lt;a href="http://localhost:3000"&gt;http://localhost:3000&lt;/a&gt; in a web browser.&lt;/p&gt;

&lt;p&gt;Let me know what you think!&lt;/p&gt;


</description>
      <category>container</category>
      <category>docker</category>
      <category>devto</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Running dev.to in a container</title>
      <dc:creator>Chuck Ha</dc:creator>
      <pubDate>Mon, 13 Aug 2018 04:21:03 +0000</pubDate>
      <link>https://dev.to/chuck_ha/running-devto-in-a-container-3c15</link>
      <guid>https://dev.to/chuck_ha/running-devto-in-a-container-3c15</guid>
      <description>

&lt;h1&gt;NEW POST, BETTER UX!&lt;/h1&gt;


&lt;div class="ltag__link"&gt;
        &lt;a href="/chuck_ha" class="ltag__link__link"&gt;
          &lt;div class="ltag__link__pic"&gt;
            &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aTsAK_9_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--0V98nfMF--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://thepracticaldev.s3.amazonaws.com/uploads/user/profile_image/77874/7bb4d021-a2f1-4f01-be94-9ef683f024b2.jpg" alt="chuck_ha image"&gt;
          &lt;/div&gt;&lt;/a&gt;
          &lt;a href="/chuck_ha/running-devto-in-a-container-part-2-l77" class="ltag__link__link"&gt;
            &lt;div class="ltag__link__content"&gt;
              &lt;h2&gt;Running dev.to in a container Part 2&lt;/h2&gt;
              &lt;h3&gt;Chuck Ha&lt;/h3&gt;
              &lt;div class="ltag__link__taglist"&gt;
&lt;span class="ltag__link__tag"&gt;#container&lt;/span&gt;&lt;span class="ltag__link__tag"&gt;#docker&lt;/span&gt;&lt;span class="ltag__link__tag"&gt;#devto&lt;/span&gt;&lt;span class="ltag__link__tag"&gt;#webdev&lt;/span&gt;
&lt;/div&gt;
            &lt;/div&gt;
        &lt;/a&gt;
      &lt;/div&gt;


&lt;h1&gt;Original post below here&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://github.com/thepracticaldev/dev.to"&gt;Dev.to is open source&lt;/a&gt;! Awesome! Getting the application running is relatively easy, so props to the dev.to team for good documentation and a clean rails project. &lt;/p&gt;

&lt;p&gt;This article walks you through how to get a development version of dev.to running in a container.&lt;/p&gt;

&lt;h2&gt;The architecture&lt;/h2&gt;

&lt;p&gt;dev.to requires a web server and a database to run in development mode. It may require more services for production, but that's out of scope for now.&lt;/p&gt;

&lt;h3&gt;The database&lt;/h3&gt;

&lt;p&gt;The database needs to exist first as it is a dependency of the app. The database container and the app container will live in the same network so they can communicate over TCP.&lt;/p&gt;

&lt;p&gt;The database state will live on disk on our host machine (my laptop in this case). I like to mount a data directory into the postgres container so that I have some persistence when my container restarts. &lt;a href="https://hub.docker.com/_/postgres/"&gt;Here is some optional reading&lt;/a&gt; that might make this next part a little more clear.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# create the data directory&lt;/span&gt;
&lt;span class="nb"&gt;mkdir &lt;/span&gt;data

&lt;span class="c"&gt;# create the docker network that will be shared by db and app containers&lt;/span&gt;
docker network create devto

&lt;span class="c"&gt;# launch postgres&lt;/span&gt;
&lt;span class="c"&gt;#   `--rm` removes the container when it stops running. This just helps us clean up&lt;/span&gt;
&lt;span class="c"&gt;#   `--network devto` connects this container to our created network&lt;/span&gt;
&lt;span class="c"&gt;#   `--name db` names the container and is shown in the output of `docker ps`&lt;/span&gt;
&lt;span class="c"&gt;#   `-e POSTGRES_PASSWORD=devto` sets an environment variable which will set the password we will use to connect to postgres&lt;/span&gt;
&lt;span class="c"&gt;#   `-e POSTGRES_USER=devto` sets the env var which will define the user that will connect to postgres&lt;/span&gt;
&lt;span class="c"&gt;#   `-e POSTGRES_DB=PracticalDeveloper_development` sets the env var that defines the default database to create&lt;/span&gt;
&lt;span class="c"&gt;#   `-v $(PWD)/data:/var/lib/postgresql/data` mounts the data directory we created above into the postgres container where all the data will live&lt;/span&gt;
&lt;span class="c"&gt;#   `postgres:10` runs postgres using the latest stable 10 release (example: v10.5)&lt;/span&gt;
docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--network&lt;/span&gt; devto &lt;span class="nt"&gt;--name&lt;/span&gt; db &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;devto &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;devto &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;PracticalDeveloper_development &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="k"&gt;$(&lt;/span&gt;pwd&lt;span class="k"&gt;)&lt;/span&gt;/data:/var/lib/postgresql/data postgres:10
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;The web application&lt;/h3&gt;

&lt;p&gt;We have to modify the code a little bit to get this working with the following Dockerfile. Here are the changes I made:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add &lt;code&gt;gem "tzinfo-data"&lt;/code&gt; to the &lt;code&gt;Gemfile&lt;/code&gt; (I think this is Ubuntu-related, not 100% sure yet)&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;url: &amp;lt;%= ENV['DATABASE_URL'] %&amp;gt;&lt;/code&gt; in the default database configuration in &lt;code&gt;config/database.yaml&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Comment out &lt;code&gt;host: localhost&lt;/code&gt; in the same file under the test configuration&lt;/li&gt;
&lt;/ol&gt;
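&lt;p&gt;For reference, after those edits the default section of &lt;code&gt;config/database.yml&lt;/code&gt; ends up looking roughly like this (a sketch based on a standard Rails layout, not the exact committed file):&lt;/p&gt;

```yaml
# config/database.yml (sketch of the default section)
default: &amp;amp;default
  adapter: postgresql
  encoding: unicode
  # read the connection string from the environment instead of hardcoding it
  url: &amp;lt;%= ENV['DATABASE_URL'] %&amp;gt;

development:
  &amp;lt;&amp;lt;: *default
```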

&lt;p&gt;After I finished this work I found out there is both an &lt;a href="https://github.com/thepracticaldev/dev.to/issues/299"&gt;issue&lt;/a&gt; and a &lt;a href="https://github.com/thepracticaldev/dev.to/pull/296"&gt;WIP pull request&lt;/a&gt; that already exist. The approach presented in this article is a first pass and needs clean up, but I tried to make it clear about what is going on.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ubuntu:18.04

ADD . /root/dev.to
WORKDIR /root/dev.to/

# Set up to install ruby
RUN apt update &amp;amp;&amp;amp; apt install -y autoconf bison build-essential libssl-dev libyaml-dev libreadline-dev zlib1g-dev libncurses5-dev libffi-dev libgdbm5 libgdbm-dev

# This helps when you run the container in interactive mode
RUN echo 'export PATH=/root/.rbenv/bin:/root/.rbenv/shims:$PATH' &amp;gt;&amp;gt; ~/.bashrc

# install rbenv-installer
RUN apt install -y curl git &amp;amp;&amp;amp; \
    export PATH=/root/.rbenv/bin:/root/.rbenv/shims:$PATH &amp;amp;&amp;amp; \
    curl -fsSL https://github.com/rbenv/rbenv-installer/raw/master/bin/rbenv-installer | bash

# install rbenv
RUN export PATH=/root/.rbenv/bin:/root/.rbenv/shims:$PATH &amp;amp;&amp;amp; \
    rbenv install &amp;amp;&amp;amp; \
    echo 'eval "$(rbenv init -)"' &amp;gt;&amp;gt; ~/.bashrc

# Install gems and yarn
RUN export PATH=/root/.rbenv/bin:/root/.rbenv/shims:$PATH &amp;amp;&amp;amp; \
    gem install bundler &amp;amp;&amp;amp; \
    gem install foreman &amp;amp;&amp;amp; \
    curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - &amp;amp;&amp;amp; \
    echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list &amp;amp;&amp;amp; \
    apt-get update &amp;amp;&amp;amp;\
    apt install -y yarn libpq-dev &amp;amp;&amp;amp; \
    bundle install &amp;amp;&amp;amp; \
    bin/yarn
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Modify this command below with the correct Algolia keys/app id then build and run the docker image.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; dev.to:latest

&lt;span class="c"&gt;# setting the various timeouts to large numbers 10000 since the docker version of this app and database tend to be *extremely* slow.&lt;/span&gt;
&lt;span class="c"&gt;# -p 3000:3000 exposes the port 3000 on the container to the host's port 3000. This lets us access our dev environment on our laptop at http://localhost:3000.&lt;/span&gt;
docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--network&lt;/span&gt; devto &lt;span class="nt"&gt;-p&lt;/span&gt; 3000:3000 &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;RACK_TIMEOUT_WAIT_TIMEOUT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10000 &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;RACK_TIMEOUT_SERVICE_TIMEOUT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10000 &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;STATEMENT_TIMEOUT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10000 &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;ALGOLIASEARCH_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;yourkey &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;ALGOLIASEARCH_APPLICATION_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;yourid &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;ALGOLIASEARCH_SEARCH_ONLY_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;yourotherkey &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DATABASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;postgresql://devto:devto@db:5432/PracticalDeveloper_development dev.to:latest /bin/bash

&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; bin/setup
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; bin/rails server
...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Then open up &lt;a href="http://localhost:3000"&gt;http://localhost:3000&lt;/a&gt; on your laptop.&lt;/p&gt;

&lt;p&gt;If you have trouble, please leave a comment and I'll update this post!&lt;/p&gt;

&lt;p&gt;This is part 1 of a series on getting dev.to running on Kubernetes. Stay tuned for the next article!&lt;/p&gt;


</description>
      <category>docker</category>
      <category>container</category>
      <category>devto</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Logging approaches in Go</title>
      <dc:creator>Chuck Ha</dc:creator>
      <pubDate>Mon, 23 Jul 2018 16:56:21 +0000</pubDate>
      <link>https://dev.to/chuck_ha/logging-approaches-in-go-58dp</link>
      <guid>https://dev.to/chuck_ha/logging-approaches-in-go-58dp</guid>
      <description>&lt;p&gt;During a &lt;a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/"&gt;kubeadm&lt;/a&gt; clean up cycle I found a neat feature of the &lt;a href="https://github.com/golang/glog"&gt;glog&lt;/a&gt; library that made me think about various approaches to logging.&lt;/p&gt;

&lt;p&gt;This flag is what set off my thought process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  -log_backtrace_at value
        when logging hits line file:N, emit a stack trace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I really needed this feature for the ticket I was working on, but it's not a feature I've seen in other logging libraries. &lt;code&gt;glog&lt;/code&gt; takes the stance that logging should be global. You import the &lt;code&gt;glog&lt;/code&gt; library and log whatever lines you need wherever you want. You don't configure &lt;code&gt;glog&lt;/code&gt; with code, you configure it at runtime with flags. That's why glog requires you to run &lt;code&gt;flag.Parse()&lt;/code&gt; before using it. &lt;code&gt;glog&lt;/code&gt; gets the job done quickly as a global logger.&lt;/p&gt;

&lt;p&gt;On the opposite end of the spectrum, you have logging as a proper dependency. The logger is a parameter to your function or exists on the struct. Either way, it's configurable via code which makes writing unit tests a lot easier. You don't have to resort to &lt;a href="https://blog.golang.org/examples"&gt;Go's testable examples&lt;/a&gt; to make sure your application is logging correctly. This approach is good when you rely on the output to be a certain way or depend on logs for more than just debug output. Check out &lt;a href="https://github.com/sirupsen/logrus"&gt;logrus&lt;/a&gt; for a nice library that is easily wrapped to be a proper logging dependency.&lt;/p&gt;

&lt;p&gt;As with almost every tech question, the answer is: choose the method that works for the project at hand. Likely you'll end up somewhere in the middle, using both types of logging, and that's OK. The only important thing is to treat each log type as it deserves: go ahead and skip unit testing of global loggers, and budget extra time for a well-tested library by injecting the logging dependency.&lt;/p&gt;

</description>
      <category>go</category>
      <category>logging</category>
    </item>
    <item>
      <title>Understanding Kubernetes through a concrete example</title>
      <dc:creator>Chuck Ha</dc:creator>
      <pubDate>Fri, 13 Jul 2018 03:29:41 +0000</pubDate>
      <link>https://dev.to/chuck_ha/understand-kubernetes-through-a-concrete-example-7d9</link>
      <guid>https://dev.to/chuck_ha/understand-kubernetes-through-a-concrete-example-7d9</guid>
      <description>&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Date: July 12th, 2018
Kubernetes Version: v1.11.0
📝 is a sidebar
⚠️ is a warning
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kubernetes is a container orchestration system. You can think of a container as an application and a pod as a group of one or more closely related containers. The orchestration part means you tell Kubernetes what containers you want running and it will take care of actually running the containers in your cluster, routing traffic to the correct pods, and &lt;a href="https://kubernetes.io/docs/concepts/"&gt;many other features&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Kubernetes provides a schema for your pod definitions. This means you fill in a well documented template and give it to the Kubernetes cluster. Kubernetes then figures out what will run where, runs your pods and configures the cluster network. If a pod crashes, Kubernetes will notice the system is not in the correct state anymore. Kubernetes will act to return the system to the defined state.&lt;/p&gt;

&lt;p&gt;Let's look at an example: A blogging platform called devtoo.com on a Kubernetes cluster. &lt;/p&gt;

&lt;h2&gt;
  
  
  Components
&lt;/h2&gt;

&lt;p&gt;The first step is to figure out the components of devtoo.com. Let's say these are all the components necessary:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A web server that accepts HTTP traffic from the internet. Examples of web servers include &lt;a href="https://www.nginx.com/"&gt;nginx&lt;/a&gt; and &lt;a href="https://httpd.apache.org/"&gt;apache&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;An application server that loads the rails app into memory and serves requests. This would be the rails application that powers devtoo.com.&lt;/li&gt;
&lt;li&gt;A database to store all of our awesome posts. &lt;a href="https://www.postgresql.org/"&gt;Postgres&lt;/a&gt;, &lt;a href="https://www.mysql.com/"&gt;mysql&lt;/a&gt; and &lt;a href="https://www.mongodb.com/"&gt;MongoDB&lt;/a&gt; are all database examples.&lt;/li&gt;
&lt;li&gt;A cache to bypass the application and database and immediately return a result. Examples of caches include &lt;a href="https://redis.io/"&gt;redis&lt;/a&gt; and &lt;a href="https://memcached.org/"&gt;memcached&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The end goal
&lt;/h2&gt;

&lt;p&gt;The next step is to figure out what the final system should look like. Kubernetes gives you a lot of choice here. The components could each run in their own pod or they could all be put into one pod. I like to start at the simplest place and then fix the solution if it sucks. To me, that means each component will be run in its own pod. A typical web request will enter the system and hit the web server. The web server will ask the cache if it has a result for that endpoint. If it does, the result is returned immediately. If it does not, the request is passed on to the application server. The application server is configured to talk to the database and generate dynamic content which gets sent back to the web browser.&lt;/p&gt;

&lt;h2&gt;
  
  
  Defining the system
&lt;/h2&gt;

&lt;p&gt;Kubernetes maps services to pods. There will be one service for each pod, which allows pods to reference each other by DNS name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;web&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry.hub.docker.com/library/nginx:1.15&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;web&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;web&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The service defines a selector, &lt;code&gt;app: web&lt;/code&gt;. The service will route traffic to any pod that matches that selector. If you look at the pod definition you will see that there is an &lt;code&gt;app: web&lt;/code&gt; label defined on the pod. That means traffic comes into the service on port 80 and gets sent to the nginx pod on the &lt;code&gt;targetPort&lt;/code&gt;, also 80 in this case. The &lt;code&gt;targetPort&lt;/code&gt; and &lt;code&gt;containerPort&lt;/code&gt; must match.&lt;/p&gt;

&lt;p&gt;Here, you wave your magic wand and produce an nginx config, embedded in the nginx image, that checks the cache first and passes the request on to the app server when there is no cached result.&lt;/p&gt;

&lt;p&gt;Here is the cache definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cache&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry.hub.docker.com/library/redis:4.0&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6379&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cache&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cache&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6379&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6379&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the database definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry.hub.docker.com/library/postgres:10.4&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6379&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;database&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cache&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6379&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6379&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Those are all of the dependencies considered for this deployment of devtoo.com. Next, the application itself must be configured. &lt;a href="http://guides.rubyonrails.org/configuring.html#configuring-a-database"&gt;Rails can use an environment variable to connect to a database&lt;/a&gt;. You could define that in the pod YAML like this:&lt;/p&gt;

&lt;p&gt;⚠️ This is super insecure! Kubernetes has much better ways to do this, but I'm omitting them to keep the scope of this post "small".&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;devtoo-com&lt;/span&gt;
    &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DATABASE_URL&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgresql://user1:password1@database/dev_to_db&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry.hub.docker.com/devtoo.com/app:v9001&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3001&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;devtoo-com&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3001&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3001&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
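&lt;p&gt;For reference, a less insecure sketch would read the value from a Kubernetes Secret instead of hard-coding it in the manifest. This assumes a Secret named &lt;code&gt;db-credentials&lt;/code&gt; with a &lt;code&gt;url&lt;/code&gt; key already exists (both names are made up for illustration); only the &lt;code&gt;env&lt;/code&gt; section of the pod changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: db-credentials   # hypothetical Secret name
          key: url               # key inside that Secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;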



&lt;p&gt;The last piece needed is an ingress point, a place where traffic can enter the cluster from the outside world.&lt;/p&gt;

&lt;p&gt;📝 I'm glossing over Ingress controllers because, while required, they are an implementation detail that can be ignored at this level of understanding.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;extensions/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev-to&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;web&lt;/span&gt;
    &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This says that any traffic received at this ingress point will be sent to the service with a name of &lt;code&gt;web&lt;/code&gt; on port 80.&lt;/p&gt;
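&lt;p&gt;To actually create all of these objects, save each manifest to a file and apply it with kubectl (the filenames here are just placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f cache.yaml
kubectl apply -f database.yaml
kubectl apply -f app.yaml
kubectl apply -f ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;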

&lt;p&gt;Now that your cluster is set up, let's trace a packet fetching this blog post. You enter &lt;a href="http://devtoo.com/chuck_ha/this-post"&gt;http://devtoo.com/chuck_ha/this-post&lt;/a&gt; into your browser. &lt;a href="http://devtoo.com"&gt;http://devtoo.com&lt;/a&gt; resolves in DNS to some IP address, which is a load balancer in front of your Kubernetes cluster. The load balancer sends the traffic to your ingress point. Since there is only one service on the ingress, the traffic is sent to the web service, which is mapped to the nginx pod. The nginx container inspects the request and forwards it to the cache service, which is mapped to the redis pod. The redis pod has never seen this URL before, so execution continues from nginx: the request is sent to the application server, where this page is generated, cached, and returned to your web browser.&lt;/p&gt;

&lt;p&gt;Then you click on the 🦄 button! &lt;/p&gt;

&lt;p&gt;📝 A list of things I skipped so you could focus on the meat and not get lost in the details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/reference/builder/"&gt;Building your app container&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io/docs/concepts/configuration/secret/"&gt;Security&lt;/a&gt;, including &lt;a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/"&gt;TLS&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/"&gt;Ingress for the real world&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/"&gt;Workload management&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>go</category>
      <category>programming</category>
    </item>
    <item>
      <title>Can we see analytics for our posts anywhere?</title>
      <dc:creator>Chuck Ha</dc:creator>
      <pubDate>Mon, 09 Jul 2018 17:15:28 +0000</pubDate>
      <link>https://dev.to/chuck_ha/can-we-see-analytics-for-our-posts-anywhere-232p</link>
      <guid>https://dev.to/chuck_ha/can-we-see-analytics-for-our-posts-anywhere-232p</guid>
      <description>&lt;p&gt;I am assuming I can't embed a GA snippet into my post...is it possible to have any insight into the views a post is getting?&lt;/p&gt;

</description>
      <category>help</category>
    </item>
    <item>
      <title>Reading Kubernetes logs</title>
      <dc:creator>Chuck Ha</dc:creator>
      <pubDate>Sun, 08 Jul 2018 17:32:15 +0000</pubDate>
      <link>https://dev.to/chuck_ha/reading-kubernetes-logs-315k</link>
      <guid>https://dev.to/chuck_ha/reading-kubernetes-logs-315k</guid>
      <description>&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Date: July 8th, 2018
Kubernetes Version: v1.11.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Reading logs is an essential part of the toolkit needed to debug a Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes logging
&lt;/h2&gt;

&lt;p&gt;There are generally only two places to look for Kubernetes logs: systemd and Docker. First identify what is managing the service you're interested in, then extract the logs from that manager. For example, if the kubelet is managed by systemd, you need to know how to read systemd logs.&lt;/p&gt;

&lt;h3&gt;
  
  
  systemd logs
&lt;/h3&gt;

&lt;p&gt;Logs for services running on &lt;a href="https://www.freedesktop.org/wiki/Software/systemd/"&gt;systemd&lt;/a&gt; can be viewed with &lt;a href="http://0pointer.de/blog/projects/journalctl.html"&gt;journalctl&lt;/a&gt;. Here is an example of reading the logs of the kubelet, a service generally run through systemd:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;journalctl --unit kubelet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
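&lt;p&gt;Two standard journalctl flags come in handy when debugging: &lt;code&gt;--follow&lt;/code&gt; streams new log lines as they arrive, and &lt;code&gt;--since&lt;/code&gt; limits output to recent entries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;journalctl --unit kubelet --follow
journalctl --unit kubelet --since "10 minutes ago"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;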



&lt;p&gt;If you are using a different service manager, please consult the documentation for how to extract logs for your particular service manager.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker logs
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/overview/components/#master-components"&gt;Control plane components&lt;/a&gt; can be managed by the kubelet using &lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/static-pod/"&gt;static pods&lt;/a&gt;. You can get their logs through &lt;code&gt;kubectl logs &amp;lt;podname&amp;gt;&lt;/code&gt; or if you are on the node where the static pod is running you can access the docker logs directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker logs &amp;lt;container_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is useful when the kube-apiserver is down and therefore &lt;code&gt;kubectl&lt;/code&gt; commands do not work.&lt;/p&gt;
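&lt;p&gt;If you don't have the container ID handy, one way to find it is to filter the output of &lt;code&gt;docker ps&lt;/code&gt; by component name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps | grep kube-apiserver
docker logs &amp;lt;container_id_from_above&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;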

&lt;h2&gt;
  
  
  Learning to read logs
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Get your bearings
&lt;/h3&gt;

&lt;p&gt;Get your bearings by reading logs from a fully functional Kubernetes cluster. You'll get a glimpse into what each component is doing and familiarize yourself with log lines that can be ignored, which will help you notice the lines that might contain useful debugging information.&lt;/p&gt;

&lt;p&gt;Don't be afraid to dig into the source code and see where each log line is coming from. &lt;a href="https://cs.k8s.io"&gt;Kubernetes code search&lt;/a&gt; is really good for this. Copy and paste parts of the log line until you get &lt;a href="https://cs.k8s.io/?q=Unable%20to%20create%20storage%20backend&amp;amp;i=nope&amp;amp;files=&amp;amp;repos=kubernetes"&gt;some hits&lt;/a&gt; and read the surrounding code. You could even use these log lines as the starting point for the strategies I outline in my &lt;a href="https://dev.to/chuck_ha/learning-the-kubernetes-codebase-1324"&gt;Learning the Kubernetes codebase&lt;/a&gt; post to gain a very deep understanding of the component.&lt;/p&gt;

&lt;p&gt;Once you're comfortable reading the logs of a working Kubernetes cluster, it's time to break things. Turn off etcd and watch the logs. Turn it back on and see how each component responds. Do the same with other components and watch what happens. You will start to understand which components talk to which others, and that will help you trace symptoms back to failures faster.&lt;/p&gt;
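&lt;p&gt;On a kubeadm cluster, one simple way to "turn off" etcd is to move its static pod manifest out of the manifest directory; the kubelet will notice and stop the pod. Moving the file back restarts it (paths assume kubeadm defaults):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# stop etcd by removing its static pod manifest
sudo mv /etc/kubernetes/manifests/etcd.yaml /tmp/
# watch how the other components react, then bring etcd back
sudo mv /tmp/etcd.yaml /etc/kubernetes/manifests/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;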

&lt;p&gt;The three most common components you will look at are the kubelet, the kube-apiserver and etcd. I'd suggest focusing on those, but do look at logs from all the components.&lt;/p&gt;

&lt;h3&gt;
  
  
  Examples
&lt;/h3&gt;

&lt;p&gt;These examples come from a &lt;a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/"&gt;kubeadm cluster&lt;/a&gt;. The kubelet is managed by systemd and all control plane components are managed by the kubelet as static pods.&lt;/p&gt;

&lt;h4&gt;
  
  
  kubelet logs
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Jul 08 16:32:51 ip-10-0-25-163 kubelet[20747]: E0708 16:32:51.415697   20747 kubelet_node_status.go:391] Error updating node status, will retry: error getting node "ip-10-0-25-163.us-west-2.compute.internal": Get https://10.0.25.163:6443/api/v1/nodes/ip-10-0-25-163.us-west-2.compute.internal?resourceVersion=0&amp;amp;timeout=10s: context deadline exceeded                                                          
Jul 08 16:33:01 ip-10-0-25-163 kubelet[20747]: W0708 16:33:01.416933   20747 reflector.go:341] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: watch of *v1.Service ended with: very short watch: k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Unexpected watch close - watch lasted less than a second and no items received                                                                                           
Jul 08 16:33:01 ip-10-0-25-163 kubelet[20747]: W0708 16:33:01.417001   20747 reflector.go:341] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: watch of *v1.Pod ended with: very short watch: k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Unexpected watch close - watch lasted less than a second and no items received                                                                               
Jul 08 16:33:01 ip-10-0-25-163 kubelet[20747]: W0708 16:33:01.417031   20747 reflector.go:341] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: watch of *v1.Node ended with: very short watch: k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Unexpected watch close - watch lasted less than a second and no items received                                                                                              
Jul 08 16:33:01 ip-10-0-25-163 kubelet[20747]: E0708 16:33:01.417105   20747 mirror_client.go:88] Failed deleting a mirror pod "etcd-ip-10-0-25-163.us-west-2.compute.internal_kube-system": Delete https://10.0.25.163:6443/api/v1/namespaces/kube-system/pods/etcd-ip-10-0-25-163.us-west-2.compute.internal: read tcp 10.0.25.163:36190-&amp;gt;10.0.25.163:6443: use of closed network connection; some request body already written
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The errors and warnings here are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;deleting a mirror pod failed&lt;/li&gt;
&lt;li&gt;watches fail &lt;/li&gt;
&lt;li&gt;updating node status fails&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These errors together indicate that either etcd or the kube-apiserver is not working. If you know how the kubelet and the kube-apiserver interact, you would be able to tell directly that this is a problem with etcd and not with the kube-apiserver.&lt;/p&gt;

&lt;p&gt;But in the interest of examples, let's go look at the kube-apiserver logs.&lt;/p&gt;

&lt;h4&gt;
  
  
  kube-apiserver logs
&lt;/h4&gt;

&lt;p&gt;The kube-apiserver is currently crashing and restarting over and over. The logs look like this before the kube-apiserver is restarted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0708 16:38:26.448081       1 server.go:703] external host was not specified, using 10.0.25.163
I0708 16:38:26.448218       1 server.go:145] Version: v1.11.0
I0708 16:38:26.775022       1 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0708 16:38:26.775040       1 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0708 16:38:26.775722       1 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0708 16:38:26.775737       1 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
F0708 16:38:36.778177       1 storage_decorator.go:57] Unable to create storage backend: config (&amp;amp;{ /registry [https://127.0.0.1:2379] /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/etcd/ca.crt true false 1000 0xc42029ce80 &amp;lt;nil&amp;gt; 5m0s 1m0s}), err (dial tcp 127.0.0.1:2379: connect: connection refused)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last failure line is the most important. This tells us that the kube-apiserver is not able to connect to etcd because the connection was refused.&lt;/p&gt;

&lt;p&gt;If we restart etcd, everything comes back up and all components are happy again.&lt;/p&gt;

&lt;h4&gt;
  
  
  etcd logs
&lt;/h4&gt;

&lt;p&gt;etcd generally logs nothing during normal behavior, unlike the kubelet, which likes to tell you what it's doing constantly, even when everything is fine.&lt;/p&gt;

&lt;p&gt;Here are some example etcd logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2018-07-08 16:42:20.351537 I | etcdserver: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10)
2018-07-08 16:42:21.039713 I | raft: 8e9e05c52164694d is starting a new election at term 4
2018-07-08 16:42:21.039745 I | raft: 8e9e05c52164694d became candidate at term 5
2018-07-08 16:42:21.039777 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 5
2018-07-08 16:42:21.039792 I | raft: 8e9e05c52164694d became leader at term 5
2018-07-08 16:42:21.039803 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 5
2018-07-08 16:42:21.040764 I | etcdserver: published {Name:ip-10-0-25-163.us-west-2.compute.internal ClientURLs:[https://127.0.0.1:2379]} to cluster cdf818194e3a8c32
2018-07-08 16:42:21.041029 I | embed: ready to serve client requests
2018-07-08 16:42:21.041449 I | embed: serving client requests on 127.0.0.1:2379
WARNING: 2018/07/08 16:42:21 Failed to dial 127.0.0.1:2379: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate"; please retry.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The final warning is caused by a bug in a particular version of etcd but has no impact on functionality. Here is the &lt;a href="https://github.com/coreos/etcd/issues/8603"&gt;GitHub issue for that bug&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  everything else
&lt;/h4&gt;

&lt;p&gt;Take a look at all the other components while you're there and you'll build both an understanding of each component and the ability to recognize when something is wrong.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>go</category>
      <category>logs</category>
      <category>learning</category>
    </item>
    <item>
      <title>Learning the Kubernetes codebase</title>
      <dc:creator>Chuck Ha</dc:creator>
      <pubDate>Thu, 05 Jul 2018 22:27:15 +0000</pubDate>
      <link>https://dev.to/chuck_ha/learning-the-kubernetes-codebase-1324</link>
      <guid>https://dev.to/chuck_ha/learning-the-kubernetes-codebase-1324</guid>
      <description>&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Date: July 5th, 2018
Most recent Kubernetes release: 1.11.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kubernetes is a big piece of software clocking in just shy of &lt;a href="https://www.openhub.net/p/kubernetes"&gt;2 million lines of code&lt;/a&gt;. It can be intimidating to get started, but once you get the hang of it, you'll be amazed at how many tickets start to make sense and how you can quickly identify and fix bugs!&lt;/p&gt;

&lt;h2&gt;
  
  
  The very beginning
&lt;/h2&gt;

&lt;p&gt;If you are just starting your Kubernetes journey, I would encourage you to read &lt;a href="https://kubernetes.io/docs/concepts/overview/components/"&gt;this documentation&lt;/a&gt; and get a high level understanding of the major components. To call something a Kubernetes cluster it needs most of the components listed there running on at least one node. Conveniently, all of the code for the components lives in the &lt;a href="https://github.com/kubernetes/kubernetes"&gt;Kubernetes monorepo&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The first step
&lt;/h2&gt;

&lt;p&gt;Now that you know what each component does, you can pick a component that sounds interesting to you. If you're having trouble picking a component, one thing I like to do is pick the thing that is farthest outside of my comfort zone. For example, if networking is a weak spot of yours, pick the kube-proxy. If you've got the network chops but don't really do web servers, pick the kube-apiserver.&lt;/p&gt;

&lt;h2&gt;
  
  
  One foot after the other
&lt;/h2&gt;

&lt;p&gt;One tool every coder needs is a good code editor. The feature that helps me the most when reading code is jump-to-definition. I use &lt;a href="https://code.visualstudio.com/"&gt;VSCode&lt;/a&gt; with the typical Go plugins, and that fulfills the requirement to my satisfaction. You could use vim, GoLand, or emacs; as long as you are comfortable with the tool and can jump between functions effectively, you'll be ready to go.&lt;/p&gt;

&lt;h2&gt;
  
  
  One small step
&lt;/h2&gt;

&lt;p&gt;Kubernetes lives at &lt;a href="https://github.com/kubernetes/kubernetes"&gt;https://github.com/kubernetes/kubernetes&lt;/a&gt;. Usually, the import path for Go code that lives on GitHub is the URL without the scheme (https://). So you might expect &lt;code&gt;go get -d github.com/kubernetes/kubernetes&lt;/code&gt; to work, but that's going to confuse your tools. The package name is actually &lt;code&gt;k8s.io/kubernetes&lt;/code&gt;, so clone the repo with &lt;code&gt;go get -d k8s.io/kubernetes&lt;/code&gt;. If you don't have Go installed, follow the instructions on &lt;a href="https://golang.org/"&gt;golang.org&lt;/a&gt;.&lt;/p&gt;
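&lt;p&gt;Putting that together (assuming Go is installed and your GOPATH is set up; &lt;code&gt;-d&lt;/code&gt; downloads without building):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go get -d k8s.io/kubernetes
cd $(go env GOPATH)/src/k8s.io/kubernetes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;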

&lt;h2&gt;
  
  
  Another small step
&lt;/h2&gt;

&lt;p&gt;Open up the Kubernetes directory with your trusted code editor. Take a look inside &lt;code&gt;cmd/&lt;/code&gt;. This is where you find the &lt;em&gt;entry point&lt;/em&gt; to all of the components that live inside Kubernetes. Every component in Kubernetes runs as a command-line program, and the &lt;a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/"&gt;references&lt;/a&gt; are all online. Here is the &lt;code&gt;kube-proxy&lt;/code&gt;'s &lt;a href="https://github.com/kubernetes/kubernetes/blob/release-1.11/cmd/kube-proxy/proxy.go"&gt;entry point&lt;/a&gt;. This pattern of &lt;code&gt;cmd/&amp;lt;component&amp;gt;/&amp;lt;component&amp;gt;.go&lt;/code&gt; is going to be pretty much the same across every component.&lt;/p&gt;
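&lt;p&gt;For example, a few of the entry points you'll find under &lt;code&gt;cmd/&lt;/code&gt; in the 1.11 tree (some file names drop the &lt;code&gt;kube-&lt;/code&gt; prefix, so the pattern is approximate):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cmd/kube-apiserver/apiserver.go
cmd/kube-proxy/proxy.go
cmd/kubelet/kubelet.go
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;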

&lt;h2&gt;
  
  
  Repeat forever
&lt;/h2&gt;

&lt;p&gt;Now start exploring the file! Chances are the first file you open under &lt;code&gt;cmd/&amp;lt;component&amp;gt;/&amp;lt;component&amp;gt;.go&lt;/code&gt; is going to be very small. Read through each line. Do you know what it does? If you go to the definition of the function call on &lt;a href="https://github.com/kubernetes/kubernetes/blob/release-1.11/cmd/kube-proxy/proxy.go#L38"&gt;this line&lt;/a&gt; you will realize how deep this code goes. You have a lot to read!&lt;/p&gt;

&lt;p&gt;Now do this until you are satisfied with your understanding. It will take a while and that's totally normal. This stuff is not easy to pick up. Heck, you could even get a post or two out about what you found useful, interesting rabbit holes, and things you found surprising about the Kubernetes code base!&lt;/p&gt;

&lt;p&gt;If you're trying to break into Kubernetes development, there's no better way than to read lots of the code!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>go</category>
      <category>programming</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
