<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jace</title>
    <description>The latest articles on DEV Community by Jace (@mjace).</description>
    <link>https://dev.to/mjace</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F222815%2Fe33d1fdb-3305-4873-97cf-69f06bf48c59.jpeg</url>
      <title>DEV Community: Jace</title>
      <link>https://dev.to/mjace</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mjace"/>
    <language>en</language>
    <item>
      <title>The path to CKA, and some tips.</title>
      <dc:creator>Jace</dc:creator>
      <pubDate>Sun, 12 Apr 2020 10:26:34 +0000</pubDate>
      <link>https://dev.to/mjace/the-path-to-cka-and-some-tips-1723</link>
      <guid>https://dev.to/mjace/the-path-to-cka-and-some-tips-1723</guid>
      <description>&lt;p&gt;Yes, I passed the CKA exam.👍&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--05H_X5Lg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/8w9EA5v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--05H_X5Lg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/8w9EA5v.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since I have already been working with kubernetes for years, the way I prepared for the CKA and the schedule I made might not suit beginners. Still, some of the tips and resources are worth sharing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/"&gt;Certified Kubernetes Administrator (CKA) with Practice Tests&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I highly recommend this course. It contains sufficient lectures for the CKA exam. Most of all, the hands-on labs give good coverage of the skills you need for the test.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;The Tasks tab in the official k8s docs.
There are lots of hands-on tasks in the Tasks tab of the official documentation, and some of them are important for the test.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Tips and advice
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;At least know how to use tmux!&lt;br&gt;
Your test environment might turn out to be &lt;strong&gt;extremely slow&lt;/strong&gt;.&lt;br&gt;
There was one question where I had to wait a long time for a command to finish executing.&lt;br&gt;
On the first attempt, I waited for about 5 minutes, but the command was still stuck, so I had no choice but to kill it and skip to other questions. When I had finished all the remaining questions, I still had about 15~20 minutes left, so I typed the same correct command again and waited for the execution to finish. But in the end, those 15~20 minutes were not enough.&lt;br&gt;
If I had known the basic usage of tmux, I could have kept that command running while double-checking my answers in the meantime.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Get familiar with &lt;code&gt;kubectl run&lt;/code&gt;&lt;br&gt;
It can generate YAML templates for pods and deployments, and it will save you lots of time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/scriptautomate/tips-for-the-certified-kubernetes-exams-cka-and-ckad-49mn"&gt;Tips for The Certified Kubernetes Exams: CKA and CKAD in 2020&lt;/a&gt;&lt;br&gt;
Special thanks for Derek who leaved a tips in my first article about preparing CKA.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
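As a concrete illustration of the second tip, `kubectl run` (and `kubectl create`) can emit ready-to-edit YAML with a client-side dry run. This is a hedged sketch: the flags assume a reasonably recent kubectl, where `--dry-run=client` replaced the bare `--dry-run` flag.

```shell
# Generate a Pod manifest without creating anything on the cluster
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml

# Likewise for a Deployment
kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > deploy.yaml
```

Editing one of these generated files is usually much faster in the exam than writing a manifest from scratch.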

</description>
      <category>cka</category>
      <category>kubernetes</category>
      <category>preparecka</category>
    </item>
    <item>
      <title>Cmd and Entrypoint in Docker container.</title>
      <dc:creator>Jace</dc:creator>
      <pubDate>Sun, 19 Jan 2020 11:08:25 +0000</pubDate>
      <link>https://dev.to/mjace/cmd-and-entrypoint-in-docker-container-2ai8</link>
      <guid>https://dev.to/mjace/cmd-and-entrypoint-in-docker-container-2ai8</guid>
<description>&lt;p&gt;For a long time, I could not tell the exact difference between &lt;code&gt;CMD&lt;/code&gt; and &lt;code&gt;ENTRYPOINT&lt;/code&gt; in a Dockerfile,&lt;br&gt;
until I had a chance to take a look at their definitions.&lt;/p&gt;

&lt;p&gt;I will explain this based on a few different scenarios.&lt;/p&gt;



&lt;p&gt;Let's say I have an Ubuntu-based container image named ubuntu-sleeper that executes &lt;code&gt;sleep&lt;/code&gt; for 5 seconds, with the Dockerfile shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ubuntu

CMD sleep 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When I type &lt;code&gt;docker run ubuntu-sleeper&lt;/code&gt;, the container will automatically execute &lt;code&gt;sleep 5&lt;/code&gt; and sleep for 5 seconds.&lt;/p&gt;

&lt;p&gt;But what if I'd like to customize the number of seconds to sleep?&lt;/p&gt;

&lt;p&gt;As we know, we can overwrite the default CMD by appending the command after the &lt;code&gt;docker run&lt;/code&gt;.&lt;br&gt;
For example, &lt;code&gt;docker run ubuntu-sleeper sleep 10&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;But that looks clumsy; we would like it to look like this:&lt;br&gt;
&lt;code&gt;docker run ubuntu-sleeper 10&lt;/code&gt;&lt;br&gt;
which passes only the number of seconds we need.&lt;/p&gt;

&lt;p&gt;To achieve this, we can use &lt;strong&gt;ENTRYPOINT&lt;/strong&gt; in our Dockerfile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ubuntu

ENTRYPOINT ["sleep"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Docker will take the command in ENTRYPOINT and append the CMD (or the run-time arguments) to it.&lt;br&gt;
So, when we type &lt;code&gt;docker run ubuntu-sleeper 10&lt;/code&gt;,&lt;br&gt;
Docker will combine the ENTRYPOINT &lt;code&gt;sleep&lt;/code&gt; with the &lt;code&gt;10&lt;/code&gt; we passed via the run command.&lt;/p&gt;

&lt;p&gt;But what if I run this container without passing the CMD?&lt;br&gt;
For example, just run &lt;code&gt;docker run ubuntu-sleeper&lt;/code&gt;.&lt;br&gt;
As we know, this will make the container run &lt;code&gt;sleep&lt;/code&gt; without any argument, and you will get an error saying the operand is missing.&lt;/p&gt;

&lt;p&gt;So how do you define a default value for this container?&lt;br&gt;
In this case, you can define ENTRYPOINT and CMD at the same time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ubuntu

ENTRYPOINT ["sleep"]
CMD ["5"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So when you simply run &lt;code&gt;docker run ubuntu-sleeper&lt;/code&gt;, the container will sleep for 5 seconds by default.&lt;br&gt;
And you can override the &lt;code&gt;CMD&lt;/code&gt; value with &lt;code&gt;docker run ubuntu-sleeper 20&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;PS: You can even override the ENTRYPOINT with the &lt;code&gt;--entrypoint&lt;/code&gt; option of &lt;code&gt;docker run&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --entrypoint sleep2.0 ubuntu-sleeper 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
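The precedence rules above can be summarized with a small shell sketch (illustrative only; `assemble` is a made-up helper that mimics how Docker builds the final command line, it is not part of Docker):

```shell
# ENTRYPOINT is fixed; runtime arguments replace CMD when present.
entrypoint="sleep"   # from: ENTRYPOINT ["sleep"]
default_cmd="5"      # from: CMD ["5"]

assemble() {
  # "$@" stands for whatever follows the image name in `docker run`
  if [ "$#" -gt 0 ]; then
    echo "$entrypoint $*"
  else
    echo "$entrypoint $default_cmd"
  fi
}

assemble      # prints: sleep 5
assemble 10   # prints: sleep 10
```

In other words, CMD is just the default tail of the command line, and anything you type after the image name replaces it, while ENTRYPOINT stays in place unless you pass `--entrypoint`.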



</description>
      <category>docker</category>
      <category>container</category>
    </item>
    <item>
      <title>TIL: The path to CKA Week 5.</title>
      <dc:creator>Jace</dc:creator>
      <pubDate>Sun, 05 Jan 2020 09:13:28 +0000</pubDate>
      <link>https://dev.to/mjace/til-the-path-to-cka-week-5-57po</link>
      <guid>https://dev.to/mjace/til-the-path-to-cka-week-5-57po</guid>
<description>&lt;p&gt;After finishing the &lt;a href="https://github.com/kelseyhightower/kubernetes-the-hard-way"&gt;Kubernetes installation the hard way&lt;/a&gt;,&lt;br&gt;
I moved on to the &lt;strong&gt;CKA with Practice Tests&lt;/strong&gt; course that I bought on Udemy.&lt;/p&gt;

&lt;p&gt;This course contains lots of online hands-on labs, which improve your kubernetes troubleshooting skills.&lt;/p&gt;




&lt;h2&gt;
  
  
  ETCD
&lt;/h2&gt;

&lt;p&gt;etcd is a key-value store that holds all of the data describing the state of a k8s cluster.&lt;/p&gt;

&lt;p&gt;You can deploy etcd by downloading a released binary and executing it.&lt;br&gt;
Or, if you install your kubernetes cluster with &lt;code&gt;kubeadm&lt;/code&gt;, etcd will run as a container. For example, you can list the keys of your cluster with&lt;br&gt;
&lt;code&gt;kubectl exec etcd-master -n kube-system -- etcdctl get / --prefix --keys-only&lt;/code&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  kube-apiserver
&lt;/h2&gt;

&lt;p&gt;In kubernetes, there are several operations/components related to the kube-apiserver: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Authenticate users&lt;/li&gt;
&lt;li&gt;Validate requests&lt;/li&gt;
&lt;li&gt;Retrieve data&lt;/li&gt;
&lt;li&gt;Interact with etcd&lt;/li&gt;
&lt;li&gt;Scheduler&lt;/li&gt;
&lt;li&gt;Kubelet&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can see your kube-apiserver configuration by checking &lt;code&gt;cat /etc/systemd/system/kube-apiserver.service&lt;/code&gt;, or &lt;br&gt;
&lt;code&gt;cat /etc/kubernetes/manifests/kube-apiserver.yaml&lt;/code&gt; if your cluster was installed by &lt;code&gt;kubeadm&lt;/code&gt;.&lt;br&gt;
Or you can search for a running kube-apiserver process with &lt;br&gt;
&lt;code&gt;ps aux | grep kube-apiserver&lt;/code&gt; and see all the configured options.&lt;/p&gt;




&lt;h2&gt;
  
  
  kube-controller-manager
&lt;/h2&gt;

&lt;p&gt;When you install the kube-controller-manager, several different controllers are installed with it, e.g. the deployment controller, the replication controller, etc.&lt;/p&gt;

&lt;p&gt;And if your cluster was installed by &lt;code&gt;kubeadm&lt;/code&gt;, like the other components, the kube-controller-manager will run as a pod in the kube-system namespace on the master node.&lt;br&gt;
For other non-kubeadm setups, the configuration can be checked in &lt;code&gt;/etc/systemd/system/kube-controller-manager.service&lt;/code&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  kube-scheduler
&lt;/h2&gt;

&lt;p&gt;In k8s, the kube-scheduler decides, and only decides, which pod goes to which worker node,&lt;br&gt;
by considering the pod's resource requirements, taints, node selectors, affinity, etc.&lt;/p&gt;

&lt;p&gt;Again, for a cluster set up by &lt;code&gt;kubeadm&lt;/code&gt;, the kube-scheduler configuration will be in &lt;code&gt;/etc/kubernetes/manifests/kube-scheduler.yaml&lt;/code&gt;,&lt;br&gt;
and it runs as a pod in the kube-system namespace.&lt;/p&gt;




&lt;h2&gt;
  
  
  kubelet
&lt;/h2&gt;

&lt;p&gt;kubelet is where the &lt;code&gt;dirty work&lt;/code&gt; happens in a kubernetes cluster.&lt;br&gt;
It receives the scheduler's placement decisions (via the api-server) and interacts with the container runtime to start or delete containers.&lt;br&gt;
If you use kubeadm to install your k8s cluster, &lt;strong&gt;it WILL NOT install the kubelet&lt;/strong&gt;.&lt;br&gt;
&lt;strong&gt;You must download and install it on your own.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can search for your kubelet process with &lt;br&gt;
&lt;code&gt;ps aux | grep kubelet&lt;/code&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  kube-proxy
&lt;/h2&gt;

&lt;p&gt;kube-proxy sets up the iptables rules for the services in kubernetes.&lt;br&gt;
Since service IPs and pod IPs are in different subnets,&lt;br&gt;
kube-proxy's job is to monitor the services and the pod IPs, and maintain the proper rules to forward the traffic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Notes.
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a pod with the &lt;code&gt;kubectl run&lt;/code&gt; command.&lt;br&gt;
&lt;code&gt;kubectl run nginx --image=nginx --restart=Never&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a deployment with &lt;code&gt;kubectl run&lt;/code&gt;.&lt;br&gt;
&lt;code&gt;kubectl run nginx --image=nginx --restart=Always&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a job with &lt;code&gt;kubectl run&lt;/code&gt;.&lt;br&gt;
&lt;code&gt;kubectl run nginx --image=nginx --restart=OnFailure&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>cka</category>
      <category>kubernetes</category>
      <category>preparecka</category>
    </item>
    <item>
      <title>Ansible notes: group_names</title>
      <dc:creator>Jace</dc:creator>
      <pubDate>Mon, 30 Dec 2019 06:20:46 +0000</pubDate>
      <link>https://dev.to/mjace/ansible-notes-groupnames-2jb5</link>
      <guid>https://dev.to/mjace/ansible-notes-groupnames-2jb5</guid>
      <description>&lt;p&gt;In ansible, &lt;code&gt;group_names&lt;/code&gt; is a list (array) of all the groups the current host is in.&lt;br&gt;&lt;br&gt;
This can be used in templates using Jinja2 syntax to make template source files that vary based on the group membership (or role) of the host:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{% if 'webserver' in group_names %}
   # some part of a configuration file that only applies to webservers
{% endif %}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;For example, suppose I must run a task only on certain hosts within a group of hosts.&lt;/p&gt;

&lt;p&gt;The inventory hosts file is shown as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;all&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="s"&gt;node1  ip=192.168.1.1&lt;/span&gt;
&lt;span class="s"&gt;node2  ip=192.168.1.2&lt;/span&gt;

&lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;delpy_infra&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="s"&gt;node1&lt;/span&gt;
&lt;span class="s"&gt;node2&lt;/span&gt;

&lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;special&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="s"&gt;node2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And here's the task. &lt;br&gt;
You can use &lt;code&gt;group_names&lt;/code&gt; to target those specific hosts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Speciall ops for special nodes&lt;/span&gt;
  &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/usr/bin/special.sh"&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;'special'&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;in&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;group_names&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
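To sanity-check group membership while debugging, a task like the following (an illustrative example using Ansible's real `debug` module, not a task from the original playbook) prints the groups each host belongs to:

```yaml
- name: Show the groups this host belongs to
  debug:
    msg: "{{ inventory_hostname }} is in groups {{ group_names }}"
```

Running this across the inventory above would show that node2 is in both `special` and the deploy group, which is exactly what the `when` condition tests.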



</description>
      <category>ansible</category>
      <category>devops</category>
    </item>
    <item>
      <title>TIL: The path to CKA Week 4. </title>
      <dc:creator>Jace</dc:creator>
      <pubDate>Tue, 24 Dec 2019 14:49:32 +0000</pubDate>
      <link>https://dev.to/mjace/til-the-path-to-cka-week-4-1io6</link>
      <guid>https://dev.to/mjace/til-the-path-to-cka-week-4-1io6</guid>
<description>&lt;p&gt;Last week I finished the course in the CKA bundle.&lt;br&gt;
It's time to move on to the kubernetes installation &lt;strong&gt;the hard way&lt;/strong&gt;😎&lt;/p&gt;




&lt;p&gt;&lt;a href="https://github.com/kelseyhightower/kubernetes-the-hard-way"&gt;Kubernetes The Hard Way&lt;/a&gt; is a tutorial walks you through setting up Kubernetes step by step. Meaning without using kubeadm, kubespray or any other tools.&lt;/p&gt;

&lt;p&gt;The prerequisites of this tutorial include GCP.&lt;br&gt;
Since I had never used GCP before, I took this as an opportunity.&lt;br&gt;
After I finished the tutorial, my newly created GCP account still had not been charged a cent.&lt;/p&gt;

&lt;p&gt;I think I'll just skip the detailed steps and note what I learned. &lt;br&gt;
Those interested in this tutorial can just click the link above.&lt;br&gt;
Here are the labs of the tutorial, which are also the common steps to set up a kubernetes cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installing the Client Tools&lt;/li&gt;
&lt;li&gt;Provisioning Compute Resources&lt;/li&gt;
&lt;li&gt;Provisioning the CA and Generating TLS Certificates&lt;/li&gt;
&lt;li&gt;Generating Kubernetes Configuration Files for Authentication&lt;/li&gt;
&lt;li&gt;Generating the Data Encryption Config and Key&lt;/li&gt;
&lt;li&gt;Bootstrapping the etcd Cluster&lt;/li&gt;
&lt;li&gt;Bootstrapping the Kubernetes Control Plane&lt;/li&gt;
&lt;li&gt;Bootstrapping the Kubernetes Worker Nodes&lt;/li&gt;
&lt;li&gt;Configuring kubectl for Remote Access&lt;/li&gt;
&lt;li&gt;Provisioning Pod Network Routes&lt;/li&gt;
&lt;li&gt;Deploying the DNS Cluster Add-on&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;TIL : GCP compute region/zone&lt;br&gt;
At the beginning of the tutorial, I had to prepare the GCP environment,&lt;br&gt;
and there are two commands to set up the region and zone for your GCP compute resources.&lt;br&gt;
A region can be seen as a data center in a different area of the world,&lt;br&gt;
and zones are isolated locations within a region, kinda like different servers in the same data center.&lt;br&gt;
So for an HA deployment, you may have multiple instances of an application running in different zones within a certain region.&lt;/p&gt;
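The two commands in question look like this (the region and zone values are examples; substitute your own):

```shell
# Set the default region and zone for Compute Engine resources
gcloud config set compute/region us-west1
gcloud config set compute/zone us-west1-c

# Verify the active configuration
gcloud config list
```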

&lt;p&gt;And for those wondering whether the gcloud command has auto-completion,&lt;br&gt;
you can try: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/sdk/docs/interactive-gcloud"&gt;https://cloud.google.com/sdk/docs/interactive-gcloud&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=ezod_8QDT7Q"&gt;https://www.youtube.com/watch?v=ezod_8QDT7Q&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://stackoverflow.com/questions/46236580/how-to-enable-shell-command-completion-for-gcloud"&gt;https://stackoverflow.com/questions/46236580/how-to-enable-shell-command-completion-for-gcloud&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>cka</category>
      <category>kubernetes</category>
      <category>preparecka</category>
    </item>
    <item>
      <title>TIL: The path to CKA Week 3.</title>
      <dc:creator>Jace</dc:creator>
      <pubDate>Thu, 19 Dec 2019 14:08:28 +0000</pubDate>
      <link>https://dev.to/mjace/til-the-path-to-cka-week-3-5c9h</link>
      <guid>https://dev.to/mjace/til-the-path-to-cka-week-3-5c9h</guid>
      <description>&lt;p&gt;TIL: The path to CKA Week 3.&lt;/p&gt;

&lt;h2&gt;
  
  
  Helm
&lt;/h2&gt;

&lt;p&gt;The helm tool packages a Kubernetes application using a series of YAML files into a chart, or package. This allows for simple sharing between users, tuning using a templating scheme, as well as provenance tracking, among other things.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FsZP4PlW.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FsZP4PlW.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Helm v2 consists of two components.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A server called Tiller, which runs inside the k8s cluster.&lt;/li&gt;
&lt;li&gt;A client called Helm, which runs on the local machine or any machine that is able to talk to the Tiller server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Helm v2 uses the Tiller pod to call the k8s API to deploy the pods. The new Helm v3 does not deploy a Tiller pod.&lt;/p&gt;
&lt;h3&gt;
  
  
  Chart Contents
&lt;/h3&gt;

&lt;p&gt;A chart is an archived set of kubernetes resource manifests that make up a distributed application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── Chart.yaml
├── README.md
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── configmap.yaml
│   ├── deployment.yaml
│   ├── pvc.yaml
│   ├── secrets.yaml
│   └── svc.yaml
└── values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Chart.yaml&lt;br&gt;
The Chart.yaml file contains some metadata about the chart, like its name, version, and so on.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;values.yaml&lt;br&gt;
The values.yaml file contains keys and values that are used to generate the release in your cluster. These values are substituted into the resource manifests using the Go templating syntax.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;templates&lt;br&gt;
The templates directory contains the resource manifests that make up the application.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
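For instance, a value declared in values.yaml is referenced from a template with Go templating like this (a generic illustration, not files from any particular chart):

```yaml
# values.yaml
image:
  repository: nginx
  tag: "1.17"

# templates/deployment.yaml (fragment) would then reference it as:
#   image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

At install time, Helm renders the template against values.yaml (or any overrides passed on the command line) to produce the final manifests.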

&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;p&gt;Accessing the API&lt;br&gt;
To perform any action in a Kubernetes cluster, you need to access the API and go through three main steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Authentication:&lt;/li&gt;
&lt;li&gt;Authorization (ABAC or RBAC):&lt;/li&gt;
&lt;li&gt;Admission Control.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FPhmqVCa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FPhmqVCa.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once a request reaches the API server securely, it will first go through any authentication module that has been configured. The request can be rejected if authentication fails or it gets authenticated and passed to the authorization step.&lt;/p&gt;

&lt;p&gt;At the authorization step, the request will be checked against existing policies. It will be authorized if the user has the permissions to perform the requested actions. Then, the requests will go through the last step of admission. In general, admission controllers will check the actual content of the objects being created and validate them before admitting the request.&lt;/p&gt;

&lt;p&gt;In addition to these steps, the requests reaching the API server over the network are encrypted using TLS. This needs to be properly configured using SSL certificates. If you use kubeadm, this configuration is done for you.&lt;/p&gt;
&lt;h2&gt;
  
  
  ABAC, RBAC and Webhook
&lt;/h2&gt;
&lt;h3&gt;
  
  
  ABAC
&lt;/h3&gt;

&lt;p&gt;ABAC stands for Attribute Based Access Control. It was the first authorization model in Kubernetes that allowed administrators to implement the right policies. Today, RBAC is becoming the default authorization mode.&lt;/p&gt;

&lt;p&gt;For example, the policy file shown below authorizes user Bob to read pods in the namespace foobar:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;apiVersion"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;abac.authorization.kubernetes.io/v1beta1"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;kind"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Policy"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;spec"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bob"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;namespace"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;foobar"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;resource"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pods"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;readonly"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;true&lt;/span&gt;     
    &lt;span class="pi"&gt;}&lt;/span&gt;
&lt;span class="pi"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  RBAC
&lt;/h3&gt;

&lt;p&gt;RBAC stands for Role Based Access Control.&lt;/p&gt;

&lt;p&gt;All resources are modeled API objects in Kubernetes, from Pods to Namespaces. They also belong to API Groups, such as core and apps.These resources allow operations such as Create, Read, Update, and Delete (CRUD), which we have been working with so far. Operations are called verbs inside YAML files. Adding to these basic components, we will add more elements of the API, which can then be managed via RBAC.&lt;/p&gt;

&lt;p&gt;Rules are operations which can act upon an API group. Roles are a group of rules which affect, or scope, a single namespace, whereas ClusterRoles have a scope of the entire cluster.&lt;/p&gt;

&lt;p&gt;Each operation can act upon one of three subjects, which are User Accounts which don't exist as API objects, Service Accounts, and Groups which are known as clusterrolebinding when using kubectl.&lt;/p&gt;

&lt;p&gt;RBAC is then writing rules to allow or deny operations by users, roles or groups upon resources.&lt;/p&gt;

&lt;p&gt;While RBAC can be complex, the basic flow is to create a certificate for a user. As a user is not an API object of Kubernetes, we are requiring outside authentication, such as OpenSSL certificates. After generating the certificate against the cluster certificate authority, we can set that credential for the user using a context.&lt;/p&gt;

&lt;p&gt;Roles can then be used to configure an association of &lt;strong&gt;apiGroups&lt;/strong&gt;, &lt;strong&gt;resources&lt;/strong&gt;, and the &lt;strong&gt;verbs&lt;/strong&gt; allowed to them. The user can then be bound to a role limiting what and where they can work in the cluster.&lt;/p&gt;

&lt;p&gt;Here is a summary of the RBAC process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Determine or create namespace&lt;/li&gt;
&lt;li&gt;Create certificate credentials for user&lt;/li&gt;
&lt;li&gt;Set the credentials for the user to the namespace using a context&lt;/li&gt;
&lt;li&gt;Create a role for the expected task set&lt;/li&gt;
&lt;li&gt;Bind the user to the role&lt;/li&gt;
&lt;li&gt;Verify the user has limited access&lt;/li&gt;
&lt;/ul&gt;
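The summary above maps onto kubectl commands roughly as follows (a sketch; the namespace `dev`, user `bob`, and certificate file names are placeholders, and the certificate must already be signed by the cluster CA):

```shell
# Namespace plus credentials and a context scoped to it
kubectl create namespace dev
kubectl config set-credentials bob --client-certificate=bob.crt --client-key=bob.key
kubectl config set-context bob-dev --cluster=kubernetes --namespace=dev --user=bob

# Create a role for the expected task set and bind the user to it
kubectl create role pod-reader --verb=get,list,watch --resource=pods -n dev
kubectl create rolebinding bob-pod-reader --role=pod-reader --user=bob -n dev

# Verify the user has limited access
kubectl auth can-i list pods --as=bob -n dev     # expect: yes
kubectl auth can-i delete pods --as=bob -n dev   # expect: no
```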




&lt;p&gt;Finally, the last quiz at the end of this course.😂&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FbA1b4Uk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FbA1b4Uk.png"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cka</category>
      <category>kubernetes</category>
      <category>preparecka</category>
    </item>
    <item>
      <title>TIL: The path to CKA Week 2.</title>
      <dc:creator>Jace</dc:creator>
      <pubDate>Mon, 16 Dec 2019 12:19:45 +0000</pubDate>
      <link>https://dev.to/mjace/til-the-path-to-cka-week-2-4p88</link>
      <guid>https://dev.to/mjace/til-the-path-to-cka-week-2-4p88</guid>
      <description>&lt;p&gt;Same as last week, keep working on the tutorial of Kubernetes Fundamentals.&lt;/p&gt;




&lt;h2&gt;
  
  
  Services
&lt;/h2&gt;

&lt;p&gt;The Service is a very important object in kubernetes. Services are agents which connect Pods together, or provide access from outside of the cluster.&lt;br&gt;
The common types of Service are &lt;code&gt;ClusterIP&lt;/code&gt;, &lt;code&gt;NodePort&lt;/code&gt; and &lt;code&gt;LoadBalancer&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here's the official explanation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ClusterIP&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The ClusterIP service type is the default, and only provides access internally (except if manually creating an external endpoint). The range of ClusterIP used is defined via an API server startup option.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;NodePort&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The NodePort type is great for debugging, or when a static IP address is necessary, such as opening a particular address through a firewall. The NodePort range is defined in the cluster configuration.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;LoadBalancer&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The LoadBalancer service was created to pass requests to a cloud provider like GKE or AWS. Private cloud solutions also may implement this service type if there is a cloud provider plugin, such as with CloudStack and OpenStack. Even without a cloud provider, the address is made available to public traffic, and packets are spread among the Pods in the deployment automatically.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And the last one &lt;strong&gt;ExternalName&lt;/strong&gt;, which is new to me. 💡&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A newer service is ExternalName, which is a bit different. It has no selectors, nor does it define ports or endpoints. It allows the return of an alias to an external service. The redirection happens at the DNS level, not via a proxy or forward. This object can be useful for services not yet brought into the Kubernetes cluster. A simple change of the type in the future would redirect traffic to the internal objects.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In short, ExternalName can point to an external DNS CNAME, unlike the other Service types, which point to a set of pods.&lt;/p&gt;
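An ExternalName Service manifest looks like this (illustrative; the Service name and the CNAME target are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-database
spec:
  type: ExternalName
  externalName: db.example.com
```

Pods resolving `my-database` inside the cluster get a CNAME to `db.example.com`; switching the type later would redirect that traffic to in-cluster objects without changing client code.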


&lt;h2&gt;
  
  
  Volumes and Data
&lt;/h2&gt;

&lt;p&gt;There's one project called Rook mentioned in this section.&lt;br&gt;
Rook can be seen as a storage orchestrator that provides a common framework across storage solutions like Ceph, NFS, Cassandra, etc.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Rook turns distributed storage systems into self-managing, self-scaling, self-healing storage services. It automates the tasks of a storage administrator&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  Scheduling
&lt;/h2&gt;

&lt;p&gt;Scheduling is key in kubernetes. &lt;br&gt;
There are a lot of strategies you can use when scheduling pods in kubernetes.&lt;br&gt;
One is called &lt;strong&gt;pod affinity&lt;/strong&gt;, and there are several kinds of pod affinity rules.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/strong&gt;&lt;br&gt;
means that the Pod will not be scheduled on a node unless the following operator is true. If the operator changes to become false in the future, the Pod will continue to run. This could be seen as a hard rule.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;preferredDuringSchedulingIgnoredDuringExecution&lt;/strong&gt;&lt;br&gt;
This will choose a node with the desired setting before those without. If no properly-labeled nodes are available, the Pod will execute anyway.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;podAffinity / podAntiAffinity&lt;/strong&gt;&lt;br&gt;
These keep Pods together, or keep them on different nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;topologyKey&lt;/strong&gt;&lt;br&gt;
This rule is new to me.&lt;br&gt;
The topologyKey allows a general grouping of Pod deployments. Affinity (or the inverse anti-affinity) will try to run on nodes with the declared topology key and running Pods with a particular label. The topologyKey could be any legal key, with some important considerations. &lt;br&gt;
In short, we can use &lt;code&gt;topologyKey&lt;/code&gt; to schedule our Pods based on topology, such as at the rack or region level.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
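
&lt;p&gt;As a sketch of the rules above combined (the &lt;code&gt;app: cache&lt;/code&gt; label is hypothetical), a hard anti-affinity rule that spreads replicas across nodes might look like:&lt;/p&gt;

```yaml
# Sketch: hard podAntiAffinity keyed on the node hostname.
apiVersion: v1
kind: Pod
metadata:
  name: cache
  labels:
    app: cache
spec:
  affinity:
    podAntiAffinity:
      # Hard rule: never schedule two "app=cache" Pods on the same node.
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - cache
        topologyKey: kubernetes.io/hostname
  containers:
  - name: cache
    image: redis
```

&lt;p&gt;Swapping &lt;code&gt;topologyKey&lt;/code&gt; for a zone or region label would spread the Pods per zone or region instead of per node.&lt;/p&gt;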

&lt;p&gt;💡To check scheduler events on the current k8s cluster, you can use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get events
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;






&lt;h2&gt;
  
  
  Logging and Troubleshooting
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Ephemeral Containers&lt;/strong&gt;&lt;br&gt;
Ephemeral containers are a new alpha feature in v1.16&lt;br&gt;
that lets you attach a container to an existing Pod.&lt;br&gt;
You may be able to use the kubectl attach command to join an existing process within the container. This can be helpful instead of kubectl exec, which executes a new process. The functionality of the attached process depends entirely on what you are attaching to.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl debug buggypod --image debian --attach&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cluster Start Sequence&lt;/strong&gt;&lt;br&gt;
If you built the cluster using &lt;strong&gt;kubeadm&lt;/strong&gt;, the sequence begins with systemd.&lt;br&gt;
The kubelet then creates a Pod from every YAML file inside the &lt;strong&gt;staticPodPath&lt;/strong&gt;,&lt;br&gt;
which is also where the api-server, kube-scheduler, and kube-controller-manager YAMLs are stored.&lt;br&gt;
In most cases your staticPodPath will be &lt;code&gt;/etc/kubernetes/manifests&lt;/code&gt;,&lt;br&gt;
so you can find the YAMLs of the api-server, kube-scheduler, and other default services there.&lt;/p&gt;
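
&lt;p&gt;As a sketch, any manifest dropped into that directory becomes a static Pod run directly by the kubelet (the file name and image here are illustrative):&lt;/p&gt;

```yaml
# Sketch: /etc/kubernetes/manifests/static-web.yaml (hypothetical file).
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
```

&lt;p&gt;The kubelet starts it without going through the scheduler, and it shows up in the API as a mirror Pod with the node name appended.&lt;/p&gt;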

</description>
      <category>cka</category>
      <category>kubernetes</category>
      <category>preparecka</category>
    </item>
    <item>
      <title>TIL: The path to CKA Week 1.</title>
      <dc:creator>Jace</dc:creator>
      <pubDate>Mon, 09 Dec 2019 13:33:42 +0000</pubDate>
      <link>https://dev.to/mjace/til-the-path-to-cka-week-1-2hel</link>
      <guid>https://dev.to/mjace/til-the-path-to-cka-week-1-2hel</guid>
<description>&lt;p&gt;This week, I started the k8s Fundamentals course that I bought in the CKA bundle.&lt;/p&gt;

&lt;p&gt;The course starts with the history and basics of container orchestration, which I'm quite familiar with, so I didn't spend too much time on it.&lt;/p&gt;

&lt;p&gt;It is followed by the Kubernetes Architecture section, which covers the components that run on a k8s cluster and how they work.&lt;br&gt;
For me, this is just a quick review, nothing new.&lt;br&gt;
So I went through this section quickly and scored 100% on its knowledge check. &lt;/p&gt;

&lt;p&gt;To do the labs in the course I had to prepare a k8s environment, so I started 2 VMs with VirtualBox on my PC.&lt;br&gt;
There is a lab during the course that teaches you how to install k8s with kubeadm.&lt;br&gt;
&lt;strong&gt;If you plan to set up a practice environment with VirtualBox, remember to turn off swap.&lt;/strong&gt;&lt;br&gt;
One small complaint: the CNCF course material still has room for improvement. Distributing bash and YAML snippets as PDF is not a decent way to do it.&lt;/p&gt;

&lt;p&gt;After I finished the k8s installation, I moved on to the &lt;strong&gt;API and Access&lt;/strong&gt; section.&lt;br&gt;
And there is a TIL here 😀.&lt;br&gt;
You can use --v to set the verbosity level for kubectl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl --v=6 get nodes
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can even see how kubectl sends the request to the api-server.&lt;/p&gt;

&lt;p&gt;Also, I noticed that there is a namespace called &lt;code&gt;kube-node-lease&lt;/code&gt; created by default.&lt;br&gt;
&lt;code&gt;kube-node-lease&lt;/code&gt; keeps the worker nodes' lease information.&lt;br&gt;
Here's the introduction to leases from the official documentation:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In versions of Kubernetes prior to 1.13, NodeStatus is the heartbeat from the node. Node lease feature is enabled by default since 1.14 as a beta feature (feature gate NodeLease, KEP-0009). When node lease feature is enabled, each node has an associated Lease object in kube-node-lease namespace that is renewed by the node periodically, and both NodeStatus and node lease are treated as heartbeats from the node. Node leases are renewed frequently while NodeStatus is reported from node to master only when there is some change or enough time has passed (default is 1 minute, which is longer than the default timeout of 40 seconds for unreachable nodes). Since node lease is much more lightweight than NodeStatus, this feature makes node heartbeat significantly cheaper from both scalability and performance perspectives.&lt;/p&gt;
&lt;/blockquote&gt;
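
&lt;p&gt;For a picture of what these heartbeat objects look like, a node's Lease is roughly the following (a sketch; the node name and timestamps are illustrative):&lt;/p&gt;

```yaml
# Sketch: one Lease object per node lives in kube-node-lease.
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: worker-1            # named after the node
  namespace: kube-node-lease
spec:
  holderIdentity: worker-1
  leaseDurationSeconds: 40
  renewTime: "2019-12-09T13:30:00.000000Z"
```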




&lt;h2&gt;
  
  
  API Objects
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;You can turn on and off the scheduling to a node with the kubectl cordon/uncordon commands.&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl cordon -h
Mark node as unschedulable.

Examples:
  # Mark node "foo" as unschedulable.
  kubectl cordon foo

Options:
      --dry-run=false: If true, only print the object that would be sent, without sending it.
  -l, --selector='': Selector (label query) to filter on

Usage:
  kubectl cordon NODE [options]

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;I've been using taints to turn scheduling on/off for a node for a long time. &lt;br&gt;
cordon/uncordon is quite handy. Good to know about it.&lt;/p&gt;
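
&lt;p&gt;For comparison, my understanding is that &lt;code&gt;kubectl cordon&lt;/code&gt; roughly amounts to setting the following on the Node object (a sketch; the taint is added by the control plane automatically on recent versions):&lt;/p&gt;

```yaml
# Sketch: what cordoning a node effectively sets on its spec.
spec:
  unschedulable: true
  taints:
  - key: node.kubernetes.io/unschedulable
    effect: NoSchedule
```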

&lt;p&gt;So far, I've finished 7 sections out of 17 of the CNCF k8s Fundamentals course.&lt;/p&gt;

&lt;p&gt;I must say the CNCF training system has lots of room for improvement.&lt;br&gt;
Some of the quizzes are simply broken, and some have incorrect answers that will mislead new learners. &lt;br&gt;
Some even have no correct answer at all; the system just kept telling me I was wrong even after I tried every option. I've reported 2 issues in the class forum.&lt;/p&gt;

</description>
      <category>cka</category>
      <category>kubernetes</category>
      <category>preparecka</category>
    </item>
    <item>
      <title>TIL: The path to CKA Week 0.</title>
      <dc:creator>Jace</dc:creator>
      <pubDate>Sat, 07 Dec 2019 06:46:54 +0000</pubDate>
      <link>https://dev.to/mjace/til-the-path-to-cka-week-0-10ne</link>
      <guid>https://dev.to/mjace/til-the-path-to-cka-week-0-10ne</guid>
<description>&lt;p&gt;I've been using Kubernetes and working on kubernetes-related projects for about 2 years.&lt;br&gt;
So, it's time to set a goal to test my understanding of k8s; and CKA is a very iconic certification for kubernetes admins.&lt;/p&gt;

&lt;p&gt;So I set a goal for myself to pass the CKA as soon as possible, since I would like to take other certification exams in the future.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's start with my background.&lt;/strong&gt;&lt;br&gt;
I've been using container-related technologies for about 3~4 years.&lt;br&gt;
My current work is building NFVI on top of the kubernetes platform.&lt;br&gt;
In short, I'm familiar with kubernetes deployment/management and able to use several kinds of plugins like multus, sriov-cni, cmk, nfd, etc.&lt;/p&gt;

&lt;p&gt;But it has been about 2 years since I last built up k8s entirely by hand, with k8s v1.9. So it's better for me to prepare for the CKA from the start.&lt;/p&gt;

&lt;p&gt;During Black Friday and Cyber Monday,&lt;br&gt;
I bought the &lt;strong&gt;CKA + k8s Fundamentals&lt;/strong&gt; bundle and &lt;strong&gt;Certified Kubernetes Administrator (CKA) with Practice Tests&lt;/strong&gt; on udemy.&lt;/p&gt;

&lt;p&gt;I plan to start with the k8s Fundamentals course, then the Certified Kubernetes Administrator (CKA) with Practice Tests.&lt;br&gt;
I will also go through Kubernetes the hard way:&lt;br&gt;
&lt;a href="https://github.com/kelseyhightower/kubernetes-the-hard-way"&gt;https://github.com/kelseyhightower/kubernetes-the-hard-way&lt;/a&gt;&lt;br&gt;
Then, exercise the Tasks part of the k8s.io documentation.&lt;br&gt;
Last but not least, schedule the CKA exam.&lt;/p&gt;

&lt;p&gt;So, let's go. Wish me luck. 🤘&lt;/p&gt;

</description>
      <category>cka</category>
      <category>kubernetes</category>
      <category>preparecka</category>
    </item>
    <item>
      <title>Horizontal Pod Autoscale with Custom Prometheus Metrics🚀</title>
      <dc:creator>Jace</dc:creator>
      <pubDate>Fri, 01 Nov 2019 09:46:50 +0000</pubDate>
      <link>https://dev.to/mjace/horizontal-pod-autoscale-with-custom-prometheus-metrics-5gem</link>
      <guid>https://dev.to/mjace/horizontal-pod-autoscale-with-custom-prometheus-metrics-5gem</guid>
      <description>&lt;h2&gt;
  
  
  0. Introduction
&lt;/h2&gt;

&lt;p&gt;This article shares how to implement HPA&lt;br&gt;
-- Horizontal Pod Autoscaling -- on Kubernetes with custom Prometheus metrics.&lt;/p&gt;

&lt;p&gt;First, why do we need custom metrics at all?&lt;br&gt;
&lt;strong&gt;Simple: the built-in ones are not enough.&lt;/strong&gt;&lt;br&gt;
Scaling purely on the built-in CPU/RAM metrics is actually quite inaccurate.&lt;/p&gt;

&lt;p&gt;See:&lt;br&gt;
&lt;a href="http://www.brendangregg.com/blog/2017-05-09/cpu-utilization-is-wrong.html"&gt;Cpu utilization is wrong&lt;/a&gt;&lt;br&gt;
&lt;a href="http://www.hpts.ws/papers/2007/Cockcroft_HPTS-Useless.pdf"&gt;Useless metrics&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So for metrics that vanilla k8s does not provide, such as the number of users on a service or connection latency,&lt;br&gt;
we have to use custom metrics to drive scaling.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The steps below are based on Kubernetes v1.14.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  1. HPA via Custom Metrics
&lt;/h2&gt;

&lt;p&gt;Inevitably, I have to explain how HPA scales via custom metrics.&lt;br&gt;
Scaling purely on CPU or RAM needs no explanation here; the official docs and plenty of articles online already cover it clearly.&lt;br&gt;
&lt;del&gt;Actually, I'm just too lazy to write it.&lt;/del&gt;&lt;br&gt;
&lt;a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/"&gt;k8s.doc&lt;/a&gt;&lt;br&gt;
&lt;a href="https://ithelp.ithome.com.tw/articles/10197046"&gt;ithome&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The key points are how HPA obtains custom metrics, and how we expose our own metrics to the HPA controller.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The overall flow looks like this:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fe8gAY6x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/UvumfGO.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fe8gAY6x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/UvumfGO.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's start with the custom metrics part. In Kubernetes, operations on custom metrics currently go through&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;custom-metrics.metrics.k8s.io/v1beta1 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This API endpoint is where metrics are registered and fetched.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In other words, the custom metrics our Pods expose must somehow be registered on this API endpoint.&lt;br&gt;
The HPA controller can then fetch our custom metrics from this endpoint and do its job.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  2. Prometheus Operator &amp;amp; Prometheus-adapter
&lt;/h2&gt;

&lt;p&gt;Let's start by setting up the environment. Here I use Prometheus to collect metrics centrally:&lt;br&gt;
the application exposes metrics to Prometheus,&lt;br&gt;
and a &lt;a href="https://github.com/DirectXMan12/k8s-prometheus-adapter"&gt;Prometheus-adapter&lt;/a&gt; registers those metrics on the endpoint above.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The application could also register custom metrics directly with the k8s custom API server,&lt;br&gt;
but funneling all metrics through Prometheus first makes it easier to integrate a Grafana dashboard.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  2.1 Install Prometheus Operator
&lt;/h3&gt;

&lt;p&gt;The Prometheus Operator is developed and packaged by CoreOS.&lt;br&gt;
Here is the official introduction:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The Prometheus Operator for Kubernetes provides easy monitoring definitions for Kubernetes services and deployment and management of Prometheus instances.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The biggest difference from a plain Prometheus install is that the Prometheus Operator also installs Grafana&lt;br&gt;
and adds a ServiceMonitor resource.&lt;/p&gt;

&lt;p&gt;The ServiceMonitor feeds the metrics a Pod exposes into Prometheus,&lt;br&gt;
somewhat like Prometheus &lt;a href="https://codeblog.dotsandbrackets.com/scraping-application-metrics-prometheus/"&gt;scrape&lt;/a&gt; configs.&lt;/p&gt;

&lt;p&gt;We can install the Prometheus Operator directly with helm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; po stable/prometheus-operator
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Once installed, open the Prometheus web UI and check that every target can be scraped correctly.&lt;br&gt;
If kube-proxy is misbehaving, see &lt;a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator#kubeproxy"&gt;this note&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  2.2 Install Prometheus-adapter
&lt;/h3&gt;

&lt;p&gt;With the Prometheus Operator installed, we have a pipeline from Pod metrics into Prometheus,&lt;br&gt;
but we still need to register the metrics held in Prometheus on the k8s custom metrics API endpoint.&lt;/p&gt;

&lt;p&gt;Likewise, prometheus-adapter can be installed with helm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; pa stable/prometheus-adapter
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;One extra caveat: &lt;code&gt;prometheus.url&lt;/code&gt; and &lt;code&gt;prometheus.port&lt;/code&gt; must match the Prometheus Operator we just installed.&lt;br&gt;
You can set these parameters at install time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; pa &lt;span class="nt"&gt;--set&lt;/span&gt; prometheus.url&lt;span class="o"&gt;=&lt;/span&gt;http://po-prometheus-operator-prometheus.default.svc,prometheus.port&lt;span class="o"&gt;=&lt;/span&gt;9090 stable/prometheus-adapter
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Once installed, you can use this command to pull data from the custom metrics API endpoint.&lt;br&gt;
If it spits out a pile of output, the installation succeeded:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get &lt;span class="nt"&gt;--raw&lt;/span&gt; /apis/custom.metrics.k8s.io/v1beta1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If the call returns but you see no metrics, your Prometheus Operator installed successfully&lt;br&gt;
and registered the custom metrics API endpoint correctly, but the adapter did not pull any data from Prometheus to register back.&lt;/p&gt;



&lt;p&gt;At this point, with the Prometheus Operator and the Prometheus adapter added,&lt;br&gt;
the original diagram becomes this:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lXNiUbJZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/DLJNj3y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lXNiUbJZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/DLJNj3y.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, let's look at how to expose metrics into Prometheus.&lt;/p&gt;


&lt;h2&gt;
  
  
  3. Pod exposes custom metrics.
&lt;/h2&gt;

&lt;p&gt;There are roughly a few ways a Pod can expose metrics:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Emit metrics via a library, e.g. &lt;a href="https://github.com/pilosus/flask_prometheus_metrics/"&gt;flask_prometheus_metrics&lt;/a&gt;, which exports Flask-related metrics for you&lt;/li&gt;
&lt;li&gt;Emit metrics yourself in the format Prometheus expects, as the sample we use below does&lt;/li&gt;
&lt;li&gt;Let an Ingress or service mesh expose metrics for you &lt;em&gt;(less customizable)&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  3.1 Start a sample Pod.
&lt;/h3&gt;

&lt;p&gt;Create &lt;strong&gt;sample-deploy.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-app&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-app&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-app&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;luxas/autoscale-demo:v0.1.2&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metrics-provider&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
          &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;





&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; sample-deploy.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Create &lt;strong&gt;svc.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-app&lt;/span&gt;
    &lt;span class="na"&gt;release&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;po&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-svc&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-app&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;





&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; svc.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Once the Pod is up, you can inspect its metrics with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get service sample-svc &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{ .spec.clusterIP }'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/metrics

Output &lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt;
&lt;span class="c"&gt;# HELP http_requests_total The amount of requests served by the server in total&lt;/span&gt;
&lt;span class="c"&gt;# TYPE http_requests_total counter&lt;/span&gt;
http_requests_total 1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  3.2 ServiceMonitor
&lt;/h3&gt;

&lt;p&gt;As introduced earlier, the ServiceMonitor pours the data a Pod emits into Prometheus,&lt;br&gt;
similar to a scrape.&lt;/p&gt;

&lt;p&gt;Create &lt;strong&gt;service-monitor.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;monitoring.coreos.com/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceMonitor&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-app&lt;/span&gt;
    &lt;span class="na"&gt;release&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;po&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-app&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;endpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
  &lt;span class="na"&gt;namespaceSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchNames&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-app&lt;/span&gt;
      &lt;span class="na"&gt;release&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;po&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;





&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; service-monitor.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;At this step, open the Prometheus dashboard and check whether the target we defined shows up under Targets.&lt;br&gt;
Here is the ServiceMonitor &lt;a href="https://github.com/coreos/prometheus-operator/blob/master/Documentation/troubleshooting.md#overview-of-servicemonitor-tagging-and-related-elements"&gt;troubleshooting guide&lt;/a&gt;;&lt;br&gt;
in practice it almost always comes down to labels that don't match.&lt;/p&gt;
&lt;h2&gt;
  
  
  4. Double Check
&lt;/h2&gt;

&lt;p&gt;At this point the whole path is finally connected.&lt;br&gt;
Query the apiserver to check whether our metric has been registered:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get &lt;span class="nt"&gt;--raw&lt;/span&gt; &lt;span class="s2"&gt;"/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Note: we installed prometheus-adapter with only the default config. If your metric does not show up,&lt;br&gt;
or you want some computation (such as a rate) applied before it is registered back to k8s,&lt;br&gt;
see &lt;a href="https://github.com/DirectXMan12/k8s-prometheus-adapter#why-isnt-my-metric-showing-up"&gt;this FAQ&lt;/a&gt;.&lt;/p&gt;
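
&lt;p&gt;For reference, an adapter rule that exposes a per-second rate looks roughly like this (a sketch based on the adapter docs; label names can differ between adapter versions):&lt;/p&gt;

```yaml
# Sketch: prometheus-adapter rule exposing rate(http_requests_total)
# on the custom metrics API as "http_requests".
rules:
- seriesQuery: 'http_requests_total{namespace!="",pod!=""}'
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    matches: "^(.*)_total$"
    as: "${1}"
  metricsQuery: 'sum(rate(&lt;&lt;.Series&gt;&gt;{&lt;&lt;.LabelMatchers&gt;&gt;}[2m])) by (&lt;&lt;.GroupBy&gt;&gt;)'
```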
&lt;h2&gt;
  
  
  5. Create HPA
&lt;/h2&gt;

&lt;p&gt;After all that work, we can finally define our HPA.&lt;/p&gt;

&lt;p&gt;Create and apply &lt;strong&gt;sample-hpa.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HorizontalPodAutoscaler&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;autoscaling/v2beta1&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;scaleTargetRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# point the HPA at the sample application&lt;/span&gt;
    &lt;span class="c1"&gt;# you created above&lt;/span&gt;
    &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sample-app&lt;/span&gt;
  &lt;span class="c1"&gt;# autoscale between 1 and 10 replicas&lt;/span&gt;
  &lt;span class="na"&gt;minReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;maxReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
  &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# use a "Pods" metric, which takes the average of the&lt;/span&gt;
  &lt;span class="c1"&gt;# given metric across all pods controlled by the autoscaling target&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pods&lt;/span&gt;
    &lt;span class="na"&gt;pods&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# use the metric that you used above: pods/http_requests&lt;/span&gt;
      &lt;span class="na"&gt;metricName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http_requests&lt;/span&gt;
      &lt;span class="c1"&gt;# target 500 milli-requests per second,&lt;/span&gt;
      &lt;span class="c1"&gt;# which is 1 request every two seconds&lt;/span&gt;
      &lt;span class="na"&gt;targetAverageValue&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;500m&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Then check the HPA:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get hpa
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
sample-app   Deployment/sample-app   0/500m    1         10        1          24h

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  6. Trigger Scale Up.
&lt;/h2&gt;

&lt;p&gt;First, open two terminals to watch the HPA and the deployment respectively:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;watch kubectl get hpa
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;





&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;watch kubectl get deploy sample-app
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Then fire off an infinite curl loop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do  &lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="s2"&gt;"http://&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get service sample-svc &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{ .spec.clusterIP }'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Barring surprises, you will see the HPA trigger and scale up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5Yo-OyW---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/8dBDopW.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5Yo-OyW---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/8dBDopW.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Misc.
&lt;/h2&gt;

&lt;p&gt;Originally I planned to build an HPA based on HTTP request latency, so I&lt;br&gt;
took &lt;a href="https://github.com/pilosus/flask_prometheus_metrics/"&gt;flask_prometheus_metrics&lt;/a&gt; as the example.&lt;br&gt;
I did more or less get it working, but it involves writing metricsQuery expressions for the Prometheus adapter,&lt;br&gt;
so for reasons of length I won't cover it here.&lt;br&gt;
I also found that the adapter's metricsQuery has real limitations:&lt;br&gt;
it can only take a rate or histogram of a single metric and register the result on the k8s custom API endpoint,&lt;br&gt;
unlike Grafana, where we can combine and compute over many metrics to get the figure we want.&lt;/p&gt;

&lt;p&gt;So I shared this simpler example instead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bottom line: the best approach is for the Pod itself to emit the exact metric you want registered to k8s.&lt;/strong&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Prometheus Operator reinstall fails&lt;br&gt;
When reinstalling the Prometheus Operator, you may hit CRD-related errors.&lt;br&gt;
First clean everything out with &lt;code&gt;helm delete --purge&lt;/code&gt;,&lt;br&gt;
then delete all the CRDs:&lt;br&gt;
&lt;a href="https://github.com/coreos/prometheus-operator#removal"&gt;https://github.com/coreos/prometheus-operator#removal&lt;/a&gt;&lt;br&gt;
Then, when reinstalling, create the CRDs manually first:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.32/example/prometheus-operator-crd/alertmanager.crd.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.32/example/prometheus-operator-crd/prometheus.crd.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.32/example/prometheus-operator-crd/prometheusrule.crd.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.32/example/prometheus-operator-crd/servicemonitor.crd.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.32/example/prometheus-operator-crd/podmonitor.crd.yaml
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Then tell helm install not to create the CRDs:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; po stable/prometheus-operator &lt;span class="nt"&gt;--set&lt;/span&gt; prometheusOperator.createCustomResource&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The kube-proxy target in my Prometheus dashboard shows an error&lt;br&gt;
&lt;a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator#kubeproxy"&gt;https://github.com/helm/charts/tree/master/stable/prometheus-operator#kubeproxy&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/docs/walkthrough.md"&gt;https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/docs/walkthrough.md&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/coreos/prometheus-operator"&gt;https://github.com/coreos/prometheus-operator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/DirectXMan12/k8s-prometheus-adapter"&gt;https://github.com/DirectXMan12/k8s-prometheus-adapter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator"&gt;https://github.com/helm/charts/tree/master/stable/prometheus-operator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=gSiGFH4ZnS8"&gt;https://www.youtube.com/watch?v=gSiGFH4ZnS8&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>prometheus</category>
      <category>autoscale</category>
    </item>
  </channel>
</rss>
