<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: MyCareersFuture</title>
    <description>The latest articles on DEV Community by MyCareersFuture (@mcf).</description>
    <link>https://dev.to/mcf</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F1864%2F1cf2feaf-cad7-453a-8f50-16beb06b65e5.png</url>
      <title>DEV Community: MyCareersFuture</title>
      <link>https://dev.to/mcf</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mcf"/>
    <language>en</language>
    <item>
      <title>Setting up ZSH on Android</title>
      <dc:creator>Kai Hong</dc:creator>
      <pubDate>Sun, 20 Dec 2020 09:50:59 +0000</pubDate>
      <link>https://dev.to/mcf/setting-up-zsh-on-android-d2l</link>
      <guid>https://dev.to/mcf/setting-up-zsh-on-android-d2l</guid>
      <description>&lt;p&gt;One day I was out and about seizing the day, when I suddenly saw on the news that a critical zero-day bug has been unleashed and I urgently needed to patch my servers, but I don't have my laptop with me! &lt;/p&gt;

&lt;p&gt;Has that ever happened to you? &lt;em&gt;Well me neither&lt;/em&gt;, but &lt;strong&gt;just in case&lt;/strong&gt; it ever does, here's how you can set up a proper ZSH terminal on your Android device!&lt;/p&gt;

&lt;h2&gt;
  
  
  What you need
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;1 x Android device&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1 x Internet Connection&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;2 x 1 min&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Steps 🚶
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Step 1&lt;/span&gt;
pkg &lt;span class="nb"&gt;install &lt;/span&gt;zsh

&lt;span class="c"&gt;# Step 2 (https://ohmyz.sh/#install)&lt;/span&gt;
sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yep, that's all you need to get ZSH on your Android device. For any other packages, you can use &lt;code&gt;pkg&lt;/code&gt; or &lt;code&gt;apt-get&lt;/code&gt;. Termux provides a slew of useful utilities by default, and you can expand on that with Termux APIs, allowing you to do things like retrieving SMS messages, getting the device location, and more. You could use SMS as an out-of-band way of triggering a smart home action, for example.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note that&lt;/em&gt; in order to use Termux APIs, you first need to install the Termux:API companion app from the Play Store, then run &lt;code&gt;pkg install termux-api&lt;/code&gt;. Then make sure that the Termux:API application has enough permissions to do what you want.&lt;/p&gt;
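
&lt;p&gt;As a rough sketch of what those helpers look like (the command names come from the &lt;code&gt;termux-api&lt;/code&gt; package; they only do real work inside Termux on an actual device, so the snippet guards for that):&lt;/p&gt;

```shell
# Termux:API helpers print JSON; they exist only inside Termux on Android,
# so guard for environments where the package is not installed.
if command -v termux-battery-status; then
  termux-battery-status            # battery level and status as JSON
  termux-location -p network       # a coarse location fix as JSON
  termux-sms-list -l 10            # the last 10 SMS messages as JSON
else
  echo "termux-api not available on this machine"
fi
```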

&lt;p&gt;But wait, what if the limited Linux functionality that Termux provides isn't enough for you? &lt;a href="https://github.com/termux/proot-distro" rel="noopener noreferrer"&gt;How about installing Ubuntu on your phone?&lt;/a&gt; Someone created &lt;a href="https://wiki.termux.com/wiki/PRoot" rel="noopener noreferrer"&gt;PRoot&lt;/a&gt;, a user-space implementation of &lt;code&gt;chroot&lt;/code&gt;, which makes running Ubuntu on Android pretty easy and straightforward (unlike the janky dual-boot script days).&lt;/p&gt;

&lt;h2&gt;
  
  
  My setup 🤓
&lt;/h2&gt;

&lt;p&gt;Having plain old vanilla ZSH is fine and all, but you can spice your life up with some useful plugins. To make installation easier, I rely on a plugin manager called &lt;a href="https://getantibody.github.io/" rel="noopener noreferrer"&gt;Antibody&lt;/a&gt;. To get started, follow the instructions on their site; they're really well written.&lt;/p&gt;

&lt;p&gt;Using the static loading method (because it's faster), this is my list of plugins in my &lt;code&gt;.zsh_plugins.txt&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;zsh-users/zsh-completions
zsh-users/zsh-syntax-highlighting
zsh-users/zsh-history-substring-search
zsh-users/zsh-autosuggestions
mafredri/zsh-async
sindresorhus/pure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
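
&lt;p&gt;For reference, the static-loading flow looks roughly like this (paths follow the Antibody defaults; the guard is only there so the snippet degrades gracefully where antibody isn't installed):&lt;/p&gt;

```shell
# Compile the plugin list once into plain "source" statements; static
# loading avoids resolving plugins on every shell start.
if command -v antibody; then
  cat ~/.zsh_plugins.txt | antibody bundle | tee ~/.zsh_plugins.sh
else
  echo "antibody is not installed yet"
fi
# Then add this line to your ~/.zshrc so the compiled file is loaded:
#   source ~/.zsh_plugins.sh
```

Re-run the compile step whenever you change `.zsh_plugins.txt`.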



&lt;p&gt;I think the plugins are pretty self-explanatory except for the last two, which together provide the minimalist theme I use (&lt;code&gt;pure&lt;/code&gt; depends on &lt;code&gt;zsh-async&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Two other plugins I use are&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://github.com/wting/autojump" rel="noopener noreferrer"&gt;autojump&lt;/a&gt;: quickly switch between directories based on history (had to manually install)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/mroth/scmpuff" rel="noopener noreferrer"&gt;scmpuff&lt;/a&gt;: numbered git files and nice alises. I had to compile this, which forced me to install golang (it works!)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fl36acbe2o2f44cm6yuvm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fl36acbe2o2f44cm6yuvm.jpg" alt="Termux ZSH screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  My use case 🦄
&lt;/h2&gt;

&lt;p&gt;Why go through all this effort to have a nice shell experience on Android? All of this was done to set me up for a &lt;a href="https://blog.lordofgeeks.com/2020/12/productive-2-weeks-in-reservist/" rel="noopener noreferrer"&gt;Productive 2 weeks in reservist&lt;/a&gt;. I wanted to see whether it was possible to develop on an Android tablet.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Spoiler: it worked bloody well&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;An article on developing on a tablet is coming soon, where I'll also share how this setup was useful for making quick changes to my &lt;a href="https://lordofgeeks.com/" rel="noopener noreferrer"&gt;portfolio&lt;/a&gt; site, as well as for miscellaneous SSH tasks on my VPS.&lt;/p&gt;

&lt;p&gt;In summary, it's &lt;strong&gt;really easy&lt;/strong&gt; to set up a decent shell experience on your Android devices these days (hint: iOS too 😏). Have fun!&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>android</category>
      <category>zsh</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Getting started with grafana and prometheus for kubernetes metrics</title>
      <dc:creator>Ryan</dc:creator>
      <pubDate>Thu, 03 Sep 2020 07:56:59 +0000</pubDate>
      <link>https://dev.to/mcf/getting-started-with-grafana-and-prometheus-for-metric-monitoring-3e3j</link>
      <guid>https://dev.to/mcf/getting-started-with-grafana-and-prometheus-for-metric-monitoring-3e3j</guid>
      <description>&lt;p&gt;In the &lt;a href="https://dev.to/mcf/getting-started-in-deploying-grafana-and-prometheus-2ac3"&gt;previous post&lt;/a&gt;, we've gotten Grafana up and running with a cloudwatch datasource. While it provides us with many insights on AWS resources, it doesn't tell us how our applications are doing in our Kubernetes cluster. Knowing the resources our applications consume can help prevent disasters, such as when applications consume all the RAM on the node, causing it to no longer function, and we now have dead nodes and applications.&lt;/p&gt;

&lt;p&gt;To view these metrics on our Grafana dashboard, we can add a Prometheus datasource and have Prometheus collect metrics from our nodes and applications. We will deploy Prometheus using helm, and explain more along the way.&lt;/p&gt;

&lt;h1&gt;
  
  
  Table of contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Requirements&lt;/li&gt;
&lt;li&gt;
The quick 5-minute install

&lt;ul&gt;
&lt;li&gt;What you get&lt;/li&gt;
&lt;li&gt;Setup&lt;/li&gt;
&lt;li&gt;Storage space&lt;/li&gt;
&lt;li&gt;Installation&lt;/li&gt;
&lt;li&gt;Connecting Grafana to Prometheus&lt;/li&gt;
&lt;li&gt;Adding Dashboards&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

How does it work?

&lt;ul&gt;
&lt;li&gt;Node Metrics&lt;/li&gt;
&lt;li&gt;Application Metrics&lt;/li&gt;
&lt;li&gt;So how is this configured?&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Wrapping up&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Requirements
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes cluster, preferably AWS EKS&lt;/li&gt;
&lt;li&gt;Helm&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  The quick 5-minute install
&lt;/h1&gt;

&lt;h2&gt;
  
  
  What you get
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Prometheus Server&lt;/li&gt;
&lt;li&gt;Prometheus Node Exporter&lt;/li&gt;
&lt;li&gt;Prometheus Alert Manager&lt;/li&gt;
&lt;li&gt;Prometheus Push Gateway&lt;/li&gt;
&lt;li&gt;Kube State Metrics&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Storage space
&lt;/h3&gt;

&lt;p&gt;Before we begin, it is worth mentioning the storage requirements of Prometheus. The Prometheus server runs with a persistent volume (PV) attached, which its time-series database uses to store the metrics it collects in the &lt;code&gt;/data&lt;/code&gt; folder. Note that we have set our PV to 100Gi in the following line of &lt;code&gt;values.yaml&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;https://github.com/ryanoolala/recipes/blob/8a732de67f309a58a45dec2d29218dfb01383f9b/metrics/prometheus/5min/k8s/values.yaml#L765&lt;/span&gt;
&lt;span class="c1"&gt;## Prometheus server data Persistent Volume size&lt;/span&gt;
    &lt;span class="c1"&gt;##&lt;/span&gt;
    &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;100Gi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create a 100GiB EBS volume attached to our prometheus-server. So how big a disk do we need? There are a few factors involved, so it is generally difficult to calculate the right size for a cluster without first knowing how many applications are hosted, and we also need to account for growth in the number of nodes and applications. &lt;/p&gt;

&lt;p&gt;Prometheus also has a default data retention period of 15 days. This prevents the amount of data from growing indefinitely and helps keep the data size in check, as metrics data older than 15 days is deleted. &lt;/p&gt;
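
&lt;p&gt;If the 15-day window doesn't suit you, the chart exposes it in &lt;code&gt;values.yaml&lt;/code&gt;; the key below follows the stable/prometheus chart layout, so verify it against your chart version:&lt;/p&gt;

```yaml
server:
  # Override the default 15d retention; shorter windows shrink the disk
  # footprint, longer ones keep more history for dashboards.
  retention: "30d"
```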

&lt;p&gt;The Prometheus docs suggest estimating disk needs with this formula, using 1-2 &lt;code&gt;bytes_per_sample&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# https://prometheus.io/docs/prometheus/latest/storage/#operational-aspects
needed_disk_space = retention_time_seconds * ingested_samples_per_second * bytes_per_sample
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is difficult for me to calculate even with an existing setup, so if this is your first time setting up, I imagine it is even more so. As a guide, I'll share my current setup and disk usage so you can gauge how much disk space to provision. &lt;/p&gt;

&lt;p&gt;In my cluster, I am running &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;20 EC2 nodes&lt;/li&gt;
&lt;li&gt;~700 pods&lt;/li&gt;
&lt;li&gt;Default scrape intervals&lt;/li&gt;
&lt;li&gt;15 day retention&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My current disk usage is ~70G. &lt;/p&gt;
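
&lt;p&gt;To sanity-check that figure against the formula above, here is a back-of-the-envelope calculation; the ingestion rate is an assumed, illustrative number (you can read your real one off the &lt;code&gt;prometheus_tsdb_head_samples_appended_total&lt;/code&gt; counter):&lt;/p&gt;

```shell
# needed_disk_space = retention_time_seconds * ingested_samples_per_second * bytes_per_sample
retention_time_seconds=$((15 * 24 * 3600))   # default 15-day retention
ingested_samples_per_second=30000            # assumed rate for this sketch
bytes_per_sample=2                           # upper end of the 1-2 byte guidance
needed_bytes=$((retention_time_seconds * ingested_samples_per_second * bytes_per_sample))
echo "$((needed_bytes / 1024 / 1024 / 1024)) GiB"
```

With these assumed numbers the formula lands around 72 GiB, the same ballpark as the ~70G observed above.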

&lt;p&gt;If the price of 100GiB of storage is acceptable to you (in my region it is about USD 12/month), I think it is a good starting point: you can save the time and effort of calculating storage requirements and just start with this.&lt;/p&gt;

&lt;p&gt;Note that I'm running Prometheus 2.x, which has an improved storage layer over Prometheus 1.x that has been shown to reduce storage usage and thus the amount of disk space needed; see this &lt;a href="https://coreos.com/blog/prometheus-2.0-storage-layer-optimization" rel="noopener noreferrer"&gt;blog post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With this out of the way, let us get our Prometheus application started.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;prometheus stable/prometheus &lt;span class="nt"&gt;-f&lt;/span&gt; https://github.com/ryanoolala/recipes/blob/master/metrics/prometheus/5min/k8s/values.yaml &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt; prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that all the prometheus pods are running&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pod &lt;span class="nt"&gt;-n&lt;/span&gt; prometheus
NAME                                             READY   STATUS    RESTARTS   AGE
prometheus-alertmanager-78b5c64fd5-ch7hb         2/2     Running   0          67m
prometheus-kube-state-metrics-685dccc6d8-h88dv   1/1     Running   0          67m
prometheus-node-exporter-8xw2r                   1/1     Running   0          67m
prometheus-node-exporter-l5pck                   1/1     Running   0          67m
prometheus-pushgateway-567987c9fd-5mbdn          1/1     Running   0          67m
prometheus-server-7cd7d486cb-c24lm               2/2     Running   0          67m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Connecting Grafana to Prometheus
&lt;/h3&gt;

&lt;p&gt;To access the Grafana UI, run &lt;code&gt;kubectl port-forward svc/grafana -n grafana 8080:80&lt;/code&gt;, go to &lt;a href="http://localhost:8080" rel="noopener noreferrer"&gt;http://localhost:8080&lt;/a&gt; and log in as the admin user. If you need the credentials, see the &lt;a href="https://dev.to/mcf/getting-started-in-deploying-grafana-and-prometheus-2ac3#logging-in"&gt;previous post&lt;/a&gt; for instructions.&lt;/p&gt;

&lt;p&gt;Go to the datasource section under the settings wheel&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7vl77deamon6jua7sw7g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7vl77deamon6jua7sw7g.png" alt="Settings - Add data source"&gt;&lt;/a&gt;&lt;br&gt;
and click "Add data source"&lt;/p&gt;

&lt;p&gt;If you've followed my steps, your Prometheus setup will have created a service named &lt;code&gt;prometheus-server&lt;/code&gt; in the prometheus namespace. Since Grafana and Prometheus are hosted in the same cluster, we can simply use the service's internal DNS A record to let Grafana reach Prometheus. &lt;/p&gt;

&lt;p&gt;Under the &lt;code&gt;URL&lt;/code&gt; textbox, enter &lt;code&gt;http://prometheus-server.prometheus.svc.cluster.local:80&lt;/code&gt;. This is the DNS A record of prometheus that will be resolvable for any pod in the cluster, including our Grafana pod. Your setting should look like this.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmvqp9zgbd9vdxd0kl2u5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmvqp9zgbd9vdxd0kl2u5.png" alt="Adding datasource"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click "Save &amp;amp; Test" and Grafana will tell you that the data source is working.&lt;/p&gt;
&lt;h3&gt;
  
  
  Adding Dashboards
&lt;/h3&gt;

&lt;p&gt;Now that Prometheus is set up and has started to collect metrics, we can start visualizing the data. Here are a few dashboards to get you started.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://grafana.com/grafana/dashboards/315" rel="noopener noreferrer"&gt;https://grafana.com/grafana/dashboards/315&lt;/a&gt;&lt;br&gt;
&lt;a href="https://grafana.com/grafana/dashboards/1860" rel="noopener noreferrer"&gt;https://grafana.com/grafana/dashboards/1860&lt;/a&gt;&lt;br&gt;
&lt;a href="https://grafana.com/grafana/dashboards/11530" rel="noopener noreferrer"&gt;https://grafana.com/grafana/dashboards/11530&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Mouse over the "+" icon and select "Import", paste the dashboard ID into the textbox and click "Load"&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fn35y7hmg6q5lhr2sxbsv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fn35y7hmg6q5lhr2sxbsv.png" alt="Importing 1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the Prometheus datasource we added in the previous step from the drop-down&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxvofwcwzc4ueqjqm29cc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxvofwcwzc4ueqjqm29cc.png" alt="Importing 2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will have a dashboard that looks like this&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxzd2shlwmowu9kk8gbxr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxzd2shlwmowu9kk8gbxr.png" alt="Final dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You may have noticed the "N/A" in a few of the dashboard panels. This is a common problem across dashboards, usually caused by incompatible versions of Prometheus/Kubernetes, changes in metric labels, and so on. &lt;br&gt;
We will have to edit the panels and debug the queries to fix them. If there are too many errors, I suggest trying other dashboards until you find one that works and fits your needs.&lt;/p&gt;
&lt;h1&gt;
  
  
  How does it work?
&lt;/h1&gt;

&lt;p&gt;You may have wondered how all these metrics are available, even though you've simply deployed it, without configuring anything other than disk space. To understand the architecture of Prometheus, check out their &lt;a href="https://prometheus.io/docs/introduction/overview/#architecture" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;. I've attached an architecture diagram from the docs here for reference.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftvvsmay3coepmueigmi8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftvvsmay3coepmueigmi8.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will keep it simple by focusing only on how we retrieve node and application (running pod) metrics. Recall that at the start of the article, we listed the various Prometheus components you get by following this guide.&lt;/p&gt;

&lt;p&gt;The prometheus-server pod pulls metrics over HTTP, most typically from the &lt;code&gt;/metrics&lt;/code&gt; endpoint of its various sources. &lt;/p&gt;
&lt;h4&gt;
  
  
  Node Metrics
&lt;/h4&gt;

&lt;p&gt;When we installed Prometheus, a prometheus-node-exporter DaemonSet was created. This ensures that every node in the cluster runs one node-exporter pod, which is responsible for collecting node metrics and exposing them on its &lt;code&gt;/metrics&lt;/code&gt; endpoint.&lt;/p&gt;
&lt;h4&gt;
  
  
  Application Metrics
&lt;/h4&gt;

&lt;p&gt;prometheus-server discovers scrape targets through the Kubernetes API, looking for &lt;a href="https://github.com/helm/charts/tree/master/stable/prometheus#scraping-pod-metrics-via-annotations" rel="noopener noreferrer"&gt;pods with specific annotations&lt;/a&gt;. In application deployment manifests, you will usually see annotations like the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;metadata:
  annotations:
    prometheus.io/path: /metrics
    prometheus.io/port: "4000"
    prometheus.io/scrape: "true"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These are what prometheus-server looks for when deciding which pods to scrape for metrics.&lt;/p&gt;
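
&lt;p&gt;Note that the annotations go on the pod template, not on the Deployment's own metadata, since the &lt;code&gt;role: pod&lt;/code&gt; discovery below reads pod annotations. A sketch of where they sit in a Deployment manifest (the app name and port are placeholders):&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                         # placeholder application name
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"   # opt this pod in to scraping
        prometheus.io/path: /metrics   # endpoint serving the metrics
        prometheus.io/port: "4000"     # port the app listens on
```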

&lt;h4&gt;
  
  
  So how is this configured?
&lt;/h4&gt;

&lt;p&gt;Prometheus loads its scraping configuration from a file called &lt;code&gt;prometheus.yml&lt;/code&gt;, a configmap mounted into the prometheus-server pod. During our installation using the helm chart, this file is configurable inside &lt;code&gt;values.yaml&lt;/code&gt;; see the source code at &lt;a href="https://github.com/ryanoolala/recipes/blob/8a732de67f309a58a45dec2d29218dfb01383f9b/metrics/prometheus/5min/k8s/values.yaml#L1167" rel="noopener noreferrer"&gt;values.yaml#L1167&lt;/a&gt;. The scrape targets are configured as a set of jobs, and you will see several jobs configured by default, each catering to a specific configuration of how and when to scrape.&lt;/p&gt;

&lt;p&gt;An example for our application metrics is found at&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#https://github.com/ryanoolala/recipes/blob/8a732de67f309a58a45dec2d29218dfb01383f9b/metrics/prometheus/5min/k8s/values.yaml#L1444&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;kubernetes-pods'&lt;/span&gt;

        &lt;span class="na"&gt;kubernetes_sd_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pod&lt;/span&gt;

        &lt;span class="na"&gt;relabel_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source_labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;__meta_kubernetes_pod_annotation_prometheus_io_scrape&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
            &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keep&lt;/span&gt;
            &lt;span class="na"&gt;regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source_labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;__meta_kubernetes_pod_annotation_prometheus_io_path&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
            &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
            &lt;span class="na"&gt;target_label&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;__metrics_path__&lt;/span&gt;
            &lt;span class="na"&gt;regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(.+)&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source_labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;__address__&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;__meta_kubernetes_pod_annotation_prometheus_io_port&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
            &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
            &lt;span class="na"&gt;regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;([^:]+)(?::\d+)?;(\d+)&lt;/span&gt;
            &lt;span class="na"&gt;replacement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$1:$2&lt;/span&gt;
            &lt;span class="na"&gt;target_label&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;__address__&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;labelmap&lt;/span&gt;
            &lt;span class="na"&gt;regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;__meta_kubernetes_pod_label_(.+)&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source_labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;__meta_kubernetes_namespace&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
            &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
            &lt;span class="na"&gt;target_label&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes_namespace&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source_labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;__meta_kubernetes_pod_name&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
            &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
            &lt;span class="na"&gt;target_label&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes_pod_name&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is what configures prometheus-server to scrape pods with the annotations we talked about earlier.&lt;/p&gt;

&lt;h1&gt;
  
  
  Wrapping up
&lt;/h1&gt;

&lt;p&gt;With this, you will have a functioning metric collector and dashboards to help kick-start your observability journey in metrics. The Prometheus we've set up in this guide should serve most systems well. &lt;/p&gt;

&lt;p&gt;However, there is one limitation to take note of: this Prometheus is not set up for high availability (HA).&lt;/p&gt;

&lt;p&gt;As explained in the previous post, because this setup uses an Elastic Block Store (EBS) volume, we cannot scale out prometheus-server to provide better service uptime. If the Prometheus pod restarts, possibly due to out-of-memory kills (OOMKilled) or unhealthy nodes, you will lose your metrics for the time being; if you have alerts built on those metrics, this can be an annoying problem, as you are left blind to the current situation.&lt;/p&gt;

&lt;p&gt;The solution to this problem is something I have yet to deploy myself, and when I do, I will write part 3 of this series. If you are interested in having a go at it, check out &lt;a href="https://improbable.io/blog/thanos-prometheus-at-scale" rel="noopener noreferrer"&gt;Thanos&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I hope this has been simple and easy to follow. Even if your Kubernetes cluster is not on AWS, this Prometheus setup is still relevant and can be deployed on any cluster with a &lt;code&gt;StorageClass&lt;/code&gt; configured to automatically provision persistent volumes in your infrastructure.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>kubernetes</category>
      <category>metrics</category>
      <category>devops</category>
    </item>
    <item>
      <title>Getting started with deploying grafana and cloudwatch metric dashboards</title>
      <dc:creator>Ryan</dc:creator>
      <pubDate>Wed, 02 Sep 2020 06:10:46 +0000</pubDate>
      <link>https://dev.to/mcf/getting-started-in-deploying-grafana-and-prometheus-2ac3</link>
      <guid>https://dev.to/mcf/getting-started-in-deploying-grafana-and-prometheus-2ac3</guid>
      <description>&lt;h1&gt;
  
  
  The Pillar of Metrics
&lt;/h1&gt;

&lt;p&gt;Metrics are one of the key components of observability, which becomes increasingly important as we adopt more distributed application architectures: without an aggregation system in place, monitoring the health of our applications becomes difficult to manage. If you are just starting on your observability journey and find paid SaaS services such as Datadog or Splunk a tough barrier to justify, you can easily start with open source solutions that give you a better grasp of how metric collection works, and create dashboards that provide some insight into your current system.&lt;/p&gt;

&lt;p&gt;In this post, we will go through some quick recipes to deploy Grafana onto an AWS Elastic Kubernetes Service (EKS) cluster with minimal effort, using dashboards created by the community. So let us get started.&lt;/p&gt;

&lt;h1&gt;
  
  
  Table of contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Requirements&lt;/li&gt;
&lt;li&gt;
The quick 5-minute build

&lt;ul&gt;
&lt;li&gt;What you get&lt;/li&gt;
&lt;li&gt;Setup&lt;/li&gt;
&lt;li&gt;Cloudwatch IAM Role&lt;/li&gt;
&lt;li&gt;
Grafana

&lt;ul&gt;
&lt;li&gt;Installing Grafana&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Show me the UI!&lt;/li&gt;
&lt;li&gt;Whats next?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
The 10-minute build

&lt;ul&gt;
&lt;li&gt;What you get&lt;/li&gt;
&lt;li&gt;Setup&lt;/li&gt;
&lt;li&gt;Postgres RDS&lt;/li&gt;
&lt;li&gt;Grafana&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Logging in&lt;/li&gt;
&lt;li&gt;Wrapping up&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Requirements
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes cluster, preferably AWS EKS&lt;/li&gt;
&lt;li&gt;Helm&lt;/li&gt;
&lt;li&gt;Terraform &lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  The quick 5-minute build
&lt;/h1&gt;

&lt;h2&gt;
  
  
  What you get
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Grafana instance&lt;/li&gt;
&lt;li&gt;Cloudwatch metrics&lt;/li&gt;
&lt;li&gt;Cloudwatch dashboards to monitor AWS services (EBS, EC2, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vJ70wriM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/ryanoolala"&gt;
        ryanoolala
      &lt;/a&gt; / &lt;a href="https://github.com/ryanoolala/recipes"&gt;
        recipes
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A collection of recipes for setting up observability toolings
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
recipes&lt;/h1&gt;
&lt;p&gt;A collection of recipes for setting up resources in AWS and EKS&lt;/p&gt;
&lt;p&gt;This is my attempt of trying to introduce observability tools to people and providing a recipe for them to add them into their infrastructure as easily as possible, as such you may find that most of these setups may be too simple for your production needs(e.g HA consideration, maintenance processes), and if I am able to think of ways to make these better and able to simplify into recipes, I will update this repository, as a recipe guide for myself in my future setups.&lt;/p&gt;
&lt;h2&gt;
Requirements&lt;/h2&gt;
&lt;p&gt;This repository assumes you already have the following tools installed and required IAM permissions(preferable an admin) to use with terraform&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;terraform &amp;gt;= v0.12.29&lt;/li&gt;
&lt;li&gt;terragrunt &amp;gt;= v0.23.6&lt;/li&gt;
&lt;li&gt;kubectl &amp;gt;= 1.18&lt;/li&gt;
&lt;li&gt;helm &amp;gt;= 3.3.0&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;This is not a free tier compatible setup and any costs incurred will be bared by you and you…&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/ryanoolala/recipes"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;br&gt;
Clone the recipe repository from &lt;a href="https://github.com/ryanoolala/recipes"&gt;github.com/ryanoolala/recipes&lt;/a&gt;; I will reference this setup throughout the post.
&lt;h3&gt;
  
  
  Cloudwatch IAM Role
&lt;/h3&gt;

&lt;p&gt;First, we create an IAM role with permissions to read metrics from Cloudwatch. To speed things up, we'll provision the role with terraform. In my case, I'm using &lt;a href="https://github.com/ryanoolala/recipes/blob/master/metrics/grafana/5min/terraform/cloudwatch-role/terragrunt.hcl"&gt;terragrunt&lt;/a&gt;, but you can easily copy the inputs into terraform module variables instead.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"cloudwatch-iam"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"git::https://gitlab.com/govtechsingapore/gdsace/terraform-modules/grafana-cloudwatch-iam?ref=1.0.0"&lt;/span&gt;
  &lt;span class="nx"&gt;allow_role_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;arn&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;aws&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;iam&lt;/span&gt;&lt;span class="err"&gt;::&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt;&lt;span class="nx"&gt;ACCOUNT_ID&lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;role&lt;/span&gt;&lt;span class="err"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;ryan20200826021839068100000001&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ryan"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;a href="https://gitlab.com/govtechsingapore/gdsace/terraform-modules/grafana-cloudwatch-iam/"&gt;grafana cloudwatch iam&lt;/a&gt; module takes in an EKS role ARN because we want the Grafana application running on the node to be able to assume this Cloudwatch role and be authorized to pull metrics from AWS APIs. The module produces a terraform output of&lt;br&gt;
&lt;code&gt;grafana_role_arn = arn:aws:iam::{{ACCOUNT_ID}}:role/grafana-cloudwatch-role-ryan&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Grafana
&lt;/h3&gt;

&lt;p&gt;Here is where it gets interesting: we will deploy Grafana using Helm 3. Make sure your &lt;code&gt;kubectl&lt;/code&gt; context is set to the cluster you want to host this service on, and that the cluster belongs to the same AWS account in which we just created the IAM role.&lt;/p&gt;

&lt;p&gt;We create a &lt;code&gt;datasource.yaml&lt;/code&gt; file with the following values; be sure to replace &lt;code&gt;assumeRoleArn&lt;/code&gt; with your output from above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# file://datasource.yaml&lt;/span&gt;
&lt;span class="na"&gt;datasources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="s"&gt;datasources.yaml&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;datasources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cloudwatch&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cloudwatch&lt;/span&gt;
        &lt;span class="na"&gt;isDefault&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;jsonData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;authType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn&lt;/span&gt;
          &lt;span class="na"&gt;assumeRoleArn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;arn:aws:iam::{{ACCOUNT_ID}}:role/grafana-cloudwatch-role-ryan"&lt;/span&gt;
          &lt;span class="na"&gt;defaultRegion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ap-southeast-1"&lt;/span&gt;
          &lt;span class="na"&gt;customMetricsNamespaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="c1"&gt;# &amp;lt;bool&amp;gt; allow users to edit datasources from the UI.&lt;/span&gt;
    &lt;span class="na"&gt;editable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This allows Grafana to start with a Cloudwatch datasource that uses &lt;code&gt;assumeRoleArn&lt;/code&gt; to retrieve Cloudwatch metrics.&lt;/p&gt;

&lt;h4&gt;
  
  
  Installing Grafana
&lt;/h4&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;grafana stable/grafana &lt;span class="nt"&gt;-f&lt;/span&gt; https://github.com/ryanoolala/recipes/blob/master/metrics/grafana/5min/k8s/grafana/values.yaml &lt;span class="nt"&gt;-f&lt;/span&gt; datasource.yaml &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt; grafana
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;or if you have cloned the repository, place &lt;code&gt;datasource.yaml&lt;/code&gt; into &lt;code&gt;./metrics/grafana/5min/k8s/grafana&lt;/code&gt; and run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ./metrics/grafana/5min/k8s/grafana &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; make install.datasource
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In a few moments, you will have Grafana running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pod &lt;span class="nt"&gt;-n&lt;/span&gt; grafana
NAME                          READY   STATUS     RESTARTS   AGE
grafana-5c58b66f46-9dt2h      2/2     Running    0          84s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To access the dashboard, run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl port-forward svc/grafana &lt;span class="nt"&gt;-n&lt;/span&gt; grafana 8080:80
Forwarding from 127.0.0.1:8080 -&amp;gt; 3000
Forwarding from &lt;span class="o"&gt;[&lt;/span&gt;::1]:8080 -&amp;gt; 3000
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Show me the UI!
&lt;/h3&gt;

&lt;p&gt;Navigate to &lt;a href="http://localhost:8080"&gt;http://localhost:8080&lt;/a&gt; and you will see your Grafana UI&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9SMPv1l7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ou5oiax3ikxr6z1b8wf1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9SMPv1l7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ou5oiax3ikxr6z1b8wf1.png" alt="Grafana SQS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are wondering where these dashboards come from: I found them on &lt;a href="https://grafana.com/grafana/dashboards?dataSource=cloudwatch"&gt;grafana's dashboard site&lt;/a&gt;, picked a few, and loaded them by configuring &lt;code&gt;values.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# https://github.com/ryanoolala/recipes/blob/master/metrics/grafana/5min/k8s/grafana/values.yaml#L364&lt;/span&gt;

&lt;span class="na"&gt;dashboards&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;aws-ec2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://grafana.com/api/dashboards/617/revisions/4/download&lt;/span&gt;
    &lt;span class="na"&gt;aws-ebs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://grafana.com/api/dashboards/11268/revisions/2/download&lt;/span&gt;
    &lt;span class="na"&gt;aws-cloudwatch-logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://grafana.com/api/dashboards/11266/revisions/1/download&lt;/span&gt;
    &lt;span class="na"&gt;aws-rds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://grafana.com/api/dashboards/11264/revisions/2/download&lt;/span&gt;
    &lt;span class="na"&gt;aws-api-gateway&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://grafana.com/api/dashboards/1516/revisions/10/download&lt;/span&gt;
    &lt;span class="na"&gt;aws-route-53&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://grafana.com/api/dashboards/11154/revisions/4/download&lt;/span&gt;
    &lt;span class="na"&gt;aws-ses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://grafana.com/api/dashboards/1519/revisions/4/download&lt;/span&gt;
    &lt;span class="na"&gt;aws-sqs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://grafana.com/api/dashboards/584/revisions/5/download&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  What's next?
&lt;/h2&gt;

&lt;p&gt;There are some limitations to what we have just deployed: while the UI lets us edit and even add new dashboards, our changes are not persistent, since we did not provide any persistent store for this setup. Let's make it better!&lt;/p&gt;

&lt;h1&gt;
  
  
  The 10-minute build
&lt;/h1&gt;

&lt;p&gt;There are several ways to save our changes. The easiest is probably attaching a block store (EBS) volume to the instance and keeping settings on disk. However, since EBS is not a &lt;code&gt;ReadWriteMany&lt;/code&gt; storage driver, we cannot scale out our Grafana instance across availability zones and different EKS nodes. The next easiest solution, in my opinion, is to use the AWS Relational Database Service (RDS), which is fully managed with automatic backups and High Availability (HA), as our persistence layer for Grafana.&lt;/p&gt;
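&lt;p&gt;For context, here is what the EBS limitation looks like in Kubernetes terms: an EBS-backed PersistentVolumeClaim can only request &lt;code&gt;ReadWriteOnce&lt;/code&gt; access, so only one node can mount it at a time. This is a minimal sketch; the claim name and storage class are illustrative, not part of the recipe.&lt;/p&gt;

```yaml
# Illustrative PVC for Grafana backed by EBS (gp2).
# EBS volumes only support ReadWriteOnce, so this claim
# cannot be shared across nodes or availability zones.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-storage
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 10Gi
```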

&lt;h2&gt;
  
  
  What you get
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;HA Grafana with persistence&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Postgres RDS
&lt;/h3&gt;

&lt;p&gt;We will use Postgres in this example, although Grafana supports MySQL and sqlite3 as well. To avoid digressing, I will omit the setup instructions for the database; if you would like to know how I used terraform to deploy the instance, you can read more in my &lt;a href="https://github.com/ryanoolala/recipes/tree/master/metrics/grafana#10min-setup"&gt;README&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you are not familiar with terraform, this part might get slightly complicated, so I suggest creating the Postgres instance through the AWS console instead, which is easier and faster and keeps this within the 10-minute effort.&lt;/p&gt;

&lt;h3&gt;
  
  
  Grafana
&lt;/h3&gt;

&lt;p&gt;Now that we have a Postgres database set up, we will create a Kubernetes secret object to hold the credentials needed to connect to it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ./metrics/grafana/10min/k8s/grafana
&lt;span class="nv"&gt;$ &lt;/span&gt;make secret
Removing old grafana-db-connection...
secret &lt;span class="s2"&gt;"grafana-db-connection"&lt;/span&gt; deleted
Postgres Host?: 
mydbhost.com
Postgres Username?: 
myuser
Postgres Password? &lt;span class="o"&gt;(&lt;/span&gt;keys will not show up &lt;span class="k"&gt;in &lt;/span&gt;the terminal&lt;span class="o"&gt;)&lt;/span&gt;: 
Attempting to create secret &lt;span class="s1"&gt;'grafana-db-connection'&lt;/span&gt;...
secret/grafana-db-connection created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This secret, &lt;code&gt;grafana-db-connection&lt;/code&gt;, is referenced in our &lt;code&gt;values.yaml&lt;/code&gt;, where we also set the environment variable &lt;code&gt;GF_DATABASE_TYPE&lt;/code&gt; to &lt;code&gt;postgres&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# https://github.com/ryanoolala/recipes/blob/cf7839e9e919735c72fee77450d891f8ee13ef17/metrics/grafana/10min/k8s/grafana/values.yaml#L268&lt;/span&gt;
&lt;span class="c1"&gt;## Extra environment variables that will be pass onto deployment pods&lt;/span&gt;
&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;GF_DATABASE_TYPE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;postgres"&lt;/span&gt;

&lt;span class="c1"&gt;# https://github.com/ryanoolala/recipes/blob/cf7839e9e919735c72fee77450d891f8ee13ef17/metrics/grafana/10min/k8s/grafana/values.yaml#L282&lt;/span&gt;
&lt;span class="na"&gt;envFromSecret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;grafana-db-connection"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;With these changes done, we can upgrade our currently deployed Grafana using &lt;code&gt;helm upgrade grafana stable/grafana -f values.yaml --namespace grafana&lt;/code&gt;, or, if you are starting from a fresh setup,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;grafana stable/grafana &lt;span class="nt"&gt;-f&lt;/span&gt; https://github.com/ryanoolala/recipes/blob/master/metrics/grafana/10min/k8s/grafana/values.yaml &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt; grafana
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;or if you have cloned the repository, run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ./metrics/grafana/10min/k8s/grafana &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; make &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This new Grafana lets you make changes to the system, add datasources and dashboards, and save those changes in the database, so you don't have to worry about your instance restarting and having to start all over again.&lt;/p&gt;

&lt;h1&gt;
  
  
  Logging in
&lt;/h1&gt;

&lt;p&gt;To make edits, you have to log in as the admin user. The default username and password can be set during installation by modifying the following in &lt;code&gt;values.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Administrator credentials when not using an existing secret (see below)&lt;/span&gt;
&lt;span class="na"&gt;adminUser&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admin&lt;/span&gt;
&lt;span class="na"&gt;adminPassword&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;strongpassword&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After Grafana has started, we can change the password in the UI instead, and the new password will be stored in the database for future login sessions.&lt;/p&gt;

&lt;h1&gt;
  
  
  Wrapping up
&lt;/h1&gt;

&lt;p&gt;Hopefully, this gave you an idea of how you can use the Grafana helm chart and configure it to display Cloudwatch metric dashboards.&lt;/p&gt;

&lt;p&gt;In the next part, I will cover deploying Prometheus, which gives us more insight into the Kubernetes cluster, including CPU/RAM usage of the EC2 nodes as well as the pods. This information will help us better understand our deployed applications.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>aws</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Gitlab MR Bot: Getting people to do code reviews</title>
      <dc:creator>Kai Hong</dc:creator>
      <pubDate>Mon, 03 Aug 2020 05:09:07 +0000</pubDate>
      <link>https://dev.to/mcf/gitlab-mr-bot-getting-people-to-do-code-reviews-29hi</link>
      <guid>https://dev.to/mcf/gitlab-mr-bot-getting-people-to-do-code-reviews-29hi</guid>
      <description>&lt;p&gt;Hello World, recently at MyCareersFuture Team, we have a growing stack of merge requests (MR) that are missing the code review attention it needs to go into main branch. This slows down our delivery process as our working agreement is that every merge requires &lt;em&gt;at least two other developers&lt;/em&gt; to approve first.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;It's a long weekend, let's do something about it.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Goal&lt;/strong&gt;: Get more eyes on the MRs ready for review.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  MR reviews in MCF
&lt;/h3&gt;

&lt;p&gt;Yes, we &lt;em&gt;really&lt;/em&gt; do review everything that goes in.&lt;/p&gt;

&lt;p&gt;High-quality code is important to us, and code reviews are one of the ways we uphold that standard. By ensuring at least two developers review each MR, we catch potential bugs and inefficiencies early in the cycle.&lt;/p&gt;

&lt;p&gt;This also facilitates knowledge transfer between devs, which translates into better code being written subsequently. All parties, including the stakeholders, understand the additional overhead of this process and have accepted it, as we are aligned on the goal of delivering a quality product.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why aren't more people reviewing?
&lt;/h3&gt;

&lt;p&gt;Slack is our current communication channel. When an MR is ready for review, it is labelled accordingly and posted to the channel for anyone to pick up.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sometimes all the devs are just busy and miss the messages&lt;/li&gt;
&lt;li&gt;Sometimes the request gets buried among other conversations&lt;/li&gt;
&lt;li&gt;Other factors, like unfamiliarity with the various codebases&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Temporary solution
&lt;/h4&gt;

&lt;p&gt;For the past two sprints or so, our kind scrum master has been manually consolidating the various MRs and pinging the channel for people to take action on them.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;And it works!&lt;/em&gt; Devs have been noticeably more active in MR reviews ever since she started doing that.&lt;/p&gt;

&lt;p&gt;So... let's automate that! 😉&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing the MR Bot
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Objective:&lt;/strong&gt; &lt;code&gt;consolidate&lt;/code&gt; a list of &lt;code&gt;opened&lt;/code&gt; merge requests that have a label of &lt;code&gt;review me&lt;/code&gt; across &lt;code&gt;multiple repositories&lt;/code&gt; &lt;code&gt;daily&lt;/code&gt; and &lt;code&gt;notify on slack&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We are using a self-hosted version of Gitlab, so we rely on labels for approval. For example, if I want to review a specific MR, I will add my label &lt;code&gt;Review by Kai Hong&lt;/code&gt;, and subsequently &lt;code&gt;Approved by Kai Hong&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The bot only runs once a day to keep it from being &lt;em&gt;spammy&lt;/em&gt;, because we wouldn't want people muting the channel, would we? Since it runs daily, it sounds a lot more like a cronjob than a long-running service. So let's model it as a batch process and build it!&lt;/p&gt;
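&lt;p&gt;Deployment is out of scope for this post, but as a sketch of the "cronjob" framing, the bot could be scheduled with a Kubernetes CronJob. All names, the image, and the schedule here are hypothetical.&lt;/p&gt;

```yaml
# Hypothetical CronJob that runs the MR bot once every weekday morning.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: gitlab-mr-bot
spec:
  schedule: "0 9 * * 1-5"   # 09:00, Monday to Friday
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: mr-bot
              image: gitlab-mr-bot:latest   # hypothetical image
          restartPolicy: OnFailure
```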

&lt;p&gt;A typical batch process chains a bunch of processors, where the output of one processor is the input of the next.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;Input -&amp;gt; Process -&amp;gt; Output&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's look at an overview of the entire process before going into details. This "bot" is built with plain NodeJS, without any frameworks, because it only deals with network requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fetchTasks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;GITLAB_PROJECT_ID_LIST&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;fetchMergeRequests&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fetchTasks&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="c1"&gt;// Part 1&lt;/span&gt;
  &lt;span class="c1"&gt;// Input: list of fetch tasks&lt;/span&gt;
  &lt;span class="c1"&gt;// Process: resolve all requests&lt;/span&gt;
  &lt;span class="c1"&gt;// Output: list of json responses&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}))&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="c1"&gt;// Part 2&lt;/span&gt;
  &lt;span class="c1"&gt;// Input: list of json responses&lt;/span&gt;
  &lt;span class="c1"&gt;// Process: extract/transform relevant data&lt;/span&gt;
  &lt;span class="c1"&gt;// Output: list of processed MR&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;listOfMrList&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;mergeRequests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mr&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;processMr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mr&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="c1"&gt;// Part 3&lt;/span&gt;
  &lt;span class="c1"&gt;// Input: list of processed MR&lt;/span&gt;
  &lt;span class="c1"&gt;// Process: send to slack&lt;/span&gt;
  &lt;span class="c1"&gt;// Output: none&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;processedMrList&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;sendToSlack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;processedMrList&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;finally&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`stub end`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`stub error`&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Part 1: Fetch data
&lt;/h3&gt;

&lt;p&gt;This is the magic Gitlab API URL that lets me fetch all the relevant merge requests from a repo:&lt;br&gt;
&lt;code&gt;https://&amp;lt;gitlab_url&amp;gt;/api/v4/projects/&amp;lt;id&amp;gt;/merge_requests?state=opened&amp;amp;labels=Review+Me&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In order to fetch from a list of repos, I had to brush up on my async/await/promises codefu, as it's not every day that I try to synchronise a bunch of asynchronous calls in a batch process. It was all solved with &lt;code&gt;Promise.all([])&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kUFnf8L1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5imafchkjp8qynhjtmsl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kUFnf8L1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5imafchkjp8qynhjtmsl.png" alt="Promise all diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It consolidates all of the promises into one promise, so you only have to handle the output of that one, which makes the batch process a lot simpler.&lt;/p&gt;
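&lt;p&gt;For completeness, here is a minimal sketch of what &lt;code&gt;fetchMergeRequests&lt;/code&gt; could look like. The helper name comes from the snippet above; &lt;code&gt;GITLAB_URL&lt;/code&gt; and &lt;code&gt;GITLAB_TOKEN&lt;/code&gt; are assumed configuration values, and the token header follows Gitlab's personal access token convention:&lt;/p&gt;

```javascript
// Sketch of fetchMergeRequests; GITLAB_URL and GITLAB_TOKEN are
// assumed to come from the environment.
const GITLAB_URL = process.env.GITLAB_URL || "https://gitlab.example.com";
const GITLAB_TOKEN = process.env.GITLAB_TOKEN || "";

// Build the API URL for open MRs labelled "Review Me" on one project.
const mergeRequestsUrl = (projectId) => {
  const params = new URLSearchParams({ state: "opened", labels: "Review Me" });
  return `${GITLAB_URL}/api/v4/projects/${projectId}/merge_requests?${params}`;
};

// One fetch task per project; Promise.all resolves them together.
const fetchMergeRequests = (projectId) =>
  fetch(mergeRequestsUrl(projectId), {
    headers: { "PRIVATE-TOKEN": GITLAB_TOKEN },
  });
```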
&lt;h3&gt;
  
  
  Part 2: Transform data
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;processMr()&lt;/code&gt; is a very simple function that extracts and transforms the data into the relevant fields.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;processMr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;updatedOn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;updated_at&lt;/span&gt;&lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="nx"&gt;toDateString&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;reviewers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;mr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Review Me&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;mergeRequestName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;mr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;mergeRequestUrl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;mr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;web_url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;When designing batch processes, it's important to keep your processors decoupled so that they're easy to swap out when required. Imagine the power you could wield with a collection of processors that you can combine and tear apart as you wish.&lt;/p&gt;
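&lt;p&gt;That composition can be sketched with a generic &lt;code&gt;pipe&lt;/code&gt; helper. The helper and the processor names below are illustrative, not from our actual codebase:&lt;/p&gt;

```javascript
// Compose processors left to right; each one takes plain data and returns
// new plain data, so processors can be combined and torn apart freely.
const pipe = (...fns) => (input) => fns.reduce((acc, fn) => fn(acc), input);

// Two illustrative processors operating on a GitLab MR object.
const addTitle = (mr) => ({ ...mr, mergeRequestName: mr.title });
const addUrl = (mr) => ({ ...mr, mergeRequestUrl: mr.web_url });

const processMr = pipe(addTitle, addUrl);

const props = processMr({ title: 'Fix login', web_url: 'https://gitlab.example.com/mr/1' });
// props now carries both mergeRequestName and mergeRequestUrl
```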
&lt;h3&gt;
  
  
  Part 3: Send notification
&lt;/h3&gt;

&lt;p&gt;Slack uses webhooks for posting to channels. Given the processed data, it's simple to build a POST request according to &lt;a href="https://api.slack.com/block-kit"&gt;Slack's block kit design&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;SLACK_WEBHOOK_URL&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;slackPostOptions&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
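&lt;p&gt;The spread-in &lt;code&gt;slackPostOptions&lt;/code&gt; just carries the request boilerplate. A likely shape (an assumption, since the exact object isn't shown here):&lt;/p&gt;

```javascript
// Boilerplate options for a Slack incoming-webhook POST,
// spread into the fetch() call alongside the JSON body.
const slackPostOptions = {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
};
```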


&lt;p&gt;However, the troublesome part is building the payload itself. I won't go into details since the shape is dictated by Slack's documentation, but it's a fun exercise in composing JSON objects.&lt;/p&gt;
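&lt;p&gt;As a rough sketch, each processed MR can be mapped to one &lt;code&gt;section&lt;/code&gt; block and the blocks composed into the payload. The &lt;code&gt;toSection&lt;/code&gt; helper is hypothetical; the block shape follows Slack's block kit:&lt;/p&gt;

```javascript
// Map one MR's props to a Slack "section" block with mrkdwn text.
const toSection = ({ mergeRequestName, mergeRequestUrl }) => ({
  type: 'section',
  text: { type: 'mrkdwn', text: `*${mergeRequestName}* - ${mergeRequestUrl}` },
});

// The full webhook payload is just the composed blocks.
const buildMessage = (mrProps) => ({ blocks: mrProps.map(toSection) });
```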

&lt;p&gt;Example payload&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;blocks:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;type:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'section'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;text:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;type:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'section'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;text:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;type:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'section'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;text:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Deploying it
&lt;/h2&gt;

&lt;p&gt;Nearly all of our services are hosted on AWS EKS (Kubernetes), so we have to dockerize this service too, which is as simple as this:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:lts-alpine as base&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;apk update &lt;span class="nt"&gt;--no-cache&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apk upgrade &lt;span class="nt"&gt;--no-cache&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /usr/src/app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; TZ="UTC"&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;

&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["node", "app.js"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;This goes in as a CronJob on our EKS cluster. It's set to run every weekday at 1pm Singapore time (the &lt;code&gt;0 5 * * 1-5&lt;/code&gt; schedule below is in UTC).&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;batch/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CronJob&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# ...&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0 5 * * 1-5&lt;/span&gt;
  &lt;span class="na"&gt;concurrencyPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Forbid&lt;/span&gt;
  &lt;span class="na"&gt;failedJobsHistoryLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;successfulJobsHistoryLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;jobTemplate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# ...&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;backoffLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
      &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="c1"&gt;# ...&lt;/span&gt;
        &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Never&lt;/span&gt;
          &lt;span class="na"&gt;imagePullSecrets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# ...&lt;/span&gt;
          &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gitlab-slackbot&lt;/span&gt;
              &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mycfsg/gitlab-slackbot:latest&lt;/span&gt;
              &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Always&lt;/span&gt;
              &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GITLAB_URL&lt;/span&gt;
                  &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;URL&amp;gt;&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GITLAB_PROJECT_ID_LIST&lt;/span&gt;
                  &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[&amp;lt;ID_LIST&amp;gt;]"&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GITLAB_MR_OPTIONS&lt;/span&gt;
                  &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;state=opened&amp;amp;labels=Review+Me&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GITLAB_MR_REVIEWERS_NUM&lt;/span&gt;
                  &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2"&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SLACK_WEBHOOK_URL&lt;/span&gt;
                  &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;WEBHOOK_URL&amp;gt;&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GITLAB_TOKEN&lt;/span&gt;
                  &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                    &lt;span class="na"&gt;secretKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gitlab-slackbot&lt;/span&gt;
                      &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GITLAB_TOKEN&lt;/span&gt;
              &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;150m&lt;/span&gt;
                  &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;150Mi&lt;/span&gt;
                &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;100m&lt;/span&gt;
                  &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;100Mi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  What does it look like?
&lt;/h2&gt;

&lt;p&gt;There are two designs: one that includes more details, and a compact one that gets straight to the point. It's just a POC/MVP at this point, and it will be refined further based on feedback from the team.&lt;/p&gt;
&lt;h4&gt;
  
  
  Normal design
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8U6J5uXy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2l0jqi9a7gjlcpnj1uef.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8U6J5uXy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2l0jqi9a7gjlcpnj1uef.png" alt="Normal design"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Compact design
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V2tIbhpS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nd00smmddj1ih28xdinf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V2tIbhpS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nd00smmddj1ih28xdinf.png" alt="Compact design"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Moving on
&lt;/h2&gt;

&lt;p&gt;Remember what I mentioned earlier about how the MR process is just one of the methods we use to maintain high-quality codebases?&lt;/p&gt;

&lt;p&gt;So what happens after the MR is approved? For that, we have something called the "chicken" process, which helps us keep our pipeline stable.&lt;/p&gt;


&lt;div class="ltag__link"&gt;
  &lt;a href="/szenius" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Vum6rvc1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--r9CNk-ck--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/366970/a5101a4f-5a64-42cc-90d6-eec60e177d98.jpg" alt="szenius image"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/mcf/the-chicken-process-how-we-tackled-a-lack-of-quality-ownership-103b" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;The "Chicken" Process: How we tackled a lack of Quality Ownership&lt;/h2&gt;
      &lt;h3&gt;Sze Ying 🌻 ・ May 16 ・ 6 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#productivity&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#agile&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#quality&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#codequality&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;



&lt;p&gt;Extra notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The number of open MRs keeps growing because our team is growing in size.&lt;/li&gt;
&lt;li&gt;This is really more of a glorified reminder than a bot.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>bot</category>
      <category>gitlab</category>
      <category>codequality</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Automating Sprint Review Slides with Pandoc</title>
      <dc:creator>Dickson Tan</dc:creator>
      <pubDate>Fri, 12 Jun 2020 13:51:45 +0000</pubDate>
      <link>https://dev.to/mcf/automating-sprint-review-slides-with-pandoc-383b</link>
      <guid>https://dev.to/mcf/automating-sprint-review-slides-with-pandoc-383b</guid>
      <description>&lt;p&gt;In the MyCareersFuture team, we conduct sprint reviews at the end of the sprint to showcase what the team has achieved. We use PowerPoint slides to highlight stories that have been done and those which are still being worked on. Preparing this by hand was time-consuming. Here's how we automated the process.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Getting Story Data&lt;/li&gt;
&lt;li&gt;Generating PowerPoint Slides&lt;/li&gt;
&lt;li&gt;Customizing Appearance of Generated Slides&lt;/li&gt;
&lt;li&gt;Limitations&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Story Data
&lt;/h2&gt;

&lt;p&gt;We use Pivotal Tracker for project management. The Pivotal Tracker API returns stories in the current iteration or sprint via the &lt;a href="https://www.pivotaltracker.com/help/api/rest/v5#projects_project_id_iterations_get"&gt;&lt;code&gt;/projects/{project_id}/iterations?scope=current_backlog&amp;amp;limit=1&lt;/code&gt;&lt;/a&gt; API call. The response looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"finish"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2020-05-19T00:00:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"kind"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"iteration"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"start"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2020-05-04T00:00:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"stories"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"kind"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"story"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;563&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"created_at"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2020-05-19T12:01:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"updated_at"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2020-05-19T12:01:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"estimate"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"story_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"feature"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Complete construction of the Expeditionary Battle Planetoid"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Palpatine was impressed with the PoC, make this one bigger"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"current_state"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"accepted"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;lots&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;more&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;data&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;this&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;story&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;data&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;other&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;stories&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;that&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;sprint&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
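&lt;p&gt;Fetching that response comes down to one authenticated GET. A sketch, assuming the &lt;code&gt;X-TrackerToken&lt;/code&gt; header from Pivotal's API docs; the project id and token are placeholders:&lt;/p&gt;

```javascript
// Build the iterations URL for a project; scope/limit match the call above.
const iterationsUrl = (projectId) => {
  const query = new URLSearchParams({ scope: 'current_backlog', limit: 1 });
  return `https://www.pivotaltracker.com/services/v5/projects/${projectId}/iterations?${query}`;
};

// Hedged sketch of the request itself; auth uses the X-TrackerToken header.
const fetchCurrentIteration = (projectId, token) =>
  fetch(iterationsUrl(projectId), { headers: { 'X-TrackerToken': token } })
    .then((res) => res.json());
```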



&lt;h2&gt;
  
  
  Generating PowerPoint Slides
&lt;/h2&gt;

&lt;p&gt;Programmatically generating a PowerPoint file isn't for the faint of heart, even in a language that has a library for the task. This is where Pandoc comes to the rescue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pandoc.org/"&gt;Pandoc&lt;/a&gt; is an open source tool that converts between almost any markup format that you can think of, including from markdown to PowerPoint. By using the Pivotal API, I wrote a tool that outputs markdown source which Pandoc then converts into the actual slides. Here's what the markdown might look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Sprint&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;63"&lt;/span&gt;
&lt;span class="na"&gt;date&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;18 Sep - 02 Oct&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;

&lt;span class="gu"&gt;## Sprint backlog - Done!&lt;/span&gt;

&lt;span class="ge"&gt;*Jobseeker*&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Create a service/API to serve constants to all other modules (5 points)
&lt;span class="p"&gt;*&lt;/span&gt; change redux code to Typescript for ui-jobseeker
&lt;span class="p"&gt;*&lt;/span&gt; Populate the content placeholder on the category landing page with the html content (batch 2)
&lt;span class="p"&gt;*&lt;/span&gt; Implement page title and meta description for category landing page (batch 2)
&lt;span class="p"&gt;*&lt;/span&gt; Revise content piece, page title and meta description for Risk Management category landing page 

&lt;span class="ge"&gt;*Employer*&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Add "Employers Toolkit" link in Employer navbar

&lt;span class="gu"&gt;## Jobseeker - In Progress Features&lt;/span&gt;

&lt;span class="ge"&gt;*Delivered*&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Implement VDP link for MCF jobseeker and make changes to footer

&lt;span class="ge"&gt;*Finished*&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Implement GA tracking on the recommended jobs on the application confirmation page
&lt;span class="p"&gt;*&lt;/span&gt; Switch to MCF figures for application count of all jobs
&lt;span class="p"&gt;*&lt;/span&gt; Include "Search term" in recommender API
&lt;span class="p"&gt;*&lt;/span&gt; Implement recommended jobs on JD page (getting the jobs from DSAID API)
&lt;span class="p"&gt;*&lt;/span&gt; Implement data science tracking for view JD page
&lt;span class="p"&gt;*&lt;/span&gt; Set up company profile page
&lt;span class="p"&gt;*&lt;/span&gt; Implement data science tracking for the jobs returned on each page of the search page

&lt;span class="ge"&gt;*Started*&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Change copy for government scheme section on JD page and search results page
&lt;span class="p"&gt;*&lt;/span&gt; Implement recommended jobs on JD page (frontend)
&lt;span class="p"&gt;*&lt;/span&gt; Company profile page - jobs section
&lt;span class="p"&gt;*&lt;/span&gt; Company profile page - About the company section
&lt;span class="p"&gt;*&lt;/span&gt; Company profile page meta description
&lt;span class="p"&gt;*&lt;/span&gt; Take contact number and email address from MCF instead of MySF 

&lt;span class="gu"&gt;## Jobseeker - In Progress Chores&lt;/span&gt;

&lt;span class="ge"&gt;*Started*&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Improve stability of job application tests
&lt;span class="p"&gt;*&lt;/span&gt; Jobseeker: Cypress Tests for GA calls
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
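&lt;p&gt;The grouping into &lt;em&gt;Delivered&lt;/em&gt;, &lt;em&gt;Finished&lt;/em&gt; and &lt;em&gt;Started&lt;/em&gt; sections above can be sketched like this (illustrative code, not the actual tool):&lt;/p&gt;

```javascript
// Group stories by current_state, then emit one markdown section per state,
// mirroring the *Delivered* / *Finished* / *Started* blocks above.
const byState = (stories) =>
  stories.reduce((groups, story) => {
    (groups[story.current_state] = groups[story.current_state] || []).push(story);
    return groups;
  }, {});

const toMarkdown = (stories) =>
  Object.entries(byState(stories))
    .map(([state, group]) =>
      [`*${state}*`, '', ...group.map((s) => `* ${s.name}`)].join('\n'))
    .join('\n\n');
```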



&lt;p&gt;Saving that to a file named &lt;code&gt;presentation.md&lt;/code&gt;, the following Pandoc invocation generates &lt;code&gt;presentation.pptx&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;pandoc presentation.md &lt;span class="nt"&gt;-o&lt;/span&gt; presentation.pptx &lt;span class="nt"&gt;--self-contained&lt;/span&gt; &lt;span class="nt"&gt;--reference-doc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;template.pptx &lt;span class="nt"&gt;--slide-level&lt;/span&gt; 2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When there is too much content to fit into a slide or if ad hoc changes need to be made, the generated markdown and PowerPoint file can be edited afterwards.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7VjKm3NB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://neurrone.com/images/sprint%252063%2520title%2520slide.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7VjKm3NB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://neurrone.com/images/sprint%252063%2520title%2520slide.png" alt="Sprint 63 Title Slide with a main title of Sprint 63 and subtitle 18 Sep - 02 Oct"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ItM1tIfv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://neurrone.com/images/sprint%252063%2520done%2520stories.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ItM1tIfv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://neurrone.com/images/sprint%252063%2520done%2520stories.png" alt="Sprint 63 slide for done stories. Refer to the markdown source above for its contents"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R0O8V-Ts--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://neurrone.com/images/sprint%252063%2520jobseeker%2520in%2520progress%2520features.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R0O8V-Ts--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://neurrone.com/images/sprint%252063%2520jobseeker%2520in%2520progress%2520features.png" alt="Sprint 63 slide for in-progress Jobseeker feature stories. Refer to the markdown source above for its contents"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The references to "Jobseeker" and "Employer" refer to the two squads in MCF that implement the jobseeker-facing and employer-facing portions of the product respectively.&lt;/p&gt;

&lt;p&gt;Here's how markdown's semantics naturally map to slides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lines 2 and 3 in the source above specify metadata about the document, title and date in this case. This is part of Pandoc's support for front matter and causes a title slide to be generated with this information.&lt;/li&gt;
&lt;li&gt;Pandoc was configured to use a slide level of 2. Hence, the level 2 headings are used as slide titles of new slides.&lt;/li&gt;
&lt;li&gt;Text between level 2 headings appear on slides as content. Any markdown formatting there such as lists and emphasis causes the corresponding formatting changes in the output.&lt;/li&gt;
&lt;li&gt;A line with 3 consecutive dashes (&lt;code&gt;---&lt;/code&gt;) causes a new slide to begin after that point. This isn't used in the example above but is very useful when content needs to be split between slides without creating a title for each slide. Refer to &lt;a href="https://pandoc.org/MANUAL.html#producing-slide-shows-with-pandoc"&gt;Pandoc's user guide&lt;/a&gt; for other directives to insert pauses, incremental lists and speaker notes.&lt;/li&gt;
&lt;/ul&gt;
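&lt;p&gt;For example, with a slide level of 2, the following source yields a title slide plus two content slides, the second created by the &lt;code&gt;---&lt;/code&gt; break (an illustrative snippet, not from our actual deck):&lt;/p&gt;

```markdown
---
title: "Sprint 64"
---

## Sprint backlog - Done!

* First batch of stories

---

* Continuation, on a new slide with no new title
```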

&lt;h2&gt;
  
  
  Customizing Appearance of Generated Slides
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;--reference-doc=template.pptx&lt;/code&gt; argument instructs Pandoc to use a PowerPoint template, which provides some control over layout and formatting. Although Pandoc technically supports templates of your own, it often crashed when I tried using one&lt;sup id="fnref1"&gt;1&lt;/sup&gt;. Hence, I recommend exporting a copy of Pandoc's default PowerPoint template with the following command and using it as a starting point for customization:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# writes Pandoc's default PowerPoint template to template.pptx&lt;/span&gt;
pandoc &lt;span class="nt"&gt;-o&lt;/span&gt; template.pptx &lt;span class="nt"&gt;--print-default-data-file&lt;/span&gt; reference.pptx
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Open this file in PowerPoint and, under the View ribbon, activate Slide Master view.&lt;/p&gt;

&lt;p&gt;The first slide is the master slide. Changing the formatting of the title and of text at the various list levels applies the corresponding formatting to generated slides: a poor man's CSS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IJESdeub--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://neurrone.com/images/slide%2520master.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IJESdeub--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://neurrone.com/images/slide%2520master.png" alt="The Slide Master for our sprint review slides, which lets you adjust formatting for slide titles and text in list levels 1 through 6"&gt;&lt;/a&gt;&lt;br&gt;Here's the Slide Master for our sprint review slides set to use the Arial font instead of the default Calibri. Notice how this matches with the content slides above.
  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qxU6tdor--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://neurrone.com/images/title%2520slide.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qxU6tdor--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://neurrone.com/images/title%2520slide.png" alt="The title slide for our sprint review slides. It can have its main title, subtitle and date placeholder fields customized"&gt;&lt;/a&gt;&lt;br&gt;Similarly, here's our Title Slide layout.
  &lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations
&lt;/h2&gt;

&lt;p&gt;The format of content was kept as simple as possible: essentially a title and content in lists. Pandoc's PowerPoint writer technically supports a 2-column layout and the use of tables. However, we ran into issues because we could not customize the formatting adequately for our needs. For instance, columns in a table could not have different widths, which often caused text to go offscreen.&lt;/p&gt;

&lt;p&gt;Besides PowerPoint, Pandoc also supports creating PDF slides via LaTeX and HTML5 slides via Slideous, Slidy, DZSlides or reveal.js. These formats allow virtually full customization of the generated slides, including the use of JavaScript and CSS for HTML5 presentations.&lt;/p&gt;
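&lt;p&gt;As a sketch of the basic workflow (the file names here are illustrative, not our actual ones), generating a PowerPoint deck from Markdown with a template looks like this:&lt;/p&gt;

```shell
# A minimal Markdown source: with Pandoc's defaults here, each "##"
# heading starts a new slide and list items become its content.
printf '## Completed stories\n\n- Story A\n- Story B\n\n## Carried over\n\n- Story C\n' > slides.md

# --reference-doc applies the template's Slide Master formatting.
# Guarded so the sketch is a no-op where pandoc or the template is absent.
if command -v pandoc >/dev/null; then
  if [ -f template.pptx ]; then
    pandoc slides.md -o slides.pptx --reference-doc=template.pptx
  fi
fi
```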

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We've been automating most of the preparation for our sprint review slides this way for almost a year now, saving an hour each sprint. If you're preparing slides with information from a system that exposes an API or prefer the text editor like I do, I highly recommend trying Pandoc out.&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;This seems to happen if the template file is not compatible with the set of XML directives Pandoc recognizes for PowerPoint files. For example, Pandoc wouldn't accept the template file from a set of slides prepared with Google Slides. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>automation</category>
      <category>markdown</category>
      <category>pandoc</category>
      <category>presentations</category>
    </item>
    <item>
      <title>The "Chicken" Process: How we tackled a lack of Quality Ownership</title>
      <dc:creator>Sze Ying 🌻</dc:creator>
      <pubDate>Sat, 16 May 2020 09:38:46 +0000</pubDate>
      <link>https://dev.to/mcf/the-chicken-process-how-we-tackled-a-lack-of-quality-ownership-103b</link>
      <guid>https://dev.to/mcf/the-chicken-process-how-we-tackled-a-lack-of-quality-ownership-103b</guid>
      <description>&lt;p&gt;A few months ago, we noticed that our sprint spillovers were constantly increasing in size. Upon taking a closer look, we realised that our existing workflow had an unhealthy emphasis on the number of merges into master. This emphasis was causing a false sense of development efficiency while introducing inefficiency in our delivery process.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Problem: We left Quality Ownership to our Quality Engineer (QE)
&lt;/h1&gt;

&lt;p&gt;A closer look at our agile workflow signalled that our Quality ownership was not evenly distributed.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Delivery Stage&lt;/th&gt;
&lt;th&gt;People involved&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Story refinement and prioritisation&lt;/td&gt;
&lt;td&gt;Product Owners, Devs, QE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Development of story on a new Merge Request (MR)&lt;/td&gt;
&lt;td&gt;Devs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Code reviews&lt;/td&gt;
&lt;td&gt;Devs, QE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Merge story into master branch&lt;/td&gt;
&lt;td&gt;Devs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;5&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Ensure master pipeline runs to completion for story&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;QE&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;We use GitLab's CI service for our automated pipeline, which runs on every new commit to the master branch. It runs our unit and integration tests before deploying the new build to our QA and UAT environments.  &lt;/p&gt;
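&lt;p&gt;For illustration, a pipeline of that shape might be configured along these lines (the stage, job and script names here are hypothetical, not our actual configuration):&lt;/p&gt;

```yaml
# Hypothetical .gitlab-ci.yml sketch of the flow described above:
# tests run first, then the build is deployed to QA and UAT.
stages:
  - test
  - deploy

unit-tests:
  stage: test
  script:
    - npm run test:unit

integration-tests:
  stage: test
  script:
    - npm run test:integration

deploy-qa:
  stage: deploy
  environment: qa
  script:
    - ./deploy.sh qa

deploy-uat:
  stage: deploy
  environment: uat
  script:
    - ./deploy.sh uat
```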

&lt;p&gt;In an ideal world, this would be the perfect setup — after all, everything was ~automated~. But here comes the problem: &lt;strong&gt;our integration tests were not stable&lt;/strong&gt;. With this workflow, the burden fell on our QE to investigate and resolve each integration test failure. This task became exponentially harder with multiple MRs being merged at once by different devs, as well as the existence of undocumented flaky tests. It also didn't help that, at the time, our dev-to-QE ratio was 5:1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdrive.google.com%2Fuc%3Fid%3D1DDl3rFSMd6J7JuxdUHtf_Oa8QlERF4Ll" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdrive.google.com%2Fuc%3Fid%3D1DDl3rFSMd6J7JuxdUHtf_Oa8QlERF4Ll" alt="Our old process made our QE very frustrated indeed"&gt;&lt;/a&gt;&lt;/p&gt;
So many merges, yet not one green build...



&lt;p&gt;This caused a bottleneck in our delivery process, which became especially visible at the start of each sprint, when we looked at the spillover from the previous sprint. Our spillover kept growing, with the majority of stories stuck in the testing phase, but since the devs had "finished" all their work in the previous sprint, we continued to take up new stories. &lt;/p&gt;

&lt;p&gt;We were now taking up new stories every sprint, but they were not being delivered to production at the same rate to create value for our users.&lt;/p&gt;

&lt;h1&gt;
  
  
  The "Chicken" process
&lt;/h1&gt;

&lt;p&gt;Our old process went on for a while, frequently causing us to go weeks at a time without a single green build. After many sprints of digging through numerous merges to debug pipeline failures, Jin Jie (the pained QE mentioned above) and Dickson (a developer who was helping our poor QE) from our team came up with the "Chicken" process. These were the rules for the new process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;One Merge at a time&lt;/strong&gt; — Only one MR should be merged into master at any time, with the author of this MR being the "Chicken". &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The "Chicken" owns the pipeline&lt;/strong&gt; — The "Chicken" has to ensure that our CI pipeline runs to completion with a green build for their new merge. If there are any genuine integration test failures due to the new merge, open new MRs to fix them. The "Chicken" is free to open and merge as many MRs needed for a green build.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ownership lies with the "Chicken", but Responsibility is shared by the team&lt;/strong&gt; — The "Chicken" is not expected to fix all pipeline failures by themselves. If there are too many integration test failures, the "Chicken" is free to ask for help from the rest of the team.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdrive.google.com%2Fuc%3Fid%3D1Q3647y56is3U6MJhUsadSxLcdUqrie7z" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdrive.google.com%2Fuc%3Fid%3D1Q3647y56is3U6MJhUsadSxLcdUqrie7z" alt="Our residential Chicken"&gt;&lt;/a&gt;&lt;/p&gt;
Fun fact: The "Chicken" process was named after our resident chicken squeeze toy, which was used to get everyone's attention in the past. These days, it is a real-life semaphore for merging into our master branch!



&lt;p&gt;We have been trying out this new process for a few months now, and have observed some significant benefits.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quality is now a Shared Responsibility
&lt;/h2&gt;

&lt;p&gt;Essentially, the goal of this new process is to eliminate the &lt;em&gt;Somebody Else's Problem&lt;/em&gt; effect. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The Somebody Else's Problem (SEP) field... relies on people's natural predisposition not to see anything they don't want to, weren't expecting, or can't explain"&lt;/p&gt;

&lt;p&gt;— Douglas Adams in Life, the Universe and Everything&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The "Chicken" process helped to mitigate the SEP field in our team with the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Greater visibility on our processes&lt;/strong&gt; — Now that the definition of "Finished" includes ensuring that the corresponding merge passes the full pipeline, we are more inclined to learn what happens after merging our MRs into the master branch. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;More context of the codebase&lt;/strong&gt; — Investigating the codebase to fix tests gave us the opportunity to learn more about some of the long-lost contextual knowledge behind the implementation of older components. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Rather than having a single QE shoulder the quality burden, quality ownership was now a shared responsibility.&lt;/p&gt;

&lt;h2&gt;
  
  
  Greater Code Standards
&lt;/h2&gt;

&lt;p&gt;We also saw an improvement in our code standards:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Greater focus on code quality&lt;/strong&gt; — Having to experience the pain of investigating integration test failures, we have become more mindful of whether each MR we author will break existing integration tests. The old "merge and forget" mentality was now thrown out of the window; there is greater focus on writing reliable code and tests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Discovering old bugs hidden in the codebase&lt;/strong&gt; — With more effort in investigating test failures, we have also uncovered several bugs that were not caught during the development and review processes. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fewer flaky tests&lt;/strong&gt; — We spent a huge amount of time in investigating some test failures, only to realise that they were flaky. In other words, these tests would only sometimes fail, and the failures were largely due to the choice of testing strategy rather than incorrect implementation. This has motivated the team to work together to improve the overall reliability of our tests, rather than just clicking "retry" N times.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Improved Delivery Efficiency
&lt;/h2&gt;

&lt;p&gt;Before the "chicken" process, we used to see 1-2 green builds per two-week sprint. Now, we are seeing 1-2 green builds every two days. On good days, we might even get up to 3 green builds in a day! 🚀 This was all possible due to&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Resource reallocation to remove the bottleneck&lt;/strong&gt; — Rather than one QE investigating test failures for every single merge, the whole team was now invested in doing so together. This allowed us to deliver stories at a rate closer to the rate at which we pick them up.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debugging is easier when the diff is smaller&lt;/strong&gt; — With only one merge going into the master branch at once, it was easier to inspect the diff due to the new merge. This made it a lot quicker to pinpoint where the failure point was when the pipeline failed on a new merge.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdrive.google.com%2Fuc%3Fid%3D1g5rq4B_Xtw6Tl90xL4Xt6ppnRCYfSWVH" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdrive.google.com%2Fuc%3Fid%3D1g5rq4B_Xtw6Tl90xL4Xt6ppnRCYfSWVH" alt="Before vs Now: What merging to master looks like"&gt;&lt;/a&gt;&lt;/p&gt;
Our average number of green builds has increased!






&lt;p&gt;Since adopting this new process, we have also made several modifications to it. One example is allowing multiple merges at once as long as there is still at least one person appointed as the "chicken". This modification helped us reap the benefits of the "chicken" process while also catering to our larger team size. We hope to continue improving on this process as our team's needs evolve. &lt;/p&gt;

&lt;p&gt;Thank you &lt;a href="https://www.linkedin.com/in/ong-jin-jie-4607019b" rel="noopener noreferrer"&gt;Jin Jie&lt;/a&gt; and &lt;a href="https://dev.to/neurrone"&gt;Dickson&lt;/a&gt; for your great efforts in improving our processes and also &lt;a href="https://joeir.net/" rel="noopener noreferrer"&gt;Joseph&lt;/a&gt; for providing feedback on this piece! Also thank you Jin Jie for providing the diagrams.&lt;/p&gt;

&lt;p&gt;If you are now intrigued to work in a team like ours: we are hiring! Drop me an email @ &lt;a href="mailto:ting_sze_ying@tech.gov.sg"&gt;ting_sze_ying@tech.gov.sg&lt;/a&gt; if you are interested in finding out more 🌈&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>agile</category>
      <category>codequality</category>
    </item>
    <item>
      <title>Run remote retrospectives using Trello</title>
      <dc:creator>Andrey Bodoev</dc:creator>
      <pubDate>Thu, 07 May 2020 11:39:05 +0000</pubDate>
      <link>https://dev.to/mcf/run-remote-retrospectives-using-trello-30ha</link>
      <guid>https://dev.to/mcf/run-remote-retrospectives-using-trello-30ha</guid>
      <description>&lt;p&gt;Here's quick walkthrough how we run retrospectives using Trello boards. It's fairly simple workflow, and can be adapted with easy. &lt;/p&gt;

&lt;p&gt;A few things to add:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;this approach turns out to work well in both remote and in-office environments;&lt;/li&gt;
&lt;li&gt;it can be highly customized depending on your needs;&lt;/li&gt;
&lt;li&gt;I didn't mention this in the video, but it removes the friction of post-its by using cards instead, and makes it possible to raise issues for discussion ahead of the retrospective.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm using Atlassian's &lt;a href="https://www.atlassian.com/team-playbook/plays/retrospective"&gt;Playbook framework&lt;/a&gt; as an example for this board.&lt;/p&gt;




&lt;p&gt;If you have any questions, please let me know in the comments below. &lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>trello</category>
      <category>atlassian</category>
      <category>retrospective</category>
    </item>
    <item>
      <title>Hidden power of Commit Guidelines</title>
      <dc:creator>Andrey Bodoev</dc:creator>
      <pubDate>Sun, 03 May 2020 10:40:47 +0000</pubDate>
      <link>https://dev.to/mcf/hidden-power-of-commit-guidelines-1b24</link>
      <guid>https://dev.to/mcf/hidden-power-of-commit-guidelines-1b24</guid>
      <description>&lt;p&gt;In our projects we adapted Commit Guidelines, with more or less standard variation of &lt;a href="https://github.com/angular/angular.js/blob/master/DEVELOPERS.md#-git-commit-guidelines"&gt;Angular Commit Guidelines&lt;/a&gt;. It deliver what it promise, &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This leads to more readable messages that are easy to follow when looking through the project history. But also, we use the git commit messages to generate the change log.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Since adopting these guidelines, I have discovered one powerful effect on developer growth.&lt;/p&gt;

&lt;p&gt;It changes how you start &lt;strong&gt;thinking about the code changes&lt;/strong&gt; you commit to the codebase. Questions like these start bubbling up in your head: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does this change belong to this commit?&lt;/li&gt;
&lt;li&gt;What's the clear intent of this change?&lt;/li&gt;
&lt;li&gt;What reasoning and thoughts can I put in the message?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And so on.&lt;/p&gt;

&lt;p&gt;What happens now is that each commit represents a single &lt;strong&gt;type of change&lt;/strong&gt;, with a clear intent encapsulated in the commit message. &lt;/p&gt;

&lt;p&gt;Suddenly, you start reading &lt;code&gt;git log&lt;/code&gt; (yes, for real), and if you need to compare log histories you can do so by simply looking at the titles of the commit messages.&lt;/p&gt;

&lt;p&gt;Here's one example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git log --pretty="%n    %s" --name-only

    test: has Cancel button, to check both confirm branches

cypress/integration/FunctionalTesting_Suite/CompanyProfilePage/company_profile_page.spec.js

    refactor: move Cancel button to CompanyProfile components

src/components/CompanyProfile/CancelButtonWithConfirmation.scss
src/components/CompanyProfile/CancelButtonWithConfirmation.tsx
src/components/CompanyProfile/CancelButtonWithConfirmationContainer.tsx
src/pages/CompanyProfile/CompanyProfile.tsx

    feat: Employer - Company profile page Cancel button

src/pages/CompanyProfile/CancelButtonWithConfirmation.scss
src/pages/CompanyProfile/CancelButtonWithConfirmation.tsx
src/pages/CompanyProfile/CancelButtonWithConfirmationContainer.tsx
src/pages/CompanyProfile/CompanyProfile.tsx
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;From the &lt;code&gt;git log&lt;/code&gt; you can tell that I finished the feature, did some refactoring, and added integration tests afterwards. That was a thoughtful workflow.&lt;/p&gt;

&lt;p&gt;Can you tell the same story by looking at the example below?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git log --pretty="%n    %s" --name-only

    Changed scss

src/pages/CompanyProfile/CancelButtonWithConfirmation.scss

    Add Cancel button

src/pages/CompanyProfile/CancelButtonWithConfirmation.scss

    OK it doesn't work, forgot component. LOL

src/pages/CompanyProfile/CancelButtonWithConfirmation.scss
src/pages/CompanyProfile/CancelButtonWithConfirmation.tsx

    Tests

src/pages/CompanyProfile/CancelButtonWithConfirmationContainer.tsx
src/pages/CompanyProfile/CompanyProfile.tsx
cypress/integration/FunctionalTesting_Suite/CompanyProfilePage/company_profile_page.spec.js

    Is it working yet?

src/pages/CompanyProfile/CompanyProfile.tsx
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Of course, you can't look at this without crying. You may end up shamefully doing a &lt;code&gt;git rebase&lt;/code&gt; to squash your commits and hide your crimes of uncertainty. &lt;/p&gt;




&lt;p&gt;To start &lt;a href="https://www.thoughtworks.com/radar/tools?blipid=201911081"&gt;adopting&lt;/a&gt; commit guidelines, I would recommend looking at this project: &lt;a href="http://commitizen.github.io/cz-cli/"&gt;http://commitizen.github.io/cz-cli/&lt;/a&gt;&lt;/p&gt;
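&lt;p&gt;And if you want to see the effect before reaching for any tooling, a throwaway repo is enough (a minimal sketch; the repo and messages below are made up):&lt;/p&gt;

```shell
# Scratch repo with two commits following the "type: subject" convention
# (empty commits, purely to look at the resulting log).
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=Dev -c user.email=dev@example.com \
  commit -q --allow-empty -m "feat: add Cancel button"
git -c user.name=Dev -c user.email=dev@example.com \
  commit -q --allow-empty -m "test: cover both confirm branches"

# The titles alone already tell the story of the work, newest first.
git log --pretty="%s" > log.txt
cat log.txt
```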

</description>
      <category>git</category>
      <category>angular</category>
      <category>commitizen</category>
      <category>guideline</category>
    </item>
    <item>
      <title>Code Review: Prereview checklist. No important tasks are forgotten.</title>
      <dc:creator>Andrey Bodoev</dc:creator>
      <pubDate>Sun, 26 Apr 2020 06:26:36 +0000</pubDate>
      <link>https://dev.to/mcf/code-review-prereview-checklist-no-important-tasks-are-forgotten-4ekh</link>
      <guid>https://dev.to/mcf/code-review-prereview-checklist-no-important-tasks-are-forgotten-4ekh</guid>
      <description>&lt;p&gt;Here's my rephrase on &lt;a href="https://en.wikipedia.org/wiki/Preflight_checklist"&gt;preflight checklist&lt;/a&gt;,&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;a &lt;strong&gt;prereview checklist&lt;/strong&gt; is a list of tasks that should be performed by a developer or team prior to raising a Merge Request &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Every time we discuss changes in code review, we may find ourselves talking about the same things over and over again. For example, asking in the comments to add testing instructions or a Pivotal story number for reference. &lt;/p&gt;

&lt;p&gt;To prevent that, we need a little reminder to ourselves: a list of things to check before calling attention to our changes.&lt;/p&gt;

&lt;p&gt;In our team we have checklists built into our &lt;a href="https://docs.gitlab.com/ee/user/project/description_templates.html"&gt;Merge Request templates&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here's an example of one of our &lt;a href="https://gitlab.com/mycf.sg/ui-employer/-/tree/master/.gitlab/merge_request_templates"&gt;templates&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!-- Make sure Merge Request title following "&amp;lt;type&amp;gt;: &amp;lt;subject&amp;gt; [#&amp;lt;pivotal_story_id&amp;gt;]" --&amp;gt;

&amp;lt;!-- Brief description. Examples are, screenshots, steps to reproduce, links to dependent MRs --&amp;gt;

### General Checklist:

- [ ] Exact versions in package.json
- [ ] Testing instructions?
- [ ] Docs? Examples are, update `README.md` file, or add ADR in `doc/adr`
- [ ] Tests?
- [ ] Put in copy at least two potential reviewers

/cc

/label ~"Review Me"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The structure of this template is self-explanatory; one thing I want to point out is the last line, &lt;code&gt;/label ~"Review Me"&lt;/code&gt;, on which I have already written an &lt;a href="https://dev.to/mcf/code-review-review-labels-in-gitlab-1ola"&gt;article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And don't forget to treat checklists as living documentation: new items can be added or removed over time.&lt;/p&gt;

&lt;p&gt;Its purpose is to improve code review safety by ensuring that no important tasks are forgotten.&lt;/p&gt;

</description>
      <category>checklist</category>
      <category>codereview</category>
      <category>markdown</category>
      <category>templates</category>
    </item>
    <item>
      <title>How we use Google Sheets API to Manage Our Notification Banners</title>
      <dc:creator>Joseph Matthias Goh</dc:creator>
      <pubDate>Fri, 24 Apr 2020 10:52:39 +0000</pubDate>
      <link>https://dev.to/mcf/how-we-use-google-sheets-api-to-manage-our-notification-banners-3jn2</link>
      <guid>https://dev.to/mcf/how-we-use-google-sheets-api-to-manage-our-notification-banners-3jn2</guid>
      <description>&lt;p&gt;If you've visited our site before, you'd have noticed that banner at the top of our page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqcx5xadx81grglmeqyq3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqcx5xadx81grglmeqyq3.jpg" alt="The header banner at the top of MyCareersFuture.SG"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or you might have noticed the inline banner linking to the &lt;strong&gt;#SGUnitedJobs&lt;/strong&gt; virtual career fair site:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1te0zfbflm3tt2tlx54o.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1te0zfbflm3tt2tlx54o.jpg" alt="The inline banner in the middle of our landing page at MyCareersFuture.SG"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There was a time when we didn't have any of these, so what happened?&lt;/p&gt;

&lt;h1&gt;
  
  
  The Problem Domain
&lt;/h1&gt;

&lt;p&gt;The header banner has been in place for some time now, and it was created to meet a few needs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We wanted a way to notify users about new features we've released&lt;/li&gt;
&lt;li&gt;We wanted a way to notify users of scheduled downtimes because of systems that we're integrating with&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;More recently, to support whole-of-government efforts to reduce the impact on our workforce, the agency we're servicing - Workforce Singapore - decided to launch an &lt;strong&gt;#SGUnitedJobs&lt;/strong&gt; virtual career fair. And there was a slight issue.&lt;/p&gt;

&lt;p&gt;In the media release, the press quoted the words "MyCareersFuture". This led to the general public landing on our site instead of the virtual career fair's site, resulting in &lt;strong&gt;a need to redirect users to the correct site&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This meant some updates to our landing page notification/updates elements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The header banner needed to be clickable&lt;/li&gt;
&lt;li&gt;An in-page element needed to be created&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In addition to addressing users' problems, our solution also needed to accommodate use cases from our business users, who needed to be able to &lt;strong&gt;make text changes without a new system deployment&lt;/strong&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Solution
&lt;/h1&gt;

&lt;p&gt;We begin with the end.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6d0eto3lh2ex41i32bpn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6d0eto3lh2ex41i32bpn.jpg" alt="An image of the sheet in Google Sheets that our business elements use to configure the banners"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Google Sheets was a nice interface between business and technical elements for us. Business users could enter information as it came along without interfering with code-level development, while the spreadsheet ran custom code to generate a JSON output that our system could consume.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F21lio8b6veemafmiw56f.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F21lio8b6veemafmiw56f.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"Wait, can Google sheets do that?"&lt;/em&gt; I hear you ask.&lt;/p&gt;

&lt;h1&gt;
  
  
  Setting up Google Sheets as a JSON API
&lt;/h1&gt;

&lt;p&gt;Let's set up a simple Google Sheet that you can use as an API, similar in nature to what we've done.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In MCF's backend, we consume data from Google Sheets, run validations on it before throwing it back out as a &lt;code&gt;.json&lt;/code&gt; file which we push to our CDNs so that these can be consumed by our users without placing a load on our servers.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  1. Create a new spreadsheet
&lt;/h2&gt;

&lt;p&gt;Go to &lt;a href="https://drive.google.com" rel="noopener noreferrer"&gt;https://drive.google.com&lt;/a&gt; and create a new spreadsheet. Maybe create a table like:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;ID&lt;/th&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Username&lt;/th&gt;
&lt;th&gt;Email&lt;/th&gt;
&lt;th&gt;JSON Output&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Joseph&lt;/td&gt;
&lt;td&gt;joseph&lt;/td&gt;
&lt;td&gt;&lt;a href="mailto:j@seph.com"&gt;j@seph.com&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Matthias&lt;/td&gt;
&lt;td&gt;matthias&lt;/td&gt;
&lt;td&gt;&lt;a href="mailto:m@tthias.com"&gt;m@tthias.com&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Goh&lt;/td&gt;
&lt;td&gt;goh&lt;/td&gt;
&lt;td&gt;&lt;a href="mailto:g@h.com"&gt;g@h.com&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;PROTIP: The table above works like copypasta with Google Sheets if you copy it properly&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  2. Publish to the web
&lt;/h2&gt;

&lt;p&gt;Go to the top navigation bar, and access &lt;strong&gt;File&lt;/strong&gt; &amp;gt; &lt;strong&gt;Publish to the web&lt;/strong&gt;. Confirm that &lt;strong&gt;Link&lt;/strong&gt; is selected and select &lt;strong&gt;Sheet1&lt;/strong&gt;. Change the type to &lt;strong&gt;Comma-separated values (.csv)&lt;/strong&gt;. Click the &lt;strong&gt;Publish&lt;/strong&gt; button and say OK.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgqpjyegeawvuc67tjc9q.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgqpjyegeawvuc67tjc9q.jpg" alt="(Are you sure you want to publish this selection?) Yasss, yass, yas"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A link should be provided to you. Test it out by pasting it into the address bar of your browser. A &lt;code&gt;.csv&lt;/code&gt; file should be downloaded. Opening it up should reveal (if you've entered the information as above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ID,Name,Username,Email
1,Joseph,joseph,j@seph.com
2,Matthias,matthias,m@tthias.com
3,Goh,goh,g@h.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
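&lt;p&gt;To see the CSV-to-JSON idea in miniature, here's a command-line sketch that turns those same rows into one JSON object per row (purely illustrative; our actual conversion happens in the Script Editor below):&lt;/p&gt;

```shell
# Recreate the published CSV locally.
printf 'ID,Name,Username,Email\n1,Joseph,joseph,j@seph.com\n2,Matthias,matthias,m@tthias.com\n3,Goh,goh,g@h.com\n' > users.csv

# Emit one JSON object per data row, fields matching the sheet's columns.
awk -F, 'NR > 1 {
  printf "{\"id\": %s, \"name\": \"%s\", \"username\": \"%s\", \"email\": \"%s\"}\n", $1, $2, $3, $4
}' users.csv > users.json

cat users.json
```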



&lt;h2&gt;
  
  
  3. Making it JSON
&lt;/h2&gt;

&lt;p&gt;We'll be inserting the JSON as values in the empty &lt;strong&gt;JSON Output&lt;/strong&gt; column you've copied above using the Script Editor. In the header navigation menu, go to &lt;strong&gt;Tools&lt;/strong&gt; &amp;gt; &lt;strong&gt;Script editor&lt;/strong&gt;. A new &lt;code&gt;Code.gs&lt;/code&gt; should be waiting for you.&lt;/p&gt;

&lt;p&gt;Overwrite the generated code and paste in the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;onEdit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;editedRow&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;range&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRow&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;activeSheet&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;SpreadsheetApp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getActiveSheet&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sheetName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;activeSheet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getName&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;switch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sheetName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Sheet1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jsonData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createUserJSON&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;activeSheet&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;editedRow&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cell&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;activeSheet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;editedRow&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="nx"&gt;cell&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setValue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;jsonData&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; 
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;createUserJSON&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sheet&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;editedRow&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userJSON&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sheet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;editedRow&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;getValue&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sheet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;editedRow&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;getValue&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sheet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;editedRow&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;getValue&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sheet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;editedRow&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;getValue&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;userJSON&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Google Script is essentially JavaScript. Also, if you've changed your sheet name, you might want to replace the &lt;code&gt;"Sheet1"&lt;/code&gt; in the switch-case branch to the name of your sheet.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The above code is triggered on an edit to the sheet. It checks whether the edited sheet is named &lt;strong&gt;Sheet1&lt;/strong&gt; and if it is, generates a JSON string and pastes it into the 5th column of the row being edited. Save it by going to &lt;strong&gt;File&lt;/strong&gt; &amp;gt; &lt;strong&gt;Save&lt;/strong&gt;. The name shouldn't matter.&lt;/p&gt;

&lt;p&gt;Go back to your sheets and edit one of the existing values. On removing your focus from that cell, you should see a JSON value appear in the 5th column.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Consuming the JSON
&lt;/h2&gt;

&lt;p&gt;Grab the link from the &lt;strong&gt;Publish to the web&lt;/strong&gt; stage; we'll use it to fetch our JSON outputs. Your original link should look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://docs.google.com/spreadsheets/d/e/2PACX-XXX/pub?gid=YYY&amp;amp;single=true&amp;amp;output=csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Confirm you've made an edit so that the JSON appears, then run a &lt;code&gt;curl&lt;/code&gt; to see what the response looks like now. In your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="s1"&gt;'https://docs.google.com/spreadsheets/d/e/2PACX-XXX/pub?gid=YYY&amp;amp;single=true&amp;amp;output=csv'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should receive something similar to the following as your response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ID,Name,Username,Email,JSON Output
1,Joseph,joseph,j@seph.com,"{""id"":1,""name"":""Joseph"",""username"":""joseph"",""email"":""j@seph.com""}"
2,Matthias,matthias,m@tthias.com,
3,Goh,goh,g@h.com,
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
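&lt;p&gt;A quick aside on those doubled quotes: that's standard CSV escaping of the quote characters inside the JSON cell. Here's a minimal sketch of how a consumer could undo it (the cell value below is copied from the sample response above):&lt;/p&gt;

```javascript
// Sketch: the doubled quotes in the last column are standard CSV escaping.
// A consumer can recover the JSON from the quoted cell like this.
const csvField = '"{""id"":1,""name"":""Joseph""}"'; // the quoted cell as curl returns it

// Strip the outer quotes, then collapse each doubled quote back to a single one.
const json = csvField.slice(1, -1).replace(/""/g, '"');
const user = JSON.parse(json);
// user.id is 1, user.name is "Joseph"
```

A proper CSV parser handles this (and commas inside fields) for you; the sketch only shows what the escaping means.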



&lt;p&gt;We're almost there!&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Tying It All Up
&lt;/h2&gt;

&lt;p&gt;Now that we can generate the JSON, go ahead and update the other rows so that all of them have a value in the JSON Output column.&lt;/p&gt;

&lt;p&gt;In this step, we'll link all the JSON Output values into a new sheet which we can use as the data source in a service.&lt;/p&gt;

&lt;p&gt;Add a &lt;strong&gt;New Sheet&lt;/strong&gt; by clicking the plus (&lt;code&gt;+&lt;/code&gt;) symbol at the bottom left of the existing spreadsheet. Name it &lt;strong&gt;Output&lt;/strong&gt;. We'll be writing the aggregated JSON Output to row 1 column 1 of this sheet.&lt;/p&gt;

&lt;p&gt;Go to &lt;strong&gt;Tools&lt;/strong&gt; &amp;gt; &lt;strong&gt;Script editor&lt;/strong&gt; once again and modify the script there so that it looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;onEdit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;editedRow&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;range&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRow&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;activeSheet&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;SpreadsheetApp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getActiveSheet&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sheetName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;activeSheet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getName&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;switch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sheetName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Sheet1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jsonData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createUserJSON&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;activeSheet&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;editedRow&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cell&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;activeSheet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;editedRow&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="nx"&gt;cell&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setValue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;jsonData&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// &amp;gt; diff starts here&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jsonOutputs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;activeSheet&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;E2:E&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getValues&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;val&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;val&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;aggergatedUsers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
  &lt;span class="nx"&gt;jsonOutputs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="nx"&gt;aggergatedUsers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;outputSheet&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;SpreadsheetApp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getActive&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;getSheetByName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Output&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;outputCell&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;outputSheet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;outputCell&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setValue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;aggergatedUsers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="c1"&gt;// / diff ends here&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;createUserJSON&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sheet&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;editedRow&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userJSON&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sheet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;editedRow&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;getValue&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sheet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;editedRow&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;getValue&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sheet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;editedRow&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;getValue&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sheet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;editedRow&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;getValue&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;userJSON&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What we just added retrieves the values of all rows in column E, which in our sample sheet corresponds to the &lt;strong&gt;JSON Output&lt;/strong&gt; column. It then parses the retrieved values into an array of objects before converting that array back into a string with &lt;code&gt;JSON.stringify&lt;/code&gt;. The &lt;code&gt;null, 2&lt;/code&gt; arguments in the &lt;code&gt;JSON.stringify&lt;/code&gt; call tell it to format the JSON in a human-readable way with two-space indentation (&lt;em&gt;see the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify" rel="noopener noreferrer"&gt;JSON.stringify documentation on MDN&lt;/a&gt; if you're interested in exactly what these arguments do&lt;/em&gt;).&lt;/p&gt;
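&lt;p&gt;If you want to play with the aggregation step outside of Apps Script, it can be sketched in plain JavaScript. &lt;code&gt;getValues()&lt;/code&gt; on a single column returns one single-element array per row, so the shape below mirrors that; the sample values are made up:&lt;/p&gt;

```javascript
// Plain-JavaScript sketch of the aggregation step (sample values are made up).
// getValues() on a single column returns one single-element array per row.
const columnE = [
  ['{"id":1,"name":"Joseph"}'],
  [''], // rows without a JSON Output yet come back as empty strings
  ['{"id":3,"name":"Goh"}'],
];

const aggregatedUsers = columnE
  .filter((row) => row[0].length !== 0) // skip rows with no output yet
  .map((row) => JSON.parse(row[0]));    // turn each cell back into an object

// null, 2 pretty-prints the array with two-space indentation
const output = JSON.stringify(aggregatedUsers, null, 2);
```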

&lt;p&gt;After you've made the change, save the Apps Script project, head back to your sheet, and make a modification to one of the columns to trigger an output to the &lt;strong&gt;Output&lt;/strong&gt; sheet in your spreadsheet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhi5u3ohyq35fstwk1gj4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhi5u3ohyq35fstwk1gj4.jpg" alt="The final JSON output in the Output sheet"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to &lt;strong&gt;File&lt;/strong&gt; &amp;gt; &lt;strong&gt;Publish to the web&lt;/strong&gt; once again, this time selecting the &lt;strong&gt;Output&lt;/strong&gt; sheet and choosing &lt;strong&gt;Tab-separated values (.tsv)&lt;/strong&gt; as the type. In the &lt;strong&gt;Published content and settings&lt;/strong&gt; section, ensure that &lt;strong&gt;Output&lt;/strong&gt; is also being published and that the &lt;strong&gt;Automatically republish when changes are made&lt;/strong&gt; checkbox is checked.&lt;/p&gt;

&lt;p&gt;Copy the provided link, which should look like the following (sensitive values are masked with &lt;code&gt;XXX&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://docs.google.com/spreadsheets/d/e/2PACX-XXX/pub?gid=YYY&amp;amp;single=true&amp;amp;output=tsv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To make it always return just the first row and column, append &lt;code&gt;&amp;amp;range=A1&lt;/code&gt; so that the final link looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://docs.google.com/spreadsheets/d/e/2PACX-XXX/pub?gid=YYY&amp;amp;single=true&amp;amp;output=tsv&amp;amp;range=A1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's test this out with a &lt;code&gt;curl&lt;/code&gt;, which mimics what your HTTP client would do when it calls this URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-vv&lt;/span&gt; &lt;span class="s1"&gt;'https://docs.google.com/spreadsheets/d/e/2PACX-XXX/pub?gid=YYY&amp;amp;single=true&amp;amp;output=tsv&amp;amp;range=A1'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify that the response is valid JSON, we can pipe it through a tool called &lt;code&gt;jq&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-vv&lt;/span&gt; &lt;span class="s1"&gt;'https://docs.google.com/spreadsheets/d/e/2PACX-XXX/pub?gid=YYY&amp;amp;single=true&amp;amp;output=tsv&amp;amp;range=A1'&lt;/span&gt; | jq &lt;span class="s1"&gt;'.'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We're done. Aren't we?&lt;/p&gt;

&lt;h2&gt;
  
  
  6. What if something goes wrong?
&lt;/h2&gt;

&lt;p&gt;As with all code, things can and will go wrong during development. The good news is that Google provides a page where we can view errors in our script: from the &lt;code&gt;Code.gs&lt;/code&gt; editor, go to &lt;strong&gt;View&lt;/strong&gt; &amp;gt; &lt;strong&gt;Executions&lt;/strong&gt;, which opens the script's Google Apps Script dashboard.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note that only calls to &lt;code&gt;console.error&lt;/code&gt; go through to this dashboard.&lt;/p&gt;
&lt;/blockquote&gt;
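&lt;p&gt;One pattern that plays well with this dashboard is wrapping your handler so any failure is logged with &lt;code&gt;console.error&lt;/code&gt; before it propagates. Here's a sketch (&lt;code&gt;safeHandler&lt;/code&gt; is a made-up helper name, not an Apps Script API):&lt;/p&gt;

```javascript
// Sketch: wrap an edit handler so failures are logged via console.error,
// which is what the Executions dashboard surfaces. safeHandler is a
// made-up helper name, not part of the Apps Script API.
function safeHandler(handler) {
  return function (e) {
    try {
      return handler(e);
    } catch (err) {
      console.error('Handler failed: ' + err.message); // shows up in Executions
      throw err; // rethrow so the execution is still marked as failed
    }
  };
}

// Usage sketch: const onEditSafe = safeHandler(onEdit);
```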

&lt;p&gt;Looks like we're done for real!&lt;/p&gt;




&lt;p&gt;If you like what you just read, don't forget to leave some reactions/comments so we know this has been interesting for you, and do consider following us for more insights into the tools we use and our development processes.&lt;/p&gt;

&lt;p&gt;Cheers and till next time!&lt;/p&gt;

</description>
      <category>api</category>
      <category>googlesheets</category>
      <category>business</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Code Review: Review labels in GitLab</title>
      <dc:creator>Andrey Bodoev</dc:creator>
      <pubDate>Sun, 19 Apr 2020 12:27:24 +0000</pubDate>
      <link>https://dev.to/mcf/code-review-review-labels-in-gitlab-1ola</link>
      <guid>https://dev.to/mcf/code-review-review-labels-in-gitlab-1ola</guid>
      <description>&lt;p&gt;We use GitLab to host our code. One of the aspects of it is deal with code reviews on daily basis. &lt;/p&gt;

&lt;p&gt;Our team has about 10 devs. In a product team it's quite difficult to get someone's attention for a code review: most of the time everyone is focused on their own work, hoping that someone will review their changes.&lt;/p&gt;

&lt;p&gt;Initially, we had this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S5TFDES3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/78qj4p0y5yilc2icz81g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S5TFDES3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/78qj4p0y5yilc2icz81g.png" alt="GitLab without labels"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's the issue: there's no way to tell whether someone is reviewing your code, or whether you have enough reviewers. The number of comments doesn't tell you much, and &lt;code&gt;0 of 2&lt;/code&gt; is not really helpful here.&lt;/p&gt;




&lt;p&gt;Our code review guidelines have a few rules; one of them is that your Merge Request (MR) must be reviewed by at least two people on the team.&lt;/p&gt;

&lt;p&gt;There are a few concerns to address:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how to make it visible that someone is reviewing your MR;&lt;/li&gt;
&lt;li&gt;who is reviewing what, and whether they have finished;&lt;/li&gt;
&lt;li&gt;whether you have enough reviewers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;GitLab has its &lt;a href="https://docs.gitlab.com/ee/user/project/merge_requests/merge_request_approvals.html"&gt;own approval system&lt;/a&gt; in place, along with code review guidelines. It does work, but we decided to tweak things a bit to improve communication within the team.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---5YMbHX9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/exljqoxdyanzc8qamo3z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---5YMbHX9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/exljqoxdyanzc8qamo3z.png" alt="GitLab with labels"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are a few types of labels we introduced:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Review Me&lt;/code&gt;: this label draws attention to an MR and states that it's ready to be reviewed; otherwise it can be &lt;code&gt;wip&lt;/code&gt;;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Review by Paul Atreides&lt;/code&gt;: each reviewer gets their own label, assigned once they start commenting;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Approved by Padishah Emperor Shaddam IV&lt;/code&gt;: each reviewer's label switches to this once their review is done and the comments are addressed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this in practice, when you visit the list of waiting MRs, you no longer have to open each one to find out whether it's been reviewed by someone else and is ready to be merged. It greatly improves the visibility of each Merge Request's state.&lt;/p&gt;

</description>
      <category>gitlab</category>
      <category>codereview</category>
      <category>labels</category>
    </item>
    <item>
      <title>Code Review: Name file after exposed function</title>
      <dc:creator>Andrey Bodoev</dc:creator>
      <pubDate>Fri, 10 Apr 2020 11:15:24 +0000</pubDate>
      <link>https://dev.to/mcf/name-file-after-exposed-function-5de1</link>
      <guid>https://dev.to/mcf/name-file-after-exposed-function-5de1</guid>
      <description>&lt;p&gt;In code reviews which we conduct in our team, I might find some examples of code, which is can be misleading or with no clear intent. This post is about to share reasoning and hear critique on some of these examples. &lt;/p&gt;

&lt;h1&gt;
  
  
  Name file after exposed function
&lt;/h1&gt;

&lt;p&gt;One such case is having several functions in one file; they may be related, or completely unrelated, but somehow they're grouped into one file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;get/
  index.ts
    export getSomething :: Int -&amp;gt; Int
    export getSomethingElse :: String -&amp;gt; String
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few things we can tell immediately:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;please don't use &lt;code&gt;index.ts&lt;/code&gt;, &lt;a href="https://www.youtube.com/watch?v=M3BM9TB-8yA&amp;amp;vl=en"&gt;https://www.youtube.com/watch?v=M3BM9TB-8yA&amp;amp;vl=en&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;both the &lt;code&gt;get&lt;/code&gt; directory and &lt;code&gt;index.ts&lt;/code&gt; give no context whatsoever.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead, try naming files after the exposed function, moving each one into its own file, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;getSomething.ts
  export getSomething :: Int -&amp;gt; Int

getSomethingElse.ts
  export getSomethingElse :: String -&amp;gt; String
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The filename alone shows the intent.&lt;/p&gt;
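&lt;p&gt;As a concrete sketch of the refactor (the function bodies are invented for illustration; the post only gives the signatures):&lt;/p&gt;

```javascript
// getSomething.js — named after the single function it exposes.
// The body is invented; the post only gives the signature Int -> Int.
function getSomething(n) {
  return n + 1;
}
// module.exports = getSomething;

// getSomethingElse.js — its own file in a real project; String -> String.
function getSomethingElse(s) {
  return s.trim();
}
// module.exports = getSomethingElse;
```

Each file exposes exactly one function, so finding the code behind a call site is just a matter of opening the file with the same name.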

&lt;p&gt;A few side effects you may discover later:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Each function is enclosed in its own file, ensuring there are no shared variables, which leads to proper unit tests and better maintainability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After this change, there's a powerful way to move quickly through the project structure: you open files named after the function instead of scanning the search output of your text editor.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Takeaways
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Name file after exposed function;&lt;/li&gt;
&lt;li&gt;Break code into small modules for unit testing and maintainability;&lt;/li&gt;
&lt;li&gt;Browse code by project structure rather than within a single file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One of the great examples in the wild: &lt;a href="https://github.com/lodash/lodash/tree/master/"&gt;https://github.com/lodash/lodash/tree/master/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>refactorit</category>
      <category>javascript</category>
      <category>codereview</category>
    </item>
  </channel>
</rss>
