<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alugbin Abiodun Olutola</title>
    <description>The latest articles on DEV Community by Alugbin Abiodun Olutola (@lordrahl90).</description>
    <link>https://dev.to/lordrahl90</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F46115%2F0d03d0a7-5bc2-4c3b-b4e7-9699e0957ccb.jpeg</url>
      <title>DEV Community: Alugbin Abiodun Olutola</title>
      <link>https://dev.to/lordrahl90</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lordrahl90"/>
    <language>en</language>
    <item>
      <title>My Simple Github Actions CI/CD Pipeline:</title>
      <dc:creator>Alugbin Abiodun Olutola</dc:creator>
      <pubDate>Wed, 08 Jun 2022 00:00:52 +0000</pubDate>
      <link>https://dev.to/lordrahl90/my-simple-github-actions-cicd-pipeline-nmd</link>
      <guid>https://dev.to/lordrahl90/my-simple-github-actions-cicd-pipeline-nmd</guid>
      <description>&lt;p&gt;A little story of how I combined 2 repositories with github actions' &lt;code&gt;repository_dispatch&lt;/code&gt; webhook to manage my deployment pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Background.
&lt;/h2&gt;

&lt;p&gt;I have always wanted to deploy a new image to Kubernetes once my build is successful, but I always needed to update the deployment file manually to achieve this.&lt;/p&gt;

&lt;p&gt;I started using &lt;a href="https://argo-cd.readthedocs.io/en/stable/#:~:text=Argo%20CD%20is%20implemented%20as,target%20state%20is%20considered%20OutOfSync%20."&gt;ArgoCD&lt;/a&gt; recently, and I enjoy the simplicity and the insights I get from the UI. The only problem was that I needed to initialize it from my GitHub repo. My repository is private, and I ran into some challenges with that, but that's a story for another time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenge
&lt;/h2&gt;

&lt;p&gt;The challenge here is that I want my deployment repository to be separate from my application (app) repository (repo). Also, because I am using branches in ArgoCD, I don't get to run my tests or build my image before the changes get picked up by ArgoCD. For these reasons, it makes sense to separate the repos and find a way to notify the deployment repo once the app repo has completed all the checks and the Docker image is built. We can then send the new image tag over to the deployment repo, which updates the image in the Kubernetes deployment file, commits the change, and pushes. ArgoCD picks up the new change and syncs automagically.&lt;br&gt;
Enough talk; here is the Actions code.&lt;/p&gt;
&lt;h2&gt;
  
  
  Application
&lt;/h2&gt;

&lt;p&gt;The application is written in Go, but it can be swapped out for other stacks as well.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;app_repo/.github/workflows/ci.yml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Test and Build

on:
  push:
    branches: [ master, develop, staging, testing ]
  pull_request:
    branches: [ master ]
jobs:
  lint:
    name: Linting
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2

      - name: Lint and Vet
        uses: golangci/golangci-lint-action@v2
        with:
          version: latest
          args: --timeout=3m
  test:
    name: Test
    runs-on: ubuntu-18.04
    services:
      mysql:
        image: mysql:5.7
        env:
          MYSQL_DATABASE: app_name
          MYSQL_USER: cicd-user
          MYSQL_PASSWORD: password
          MYSQL_ROOT_PASSWORD: password
        ports:
          - 3306:3306
    steps:
      - uses: actions/checkout@v2

      - name: Set up Go
        uses: actions/setup-go@v2
        with:
          go-version: 1.17

      - name: Test
        run: go test ./...
        env:
          DB_HOST: 127.0.0.1
          DB_PORT: 3306
          ENVIRONMENT: ci-cd
          DB_USERNAME: cicd-user
          DB_PASSWORD: password
          DATABASE: app_name
          TOKEN_STRING: "hello hello world"

  build:
    name: Build Image
    runs-on: ubuntu-latest
    needs:
      - test
      - lint
    steps:
      - name: Extract branch name
        shell: bash
        run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
        id: extract_branch

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1

      - name: Login to Dockerhub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Build and Push
        uses: docker/build-push-action@v2
        with:
          file: .docker/Dockerfile
          push: true
          tags: &amp;lt;&amp;lt;docker_repo&amp;gt;&amp;gt;/&amp;lt;&amp;lt;image_name&amp;gt;&amp;gt;:${{ steps.extract_branch.outputs.branch }}-${{ github.run_id }}-${{ github.run_number }}


  deploy:
    name: Trigger New Deployment
    runs-on: ubuntu-latest
    needs:
      - build
    steps:
      - name: Extract branch name
        shell: bash
        run: echo "##[set-output name=branch;]$(echo ${GITHUB_REF#refs/heads/})"
        id: extract_branch

      - name: Deployment
        uses: mvasigh/dispatch-action@main
        with:
          token: ${{ secrets.G_ACCESS_TOKEN }}
          repo: &amp;lt;&amp;lt;deploy_repo&amp;gt;&amp;gt;
          owner: &amp;lt;&amp;lt;owner&amp;gt;&amp;gt;
          event_type: update_image
          message: |
            {
              "branch": "${{ steps.extract_branch.outputs.branch }}",
              "tag": "${{ steps.extract_branch.outputs.branch }}-${{ github.run_id }}-${{ github.run_number }}"
            }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
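&lt;p&gt;The image tag used in the build and deploy jobs above follows a &lt;code&gt;branch-run_id-run_number&lt;/code&gt; scheme. Here is a minimal sketch of how it is assembled; the values are made-up stand-ins for what GitHub Actions injects at run time:&lt;/p&gt;

```shell
# Hypothetical stand-in values for the variables GitHub Actions sets at run time
GITHUB_REF="refs/heads/staging"
GITHUB_RUN_ID="2345678901"
GITHUB_RUN_NUMBER="42"

# Same parameter expansion as the "Extract branch name" step above
branch="${GITHUB_REF#refs/heads/}"

# Tag format used by both the "Build and Push" and "Deployment" steps
tag="${branch}-${GITHUB_RUN_ID}-${GITHUB_RUN_NUMBER}"
echo "${tag}"   # staging-2345678901-42
```

&lt;p&gt;The same string is produced in both the &lt;code&gt;build&lt;/code&gt; and &lt;code&gt;deploy&lt;/code&gt; jobs, which is what lets the deployment repo reference the exact image that was just pushed.&lt;/p&gt;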



&lt;h2&gt;
  
  
  Deployment.
&lt;/h2&gt;

&lt;p&gt;Here, I keep a folder for each environment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;production&lt;/li&gt;
&lt;li&gt;staging&lt;/li&gt;
&lt;li&gt;testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There is a stubs folder called &lt;code&gt;stubs&lt;/code&gt; with this sample file for each environment:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;./stubs/environment.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: &amp;lt;&amp;lt;name&amp;gt;&amp;gt;
  namespace: &amp;lt;&amp;lt;environment&amp;gt;&amp;gt;
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: backend
      app: &amp;lt;&amp;lt;app_name&amp;gt;&amp;gt;
  template:
    metadata:
      labels:
        tier: backend
        app: &amp;lt;&amp;lt;app_name&amp;gt;&amp;gt;
    spec:
      containers:
        - name: &amp;lt;&amp;lt;app_name&amp;gt;&amp;gt;
          image: &amp;lt;&amp;lt;docker_repo&amp;gt;&amp;gt;/&amp;lt;&amp;lt;image&amp;gt;&amp;gt;:#TAG#
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: &amp;lt;&amp;lt;app_config&amp;gt;&amp;gt;
            - secretRef:
                name: &amp;lt;&amp;lt;app_secret&amp;gt;&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;PS: all data between &lt;code&gt;&amp;lt;&amp;lt;&lt;/code&gt; and &lt;code&gt;&amp;gt;&amp;gt;&lt;/code&gt; should be swapped with their actual values. Only &lt;code&gt;#TAG#&lt;/code&gt; should be left as a placeholder.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;deploy_repo/.github/workflows/deploy.yml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy
on:
  repository_dispatch:
    types: [update_image]

jobs:
  build:
    name: Run API
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v3
        with:
          ref: ${{github.event.client_payload.message.branch}}

      - name: Update Stub With New Tag
        shell: bash
        run: |
          cp stubs/${{ github.event.client_payload.message.branch }}.yaml ${{ github.event.client_payload.message.branch }}/deployment.yaml.stub
          sed -i "s/#TAG#/${{ github.event.client_payload.message.tag }}/g" ${{ github.event.client_payload.message.branch }}/deployment.yaml.stub
          cp ${{github.event.client_payload.message.branch}}/deployment.yaml.stub ${{github.event.client_payload.message.branch}}/deployment.yaml

      - name: Commit files
        run: |
          git config --local user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git config --local user.name "github-actions[bot]"
          git commit -m "Update Tag ${{ github.event.client_payload.message.tag }}" -a

      - name: Push changes
        uses: ad-m/github-push-action@master
        with:
          github_token: ${{ secrets.G_ACCESS_TOKEN }}
          branch: ${{ github.event.client_payload.message.branch }}        

      - name: All Done
        run: |
          echo "All is Well"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is a breakdown of the deploy.yml action:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Checks out the repo using the provided branch.&lt;/li&gt;
&lt;li&gt;Copies the environment stub into a &lt;code&gt;deployment.yaml.stub&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;Updates the &lt;code&gt;#TAG#&lt;/code&gt; placeholder with the new tag from the &lt;code&gt;repository_dispatch&lt;/code&gt; payload.&lt;/li&gt;
&lt;li&gt;Commits the files using the bot email, with the latest tag as the commit message. This lets us match each commit to its sync in ArgoCD.&lt;/li&gt;
&lt;li&gt;Pushes the changes using &lt;code&gt;ad-m/github-push-action&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Congratulatory message from me :)&lt;/li&gt;
&lt;/ul&gt;
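&lt;p&gt;The copy-and-substitute steps can be reproduced locally. A minimal sketch, using a hypothetical &lt;code&gt;staging&lt;/code&gt; stub and a made-up tag:&lt;/p&gt;

```shell
# Create a tiny stand-in for the stubs folder; the stub content here is a
# made-up one-liner, not the full Deployment manifest from the post.
mkdir -p stubs staging
printf 'image: myrepo/myapp:#TAG#\n' > stubs/staging.yaml

branch="staging"
tag="staging-2345678901-42"

# Same copy / sed / copy sequence as the "Update Stub With New Tag" step
cp "stubs/${branch}.yaml" "${branch}/deployment.yaml.stub"
sed -i "s/#TAG#/${tag}/g" "${branch}/deployment.yaml.stub"
cp "${branch}/deployment.yaml.stub" "${branch}/deployment.yaml"

cat "${branch}/deployment.yaml"   # image: myrepo/myapp:staging-2345678901-42
```

&lt;p&gt;Keeping the substituted copy in a &lt;code&gt;.stub&lt;/code&gt; file before overwriting &lt;code&gt;deployment.yaml&lt;/code&gt; means the original stub under &lt;code&gt;stubs/&lt;/code&gt; is never modified.&lt;/p&gt;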

&lt;p&gt;If there is an easier way to achieve this, please share it in the comments. I would love to learn how to make my life easier.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>argocd</category>
      <category>cicd</category>
      <category>go</category>
    </item>
    <item>
      <title>Creating an NFS Server with Vagrant and Archlinux for Kubernetes Cluster</title>
      <dc:creator>Alugbin Abiodun Olutola</dc:creator>
      <pubDate>Mon, 06 Apr 2020 02:29:18 +0000</pubDate>
      <link>https://dev.to/lordrahl90/creating-an-nfs-server-with-vagrant-1oke</link>
      <guid>https://dev.to/lordrahl90/creating-an-nfs-server-with-vagrant-1oke</guid>
      <description>&lt;p&gt;Disclaimer: This post is originally meant to house my process for creating an NFS server (it was a huge toil). As such a lot of things might be omitted or assumed. If you find it useful and you need more insight or questions, please feel free to drop a comment and ask your question.&lt;/p&gt;

&lt;p&gt;Also, there are several cloud storage providers online; almost all, if not all, major cloud providers offer the service. However, in the spirit of curiosity and finding out how it's done, I decided to create mine.&lt;br&gt;
(Also, it's a pet project that might never see the light of day, so why waste $?)&lt;/p&gt;

&lt;p&gt;That said, let's move on.&lt;/p&gt;

&lt;p&gt;While learning Kubernetes recently from &lt;a href="https://app.pluralsight.com/library/courses/kubernetes-developers-core-concepts/table-of-contents"&gt;here&lt;/a&gt;, I took on the task of migrating one of my Laravel pet projects to Kubernetes.&lt;/p&gt;

&lt;p&gt;Everything was fine until I got to the point where I needed to persist some of my essential files (logs, database, images, etc.).&lt;/p&gt;

&lt;p&gt;First, I provisioned my k8s cluster using Vagrant with the knowledge I learnt from &lt;a href="https://www.youtube.com/watch?v=wPdIBeWJJsg&amp;amp;list=PL34sAs7_26wODP4j6owN-36Vg-KbACgkT&amp;amp;index=2"&gt;here&lt;/a&gt;, then added an extra machine to the Vagrantfile to provision an Arch Linux box as well.&lt;/p&gt;

&lt;p&gt;Here is the NFS server box:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#filename=Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |cf|
  # NFS Server
  cf.vm.define "nfs-server" do |nfs|
    nfs.vm.box = "archlinux/archlinux"
    nfs.vm.hostname = "nfs-server.example.com"
    nfs.vm.network "private_network", ip: "172.42.42.99"
    nfs.vm.provider "virtualbox" do |n|
      n.name = "nfs-server"
      n.memory = 1024
      n.cpus = 1
    end
    nfs.vm.provision "shell",path: "bootstrap_nfs.sh"
  end
end
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This box defines a machine named &lt;code&gt;nfs-server&lt;/code&gt; with the following properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OS: the official &lt;code&gt;archlinux&lt;/code&gt; image from the Vagrant box collection.&lt;/li&gt;
&lt;li&gt;Hostname: "nfs-server.example.com"; the machine can also be reached by this name within the network.&lt;/li&gt;
&lt;li&gt;IP address: the entire cluster runs on a private address space, and since I would like this machine to be available to the cluster alone, I add it to the same address space.&lt;/li&gt;
&lt;li&gt;Provider: I use VirtualBox; various options exist, as defined in the &lt;a href="https://www.vagrantup.com/docs/vagrantfile/machine_settings.html#config-vm-provider"&gt;official Vagrant documentation&lt;/a&gt;. The available providers are listed &lt;a href="https://www.vagrantup.com/docs/providers/"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Name: uniquely identifies the box within the list of machines (I have a lot of them for my k8s cluster).&lt;/li&gt;
&lt;li&gt;Memory: the memory assigned to this machine; 1 GB in this case.&lt;/li&gt;
&lt;li&gt;cpus: just 1, though it can be increased for more performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;bootstrap_nfs.sh&lt;/code&gt; is the script used to provision the machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#filename=bootstrap_nfs.sh

# Update hosts file
echo "[TASK 1] Update /etc/hosts file"
cat &amp;gt;&amp;gt;/etc/hosts&amp;lt;&amp;lt;EOF
172.42.42.99 nfs-server.example.com nfs-server
172.42.42.100 lmaster.example.com lmaster
172.42.42.101 lworker1.example.com lworker1
172.42.42.102 lworker2.example.com lworker2
EOF

echo "[TASK 2] Download and install NFS server"
yes| sudo pacman -S nfs-utils

echo "[TASK 3] Create a kubedata directory"
mkdir -p /srv/nfs/kubedata
mkdir -p /srv/nfs/kubedata/db
mkdir -p /srv/nfs/kubedata/storage
mkdir -p /srv/nfs/kubedata/logs

echo "[TASK 4] Update the shared folder access"
chmod -R 777 /srv/nfs/kubedata

echo "[TASK 5] Make the kubedata directory available on the network"
cat &amp;gt;&amp;gt;/etc/exports&amp;lt;&amp;lt;EOF
/srv/nfs/kubedata    *(rw,sync,no_subtree_check,no_root_squash)
EOF

echo "[TASK 6] Export the updates"
sudo exportfs -rav

echo "[TASK 7] Enable NFS Server"
sudo systemctl enable nfs-server

echo "[TASK 8] Start NFS Server"
sudo systemctl start nfs-server

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;em&gt;I added the task numbers with descriptions to show what is being done and to serve as documentation.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The hosts file needs to be updated so that the NFS server can reach the cluster nodes. Each node will also have the corresponding entries; it's just a way for the machines to find each other within the private network.&lt;/p&gt;

&lt;p&gt;On to my PersistentVolume definition on the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#filename=storage_volume.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: app
  name: app-storage-pv
  labels:
    tier: storage
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  storageClassName: app-storage-pv
  mountOptions:
    - nfsvers=4.1
  nfs:
    path: "/srv/nfs/kubedata/storage"
    server: nfs-server.example.com
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-storage-pvc
  namespace: app
  labels:
    tier: storage
    app: app-storage
spec:
  storageClassName: app-storage-pv
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;I created a PV that maps to the exposed folder &lt;code&gt;/srv/nfs/kubedata/storage&lt;/code&gt;. Using the FQDN within the network, I can use &lt;code&gt;nfs-server.example.com&lt;/code&gt; as my server name (please ping it from one of your nodes to ensure it is reachable).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#filename=logs_volume.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: app
  name: app-logs-pv
  labels:
    tier: storage
    app: app-logs
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  storageClassName: app-logs-pv
  mountOptions:
    - nfsvers=4.1
  nfs:
    path: "/srv/nfs/kubedata/logs"
    server: nfs-server.example.com
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-logs-pvc
  namespace: app
  labels:
    tier: storage
    app: app-logs
spec:
  storageClassName: app-logs-pv
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This creates a PersistentVolume mapping to path &lt;code&gt;/srv/nfs/kubedata/logs&lt;/code&gt; on our NFS server using the same &lt;code&gt;server&lt;/code&gt; details &lt;code&gt;nfs-server.example.com&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now comes our deployment file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#filename=deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-backend
  namespace: app
  labels:
    app: app-backend
    tier: backend

spec:
  selector:
    matchLabels:
      app: app-backend
  template:
    metadata:
      labels:
        app: app-backend
        tier: backend

    spec:
      containers:
        - name: app-backend
          image: image_name
          env:
            - name: DB_PASSWORD
              value: "password"
          ports:
            - containerPort: 80
          livenessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 15
          readinessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 10
          volumeMounts:
            - mountPath: /app/storage/app
              name: app-storage
            - mountPath: /app/storage/logs
              name: app-logs

      volumes:
        - name: app-storage
          persistentVolumeClaim:
            claimName: app-storage-pvc
        - name: app-logs
          persistentVolumeClaim:
            claimName: app-logs-pvc
---

apiVersion: v1
kind: Service
metadata:
  name: app-backend
  namespace: app
  labels:
    tier: backend
spec:
  selector:
    app: app-backend
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In Laravel, the storage folder is mostly used to house file uploads and other generated files (see the &lt;a href="https://laravel.com/docs/7.x/filesystem"&gt;official docs&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Given that pods are &lt;code&gt;mortal&lt;/code&gt;, you don't want to lose customers' profile pictures, image uploads, or any other essential files when your pod(s) go down (they do that often), so we mount &lt;code&gt;/srv/nfs/kubedata/storage&lt;/code&gt; on the NFS server to Laravel's &lt;code&gt;storage/app&lt;/code&gt; on our pod.&lt;/p&gt;

&lt;p&gt;Application logs are also super important. Laravel uses Monolog, and you don't want your log files to go down with each pod, so we also mount &lt;code&gt;/srv/nfs/kubedata/logs&lt;/code&gt; to Laravel's &lt;code&gt;storage/logs&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Once this mount is successful, you can be sure that even if your cluster goes down, your files will be safe as long as your NFS server is still up and running, and you can bring the cluster back to life without changing a thing.&lt;/p&gt;

&lt;p&gt;PS: There might be a better way to mount the entire storage folder, but I didn't want to include files that aren't needed, e.g. the &lt;code&gt;view cache&lt;/code&gt; files. They might be useful for some other people though, and it might then make sense to add them to the mount.&lt;/p&gt;

&lt;p&gt;Again, there are various cloud provider options, and Kubernetes supports most of them; this is just to get my pet project up and running in Kubernetes and to add another &lt;em&gt;feather to my cap&lt;/em&gt; during this lockdown.&lt;/p&gt;

&lt;p&gt;Go and explore!!!&lt;/p&gt;

&lt;p&gt;Thank you!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>vagrant</category>
      <category>archlinux</category>
      <category>sre</category>
    </item>
  </channel>
</rss>
