<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Zafri Zulkipli</title>
    <description>The latest articles on DEV Community by Zafri Zulkipli (@zaffja).</description>
    <link>https://dev.to/zaffja</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F159122%2Fbaf00e36-ac27-4bed-87f2-71b073d8279c.jpeg</url>
      <title>DEV Community: Zafri Zulkipli</title>
      <link>https://dev.to/zaffja</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/zaffja"/>
    <language>en</language>
    <item>
      <title>Parallel Behat</title>
      <dc:creator>Zafri Zulkipli</dc:creator>
      <pubDate>Mon, 17 Jan 2022 03:41:08 +0000</pubDate>
      <link>https://dev.to/zaffja/parallel-behat-28e7</link>
      <guid>https://dev.to/zaffja/parallel-behat-28e7</guid>
      <description>&lt;p&gt;TLDR; Our company dockerized pretty much everything, as such to tackle long waiting time of the CI pipeline, we split the big list of &lt;a href="https://docs.behat.org/en/latest/"&gt;behat&lt;/a&gt; files into chunks and process it using docker in parallel using python multiprocessing.&lt;/p&gt;

&lt;h2&gt;Background&lt;/h2&gt;

&lt;p&gt;In my workplace, we deploy on a daily basis. Each time we want to deploy, we have to wait at least 20 mins for the pipeline to finish. On average it takes around 25~30 mins from the time we push the code to version control, wait for all the checks to run, then deploy to production.&lt;/p&gt;

&lt;p&gt;The main reason for this delay is that we have a lot of behat &lt;code&gt;.feature&lt;/code&gt; files. We place a heavy emphasis on integration testing, so the test files keep growing.&lt;/p&gt;

&lt;p&gt;All this waiting is frustrating whenever we want something in production quickly. It's even more frustrating when we're working with 3rd parties and they have to wait for us to apply changes.&lt;/p&gt;

&lt;h2&gt;The search for cutting down time&lt;/h2&gt;

&lt;p&gt;We searched for various ways to reduce the time; some of the suggestions were:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Remove deprecated features and their &lt;code&gt;.feature&lt;/code&gt; files&lt;/li&gt;
&lt;li&gt;Refactor the &lt;code&gt;.feature&lt;/code&gt; file to remove redundancy&lt;/li&gt;
&lt;li&gt;Search for 3rd party packages that deal with parallelism&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Point 1: we've been doing this regularly, but it didn't save as much time as we expected.&lt;/p&gt;

&lt;p&gt;Point 2: no one really wants to take the time to find redundant feature files and refactor them.&lt;/p&gt;

&lt;p&gt;Point 3: the 3rd party packages were unable to satisfy all our requirements.&lt;/p&gt;

&lt;p&gt;All things considered, we decided to create our own parallel behat runner.&lt;/p&gt;

&lt;h2&gt;The journey to achieve parallelism&lt;/h2&gt;

&lt;h3&gt;Step 1: How do we even run processes in parallel?&lt;/h3&gt;

&lt;p&gt;TLDR; I don't fully understand the intricate logic behind parallel processing myself. What I do know is that I've been using python's &lt;code&gt;multiprocessing&lt;/code&gt; package and it has worked wonders!&lt;/p&gt;
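&lt;p&gt;To give a flavour of how little code this takes, here's a minimal, hypothetical sketch (not our actual runner; &lt;code&gt;run_suite&lt;/code&gt; is a stand-in for running behat) of executing two dummy suites in parallel with &lt;code&gt;multiprocessing&lt;/code&gt;:&lt;/p&gt;

```python
import multiprocessing

def run_suite(name, results):
    # Stand-in for "run behat against one chunk of folders";
    # each call executes in its own OS process.
    results.put(f"ran {name}")

def run_in_parallel(names):
    # One Process per suite; start them all, then wait for all
    # of them to finish, just like joining threads.
    results = multiprocessing.Queue()
    processes = [
        multiprocessing.Process(target=run_suite, args=(name, results))
        for name in names
    ]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    return sorted(results.get() for _ in names)

if __name__ == "__main__":
    print(run_in_parallel(["A", "B"]))  # -> ['ran A', 'ran B']
```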

&lt;h3&gt;Step 2: Add a script that takes specific input to execute&lt;/h3&gt;

&lt;p&gt;Python &lt;code&gt;multiprocessing&lt;/code&gt; needs a process to run. So I created a simple bash script that takes in a &lt;code&gt;.feature&lt;/code&gt; folder path and executes all the files within it. It's basically a self-contained docker process that runs the tests and dies.&lt;/p&gt;

&lt;h3&gt;Step 3: Chunk it!&lt;/h3&gt;

&lt;p&gt;This part is easy: let's say we have &lt;code&gt;.feature&lt;/code&gt; folders A, B, C, D, E, F. Splitting them into two chunks gives us &lt;code&gt;[[A,B,C], [D,E,F]]&lt;/code&gt;.&lt;/p&gt;
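&lt;p&gt;Our runner calls a &lt;code&gt;get_chunks&lt;/code&gt; helper for this; a hypothetical implementation (the real one also reads our folder list from disk) could be as small as:&lt;/p&gt;

```python
def get_chunks(folders, chunk_count):
    """Split a list of .feature folder names into chunk_count
    roughly equal chunks, one chunk per sub-process."""
    size = -(-len(folders) // chunk_count)  # ceiling division
    return [folders[i:i + size] for i in range(0, len(folders), size)]

# e.g. six folders split across two sub-processes:
print(get_chunks(["A", "B", "C", "D", "E", "F"], 2))
# -> [['A', 'B', 'C'], ['D', 'E', 'F']]
```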

&lt;h3&gt;Step 4: Putting it all together&lt;/h3&gt;

&lt;p&gt;Finally, we use python &lt;code&gt;multiprocessing&lt;/code&gt; to run the &lt;code&gt;script&lt;/code&gt;, passing each &lt;code&gt;chunk&lt;/code&gt; to it. By splitting into 2 sub-processes, we managed to get the time down from 20 mins to 14 mins. We're still testing how far we can increase the number of sub-processes. The hypothesis is: more sub-processes, less execution time, provided we have enough CPU cores.&lt;/p&gt;
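&lt;p&gt;Since the speed-up is bounded by the number of cores, one option we're considering (a hypothetical helper, not something we run in production) is to derive the sub-process count from the machine itself:&lt;/p&gt;

```python
import os

def pick_subprocess_count(requested, reserve=1):
    """Cap the requested sub-process count at the machine's CPU
    core count, keeping `reserve` cores free for docker and the OS."""
    cores = os.cpu_count() or 1
    return max(1, min(requested, cores - reserve))

print(pick_subprocess_count(8))  # capped by this machine's core count
```

&lt;p&gt;Reserving a core for the docker daemon is just my guess at a sensible default; we haven't benchmarked it yet.&lt;/p&gt;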

&lt;h2&gt;The code&lt;/h2&gt;

&lt;p&gt;For company privacy reasons, I won't be sharing the exact code. This is a simplified, obfuscated version; some of it may not make sense to you, or edge cases may look unhandled, but they are handled in our actual version.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;sub_process_test.py&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import multiprocessing
import subprocess

def main():
    processes = []
    sub_process_count = 2
    chunks = get_chunks(sub_process_count)

    for index, chunk in enumerate(chunks):
        current_process = multiprocessing.Process(
            target=run_behat_for_certain_folders,
            args=[index + 1, ','.join(chunk)]
        )
        current_process.start()
        processes.append(current_process)

    # wait for every sub-process to finish
    for process in processes:
        process.join()

def run_behat_for_certain_folders(pid, folder_names):
    if folder_names:
        subprocess.call(f"./sub_process_runner.sh {folder_names} {pid}", shell=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;sub_process_runner.sh&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

run_sub_process() {
  docker-compose -f "$docker_compose_path" -p "$prefix" exec -T test ./vendor/bin/behat \
    --stop-on-failure \
    ./your-behat-folder/"$1"

  # status check here, obfuscated
}

# docker initialization here, obfuscated

for i in ${1//,/ }
do
    run_sub_process "$i" "$2"
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;sub_process_test.py&lt;/code&gt; will call &lt;code&gt;sub_process_runner.sh A,B,C 1&lt;/code&gt;, where the 1st arg is the comma-separated list of folders to run and the 2nd arg is the sub-process id.&lt;/p&gt;

&lt;h3&gt;Conclusion&lt;/h3&gt;

&lt;p&gt;We achieved our goal of reducing the CI pipeline waiting time. Hopefully this post brings some insight to the reader, whatever that insight may be.&lt;/p&gt;

</description>
      <category>behat</category>
      <category>docker</category>
      <category>cicd</category>
      <category>python</category>
    </item>
    <item>
      <title>Automated API Testing with Jenkins and Postman/Newman</title>
      <dc:creator>Zafri Zulkipli</dc:creator>
      <pubDate>Sun, 19 May 2019 07:43:56 +0000</pubDate>
      <link>https://dev.to/zaffja/automated-api-testing-with-jenkins-and-postman-newman-29go</link>
      <guid>https://dev.to/zaffja/automated-api-testing-with-jenkins-and-postman-newman-29go</guid>
      <description>&lt;p&gt;Let me start off by saying that this is purely experimental, I am not using this strategy in my workplace nor any production environment. This is me, spending my lazy Sunday afternoon playing around with API testing automation.&lt;/p&gt;

&lt;p&gt;I got the idea to play around with Jenkins/Postman after my workplace started looking for an API testing automation tool that would run periodically and notify us immediately when something fails.&lt;/p&gt;

&lt;p&gt;I knew &lt;a href="https://www.getpostman.com/" rel="noopener noreferrer"&gt;Postman&lt;/a&gt; is perfect for running multiple API tests with its &lt;a href="https://learning.getpostman.com/docs/postman/collection_runs/starting_a_collection_run/" rel="noopener noreferrer"&gt;Runner&lt;/a&gt;, and paired with &lt;a href="https://www.npmjs.com/package/newman" rel="noopener noreferrer"&gt;Newman&lt;/a&gt; I can execute collections headlessly.&lt;/p&gt;

&lt;p&gt;I stumbled upon Jenkins while googling how to automate running Postman collections. So now I had the means both to test and to automate. Below is my step-by-step setup for Postman + Newman + Jenkins.&lt;/p&gt;

&lt;p&gt;My folder structure&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;automated-test
├── docker-compose.yml
├── Dockerfile
├── postman_collections
├── .. # some other Jenkins related stuff that is not in the scope of this post

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I begin by extending the official Jenkins docker image to include Newman.&lt;/p&gt;

&lt;p&gt;Dockerfile&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM jenkins/jenkins:lts

USER root

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update \
    # install nodejs
    &amp;amp;&amp;amp; curl -sL https://deb.nodesource.com/setup_10.x | bash - \
    &amp;amp;&amp;amp; apt-get install -y apt-utils \
    &amp;amp;&amp;amp; apt-get install -y nodejs \
    &amp;amp;&amp;amp; apt-get install -y build-essential \
    &amp;amp;&amp;amp; apt-get install -y inotify-tools \
    # install newman
    &amp;amp;&amp;amp; npm install -g newman

USER jenkins

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then build using &lt;code&gt;docker-compose&lt;/code&gt; (since I'm too lazy to type a long list of docker commands just to start a Jenkins server)&lt;/p&gt;

&lt;p&gt;docker-compose.yml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3"

services:
  app:
    build: .
    volumes:
      - .:/var/jenkins_home
    ports:
      - "8080:8080"
      - "50000:50000"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, a simple &lt;code&gt;docker-compose up&lt;/code&gt; will suffice.&lt;/p&gt;

&lt;p&gt;So I got my Jenkins server up and running; now I need to export my postman collections. Below is a simple API test that I made. It calls an endpoint and asserts that the HTTP response status is 200. Pretty simple, right?&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fk4vz7i9stwqlzm2fy4cc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fk4vz7i9stwqlzm2fy4cc.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I exported the collection to &lt;code&gt;~/automated-test/postman_collections/collection.json&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, I need to create a new project in &lt;code&gt;Jenkins&lt;/code&gt; dashboard. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Flc4d1txlhejxe3h13aw2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Flc4d1txlhejxe3h13aw2.png"&gt;&lt;/a&gt;&lt;br&gt;
The configs worth noting are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Interval to run the test (I set it to run every 5min)&lt;br&gt;
&lt;code&gt;Build Triggers &amp;gt; Build periodically &amp;gt; */5 * * * *&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Actual test execution for the postman collections&lt;br&gt;
&lt;code&gt;Build &amp;gt; Add build step &amp;gt; Execute shell &amp;gt; newman run /var/jenkins_home/postman_collections/collection.json&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once saved, Jenkins will execute your postman collections every 5min.&lt;br&gt;
 &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fte25b4lrh0msq3bnvj56.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fte25b4lrh0msq3bnvj56.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And done! Of course, using Jenkins just to run Postman API collections is a bit overkill, but it does the job. If you have any other automation tool that you find suitable, do let me know, since my workplace is still looking for a suitable API testing automation tool.&lt;/p&gt;

</description>
      <category>postman</category>
      <category>jenkins</category>
      <category>automation</category>
    </item>
    <item>
      <title>One of those wtf moments when using docker</title>
      <dc:creator>Zafri Zulkipli</dc:creator>
      <pubDate>Sat, 11 May 2019 10:50:26 +0000</pubDate>
      <link>https://dev.to/zaffja/one-of-those-wtf-moments-when-using-docker-472</link>
      <guid>https://dev.to/zaffja/one-of-those-wtf-moments-when-using-docker-472</guid>
      <description>&lt;p&gt;I've been using docker for a year and a half now. Since then I've learned many neat and cool tricks about docker. I'm gonna share with you one particular trick that I find very interesting when using docker. Take a look at below script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm -it -v $(PWD):/app -w /app busybox rm -rf deps
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;At first glance you can tell that I'm using the &lt;code&gt;busybox&lt;/code&gt; image to do nothing more than remove my &lt;code&gt;deps&lt;/code&gt; folder. But why should we care about this? Well, it turns out that docker runs containers as root by default, meaning we're basically running &lt;code&gt;sudo rm -rf deps&lt;/code&gt; without being asked for our sudo password! That's dangerous!!!&lt;/p&gt;
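&lt;p&gt;If you want to see this for yourself (assuming docker is installed and the daemon runs as root, i.e. not in rootless mode), let a container create a file on a mounted volume and then check who owns it on the host:&lt;/p&gt;

```shell
# create a file from inside a container, on a host-mounted volume
docker run --rm -v "$(pwd)":/app -w /app busybox touch created-by-docker

# then check its owner on the host: with a default (rootful) docker
# daemon the file belongs to root, even though we never typed sudo
ls -l created-by-docker
```

&lt;p&gt;This needs a running docker daemon, so try it on a scratch directory, not inside a project you care about.&lt;/p&gt;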

&lt;p&gt;Although it is dangerous, it is quite useful as well. We just have to be careful how we use it, that's all. Tbh, I've used this trick quite a lot in my development. The example above is actually part of my &lt;code&gt;Makefile&lt;/code&gt; setup, as depicted below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;setup:
        docker run --rm -it -v $(PWD):/app -w /app busybox rm -rf deps
        docker run --rm -it -v $(PWD):/app -w /app elixir:1.6 mix local.hex --force &amp;amp;&amp;amp; mix deps.get
        cd assets &amp;amp;&amp;amp; $(MAKE) setup
        docker-compose build
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As you can see, I'm developing an elixir application, and &lt;code&gt;make setup&lt;/code&gt; is something you want to run quite a few times (if not just once). This ensures that if somebody were to clone my project, running &lt;code&gt;make setup&lt;/code&gt; would be a breeze, without any permission issues.&lt;/p&gt;

&lt;p&gt;Anyways, what do you think of this trick? Is it good? Bad? Share your thoughts with me, and if possible, how I can improve my setup.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>permissions</category>
    </item>
  </channel>
</rss>
