<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Zuri Hunter</title>
    <description>The latest articles on DEV Community by Zuri Hunter (@zurihunter).</description>
    <link>https://dev.to/zurihunter</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F9164%2F3b62983a-f49e-42e9-92f2-96886336ad80.jpg</url>
      <title>DEV Community: Zuri Hunter</title>
      <link>https://dev.to/zurihunter</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/zurihunter"/>
    <language>en</language>
    <item>
      <title>How I Built my first Twitch Bot with Natural Language Processing</title>
      <dc:creator>Zuri Hunter</dc:creator>
      <pubDate>Fri, 08 Oct 2021 20:27:17 +0000</pubDate>
      <link>https://dev.to/zurihunter/how-i-built-my-first-twitch-bot-with-nlu-fid</link>
      <guid>https://dev.to/zurihunter/how-i-built-my-first-twitch-bot-with-nlu-fid</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Over the past couple of months I have been experimenting with different areas of Machine Learning and Artificial Intelligence. Recently I tried building a &lt;a href="https://en.wikipedia.org/wiki/Convolutional_neural_network" rel="noopener noreferrer"&gt;Convolutional Neural Network&lt;/a&gt; that could identify hair types from images. To learn more about that experiment you can check out my video &lt;a href="https://www.youtube.com/watch?v=LtqaumBsuPM" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Despite not having a successful outcome with Convolutional Neural Networks, I was still inspired to learn more about AI/ML. This led me to a subset of the space called Natural Language Understanding, specifically "Question and Answer." For my new project I decided to build a Twitch bot that answers questions about three Black women who are trailblazers in the AI/ML industry: &lt;strong&gt;Timnit Gebru&lt;/strong&gt;, &lt;strong&gt;Rediet Abebe&lt;/strong&gt;, and &lt;strong&gt;Joy Buolamwini&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Natural Language Understanding?
&lt;/h2&gt;

&lt;p&gt;In the text side of Machine Learning and Artificial Intelligence there are two subtopics: Natural Language Processing and Natural Language Understanding. &lt;strong&gt;Natural Language Processing&lt;/strong&gt; teaches the machine to extract, categorize and break down sentence structures. Take the sentence "The quick brown fox jumps over the lazy dog." Using a method called “&lt;strong&gt;&lt;em&gt;named entity recognition&lt;/em&gt;&lt;/strong&gt;” the machine would identify “fox” and “dog” as “&lt;strong&gt;&lt;em&gt;things&lt;/em&gt;&lt;/strong&gt;”, and using “&lt;strong&gt;&lt;em&gt;parts-of-speech tagging&lt;/em&gt;&lt;/strong&gt;” it would label “brown”, “quick” and “lazy” as &lt;strong&gt;adjectives&lt;/strong&gt; and “jumps” as a &lt;strong&gt;verb&lt;/strong&gt;. On their own, these two NLP methods help the machine identify components of natural language, but they do not give it the ability to understand the sentence.&lt;/p&gt;
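
&lt;p&gt;To make the idea concrete, here is a toy sketch of parts-of-speech tagging in JavaScript. The hand-written lexicon below is purely hypothetical; real NLP libraries derive tags from trained models rather than a lookup table.&lt;/p&gt;

```javascript
// Toy parts-of-speech tagger for the example sentence.
// The lexicon is a hypothetical hand-written lookup table,
// only meant to illustrate what a real tagger produces.
const lexicon = {
  the: 'determiner', quick: 'adjective', brown: 'adjective',
  fox: 'noun', jumps: 'verb', over: 'preposition',
  lazy: 'adjective', dog: 'noun'
};

function tagSentence(sentence) {
  return sentence
    .toLowerCase()
    .replace(/[^a-z\s]/g, '')       // strip punctuation
    .split(/\s+/)
    .map(word => ({ word, tag: lexicon[word] || 'unknown' }));
}

const tags = tagSentence('The quick brown fox jumps over the lazy dog.');
// tags[1] is { word: 'quick', tag: 'adjective' }
```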

&lt;p&gt;This is where the subtopic &lt;strong&gt;Natural Language Understanding&lt;/strong&gt; comes in. NLU trains the machine to have reading comprehension over the natural language it processes. A method in NLU called “&lt;strong&gt;Question and Answer&lt;/strong&gt;” teaches the machine how to respond to questions based on the data it receives. In the context of my example sentence, if I asked the machine "What color is the fox?", it would answer back "The fox is brown." Q&amp;amp;A models can answer "definition" questions, "how and why" questions and semantically constrained questions. A Q&amp;amp;A model can also be trained on everything, which is called “&lt;strong&gt;open domain&lt;/strong&gt;”, or on a particular subject, which is called “&lt;strong&gt;closed domain&lt;/strong&gt;”. The model I will be building is closed domain, since it will only answer "definition" and "how and why" questions about the three women in the AI/ML industry.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the Bot
&lt;/h2&gt;

&lt;p&gt;To build my Twitch bot there are three major components that I will have to piece together: connecting to my own personal Twitch channel, setting up a cache database for the data and building the "Question and Answer" model. Twitch has a very flexible API that allows me to read and post messages to my channel. The library I chose to interface with their API is &lt;strong&gt;TMI.js&lt;/strong&gt;, a Twitch Message Interface library. My data is several text files that are less than 100KB each. Each time my model runs, it will retrieve the data and conduct its analysis. To speed up this processing I am going to create an in-memory cache database. A cache database speeds up processing by decreasing the time it takes to retrieve data. The library I chose was &lt;strong&gt;memory-cache.js&lt;/strong&gt;, a simple in-memory cache for Node.js.&lt;/p&gt;
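
&lt;p&gt;The idea behind the cache can be sketched with a plain JavaScript Map. This is not the memory-cache.js API itself (which also supports per-entry expiration), just a minimal illustration of trading memory for lookup speed.&lt;/p&gt;

```javascript
// Minimal in-memory cache sketch using a Map.
// This only illustrates the put/get idea; the post itself
// uses the memory-cache package.
const store = new Map();

function put(key, value) {
  store.set(key, value);
}

function get(key) {
  // undefined on a miss, so the caller can fall back to disk
  return store.get(key);
}

// Store file contents once, then read them from memory on every model run.
put('timnit-gebru', 'Timnit Gebru is a computer scientist...');
const cached = get('timnit-gebru'); // no file-system read needed
```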

&lt;p&gt;Finally, I will be using the popular open-source machine-learning library &lt;strong&gt;TensorFlow&lt;/strong&gt;. TensorFlow was originally written for Python, but it has grown to other programming languages. &lt;strong&gt;TensorFlow.js&lt;/strong&gt; brings it to JavaScript and &lt;strong&gt;Node.js&lt;/strong&gt;, and it ships with pre-trained models. There are two backends for TensorFlow.js: CPU and GPU, meaning the model will run on the CPU or GPU of my machine. Running models on the GPU is typically faster, but for my model I am going to use the CPU. The libraries that I will use to support my model are &lt;strong&gt;@tensorflow-models/qna&lt;/strong&gt;, &lt;strong&gt;@tensorflow/tfjs&lt;/strong&gt; and &lt;strong&gt;@tensorflow/tfjs-node&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup Twitch
&lt;/h2&gt;

&lt;p&gt;The first step is to initialize my Twitch client. &lt;strong&gt;TMI.js&lt;/strong&gt; requires me to provide my username, password and channel in order to view and respond to messages in my Twitch channel.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// src/index.js
require("dotenv").config()
const tmi = require("tmi.js")

const client = new tmi.Client( {
   connection: {
     secure: true,
     reconnect: true
  },
  identity: {
    username: process.env.TWITCH_USERNAME,
    password: process.env.TWITCH_AUTH
  },
  channels: [ process.env.TWITCH_CHANNEL ]
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since those three items are sensitive information, I will place them in an .env file. "&lt;strong&gt;dotenv&lt;/strong&gt;" is a library that loads environment variables from that file into the process.env object. It keeps credentials out of source control and is useful for configuring servers for different environments. Another configuration that I can set on my Twitch client is the connection type. To follow best practices I am going to make my connection secure (over WSS, the secure WebSocket protocol). I am also going to set my Twitch client to reconnect automatically if there is a drop in connection.&lt;/p&gt;
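
&lt;p&gt;For reference, the .env file would look something like this. All of the values below are placeholders, not real credentials; Twitch chat passwords take the form of an OAuth token.&lt;/p&gt;

```shell
# .env -- placeholder values, never commit this file
TWITCH_USERNAME=my_bot_account
TWITCH_AUTH=oauth:replace_with_your_token
TWITCH_CHANNEL=my_channel
```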

&lt;h2&gt;
  
  
  Build Out Model
&lt;/h2&gt;

&lt;p&gt;After initializing my Twitch client, the next thing to do is build out my model. Earlier I mentioned that TensorFlow.js offers pre-trained models. A &lt;strong&gt;pre-trained model&lt;/strong&gt; is a model developed by another party that serves as a foundation to expand on. For example, say I want to build a model that can detect cat breeds in pictures. I have the option to build the model from scratch, which means training it to know the difference between cats and other objects, a task that requires millions of images of all types, including cats. If I use a pre-trained model instead, it will already know the difference between cats and other objects. This lets me build models faster to solve problems. For my use case, I am going to use a pre-trained model for "Question and Answer."&lt;/p&gt;

&lt;p&gt;Before I can use the model, I need to feed it my own data. The data is stored in text files. The setback with that approach is that every time the model runs, the server has to read from the file system.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// src/model/nlu.js
const fs = require('fs')
const path = require('path')
const cache = require('memory-cache')

const memCache = new cache.Cache();

(async function () {
   [
    path.join(__dirname, '..', 'data', 'joy-buolamwini.txt'),
    path.join(__dirname, '..', 'data', 'rediet-abebe.txt'),
    path.join(__dirname, '..', 'data', 'timnit-gebru.txt')
   ].forEach(function (file) {
    fs.readFile(file, 'utf-8', function (err, data) {
        if (err) {
            console.error('Error', err)
        } else {
            // use the file name without its extension as the cache key,
            // e.g. 'joy-buolamwini'
            const key = path.basename(file, '.txt')
            memCache.put(key, data)
        }
    });
   });
})()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To bypass this I can store the data in memory and retrieve it from the memory cache. I will do this by creating a self-invoking function that reads the text files and places their contents in the memory cache, keyed by file name.&lt;br&gt;
 &lt;br&gt;
Now that I have my data in place, I can piece together the model. My model is going to answer questions on three topics. Instead of tailoring the code to each topic, I am going to make a one-size-fits-all function that takes the name of the topic and the question asked.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// src/model/nlu.js
require('@tensorflow/tfjs-node');
const tf = require('@tensorflow/tfjs');
const QNA = require('@tensorflow-models/qna');

async function qnaModel (name, question) {
    try {
        await tf.ready();

        const model = await QNA.load();
        const text = memCache.get(name);
        const answers = await model.findAnswers(question, text);

        if (answers.length &amp;lt; 1) {
            // no confident answer; return an empty result the caller can test
            return { text: null };
        }
        return answers[0];
    } catch (e) {
        console.log('Model encountered an error', e);
        return { text: null };
    }
}

module.exports = {
   qnaModel
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I am going to name the function "&lt;strong&gt;qnaModel&lt;/strong&gt;." The first thing it does is initialize the Q&amp;amp;A pre-trained model from TensorFlow. Then, based on the name it was given, it retrieves the associated text from the memory cache. That information, along with the question, is fed into TensorFlow's "&lt;strong&gt;findAnswers&lt;/strong&gt;" function. This function takes two parameters, the question and the data, and returns an array of objects. Here is an example below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[{
  text: "August 20, 1908",
  startIndex: 135,
  endIndex: 147,
  score: 0.0941282522248868
}]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The object has four properties: &lt;strong&gt;text&lt;/strong&gt;, &lt;strong&gt;startIndex&lt;/strong&gt;, &lt;strong&gt;endIndex&lt;/strong&gt; and &lt;strong&gt;score&lt;/strong&gt;. "&lt;strong&gt;text&lt;/strong&gt;" holds the answer that the model produced. Both “&lt;strong&gt;startIndex&lt;/strong&gt;” and “&lt;strong&gt;endIndex&lt;/strong&gt;” point to where the model found the answer in the text. "&lt;strong&gt;Score&lt;/strong&gt;" measures how confident the model is in its answer; the closer the number is to 1, the more confident the model is. By default the first entry in the array has the highest score. In the code I placed a conditional statement: if the array is empty, ask the user to rephrase the question; otherwise, return the first element. Finally I export this function for my Twitch client to use.&lt;/p&gt;
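
&lt;p&gt;If I ever wanted to be stricter about low-confidence answers, the array could be ranked by score and filtered with a minimum threshold. The helper and threshold below are hypothetical, not part of the TensorFlow API:&lt;/p&gt;

```javascript
// Hypothetical helper: rank candidate answers by score and
// reject anything below a minimum confidence threshold.
function bestAnswer(answers, minScore) {
  // copy before sorting so the caller's array is untouched
  const ranked = answers.slice().sort((a, b) => b.score - a.score);
  if (ranked.length === 0) return null;
  if (ranked[0].score >= minScore) return ranked[0];
  return null; // nothing confident enough
}

const candidates = [
  { text: 'August 20, 1908', startIndex: 135, endIndex: 147, score: 0.094 },
  { text: '1908', startIndex: 146, endIndex: 147, score: 0.021 }
];
const top = bestAnswer(candidates, 0.05); // the higher-scoring candidate
```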

&lt;h2&gt;
  
  
  Connect Model to Twitch Bot
&lt;/h2&gt;

&lt;p&gt;The Twitch API has several events that I can use to interact with my Twitch channel. I will use the event that lets me listen for "messages". From there I can analyze these messages and determine whether a user is trying to interact with my bot. I am going to look for messages that contain &lt;strong&gt;!joybuolamwini&lt;/strong&gt;, &lt;strong&gt;!redietabebe&lt;/strong&gt; and &lt;strong&gt;!timnitgebru&lt;/strong&gt;. Each of those commands will be accompanied by a question.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// src/index.js
const qnaModel = require('./model/nlu').qnaModel;

client.on('message', (channel, tags, message, self) =&amp;gt; {
    if(self) return;

    if(message.includes('!redietabebe')){
        const cleanQuestion = message.replace('!redietabebe', '');
    }

    if(message.includes('!joybuolamwini')) {
        const cleanQuestion = message.replace('!joybuolamwini','');
      }

      if(message.includes('!timnitgebru')){
            const cleanQuestion = message.replace('!timnitgebru','');
      }
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally I am going to pull in the function that does the analysis for my questions. In this code snippet I drop all properties except "text", because I only want to display the answer in my Twitch channel.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// src/index.js
const response = async () =&amp;gt; {
    try {
        const answer = await qnaModel('rediet-abebe', cleanQuestion);
        if(!answer || !answer.text){
            return 'I am sorry we could not find the answer';
        }
        return answer.text;
    } catch(e) {
        return e.toString();
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that I am going to send the answer to my Twitch channel. To do this I use the "&lt;strong&gt;say&lt;/strong&gt;" method. This method takes in two parameters: &lt;strong&gt;channel name&lt;/strong&gt; and &lt;strong&gt;message&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;response().then(data =&amp;gt; {
   client.say(channel, data);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following snippet shows everything together in the src/index.js file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require('dotenv').config()
const tmi = require('tmi.js')
const fs = require('fs');
const path = require('path');
const cache = require('memory-cache');
const qnaModel = require('./model/nlp').qnaModel;

let memCache = new cache.Cache();

const client = new tmi.Client({
   connection: {
       secure: true,
       reconnect: true
   },
   identity: {
       username: process.env.TWITCH_USERNAME,
       password: process.env.TWITCH_AUTH
   },
   channels: [process.env.TWITCH_CHANNEL]
})

client.connect();

client.on('message', (channel, tags, message, self) =&amp;gt; {

   if(self) return;

   if(message.includes('!redietabebe')){
       const cleanQuestion = message.replace('!redietabebe', '');
       const response = async () =&amp;gt; {
           try {
               const answer = await qnaModel('rediet-abebe', cleanQuestion);
                if(!answer || !answer.text){
                   return 'I am sorry we could not find the answer'
               }
               return answer.text;
           }catch (e) {
               return e.toString()
           }
       }
       response().then(data =&amp;gt; {
           client.say(channel, data);
       });

   }

   if(message.includes('!timnitgebru')){
       const cleanQuestion = message.replace('!timnitgebru', '');
       const response = async () =&amp;gt; {
           try {
               const answer = await qnaModel('timnit-gebru', cleanQuestion);
                if(!answer || !answer.text){
                   return 'I am sorry we could not find the answer'
               }
               return answer.text;
           }catch (e) {
               return e.toString()
           }
       }
       response().then(data =&amp;gt; {
           client.say(channel, data);
       });
   }

   if(message.includes('!joybuolamwini')){
       const cleanQuestion = message.replace('!joybuolamwini', '');

       const response = async () =&amp;gt; {
           try {
               const answer = await qnaModel('joy-buolamwini', cleanQuestion);
                if(!answer || !answer.text){
                   return 'I am sorry we could not find the answer'
               }
               return answer.text;
           }catch (e) {
               return e.toString()
           }
       }
       response().then(data =&amp;gt; {
           client.say(channel, data);
       });
   }
})

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now here is a short clip of my Twitch bot working within my Twitch channel.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/bFDfUJGZV9M"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The code above is my first iteration of combining NLU and the Twitch API. There are &lt;strong&gt;several&lt;/strong&gt; improvements that I can make to this code. An example would be caching answers to common questions, which would cut down on the server's and the model's response time. Nevertheless, I wanted to share my journey in experimenting with Natural Language Understanding. I hope this tutorial inspires you to dabble with TensorFlow using JavaScript. You can view the full code here. If you enjoyed this tutorial, check out my technology streams at &lt;strong&gt;&lt;a href="http://www.twitch.tv/thestrugglingblack" rel="noopener noreferrer"&gt;www.twitch.tv/thestrugglingblack&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>twitch</category>
      <category>nlu</category>
      <category>machinelearning</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Beginner-Friendly Introduction to GitLab CI/CD</title>
      <dc:creator>Zuri Hunter</dc:creator>
      <pubDate>Sat, 19 Jan 2019 22:54:52 +0000</pubDate>
      <link>https://dev.to/zurihunter/beginner-friendly-introduction-to-gitlabcicd-4p5a</link>
      <guid>https://dev.to/zurihunter/beginner-friendly-introduction-to-gitlabcicd-4p5a</guid>
      <description>&lt;p&gt;The goal of this tutorial is to give a high-level introduction of GitLab CI/CD that helps people get started in 30 minutes without having to read all of GitLab's documentation. This tutorial is geared toward beginners who wish to tinker with CI/CD tools like GitLab CI/CD. In this tutorial, I will briefly go over what is CI/CD, why I decided to go with GitLab's tool and a walkthrough on how to create a &lt;code&gt;.gitlab-ci.yaml&lt;/code&gt; with an example application.&lt;/p&gt;

&lt;h3&gt;
  
  
  CI/CD
&lt;/h3&gt;

&lt;p&gt;CI/CD is short for &lt;strong&gt;Continuous Integration / Continuous Delivery / Continuous Deployment&lt;/strong&gt;. It enables teams to build, test and release software at a faster rate by removing manual human interaction wherever possible: with continuous delivery, everything up to the final code deployment to production is automated, and with continuous deployment even that last step is automatic. One of the challenges of implementing this practice is integrating the various tools and systems required to build a CI/CD pipeline. For example, you might store your code in Bitbucket, test it with automated test suites on private infrastructure, and deploy your application to AWS or Microsoft Azure. Complicated applications spread across multiple systems are part of why not all organizations have implemented a seamless CI/CD pipeline.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why GitLab CI/CD?
&lt;/h3&gt;

&lt;p&gt;I use GitLab CI/CD for three reasons: I can build a complete CI/CD pipeline solution with one tool, it's fast, and it's open source. With everything in the same place, I can create tickets, open merge requests, write code and set up CI/CD without another application. It's essentially a one-stop shop. GitLab CI/CD runs builds on &lt;strong&gt;GitLab Runners&lt;/strong&gt;. &lt;strong&gt;Runners&lt;/strong&gt; are isolated agents (often virtual machines) that execute the predefined steps of a job and report back through the GitLab CI API. Because work can be spread across multiple Runners, pipelines complete faster than they would on a single instance. You can learn more about GitLab Runners at this &lt;a href="https://docs.gitlab.com/runner/" rel="noopener noreferrer"&gt;link&lt;/a&gt;. Finally, it's open source, so I can always contribute to the code base and create a new issue when a problem arises.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario
&lt;/h3&gt;

&lt;p&gt;Let's say we have a Node.js API that retrieves a list of books from a database. We can create a pipeline that pushes our code through three phases: build, test and deploy. A &lt;strong&gt;pipeline&lt;/strong&gt; is a group of steps that share similar characteristics. With those phases, our pipeline is made up of three types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Project Pipeline&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Continuous Integration Pipeline&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Deploy Pipeline&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;strong&gt;Project Pipeline&lt;/strong&gt; installs dependencies, runs linters and any scripts that deal with the code. The &lt;strong&gt;Continuous Integration Pipeline&lt;/strong&gt; runs automated tests and builds a distributed version of the code. Finally, the &lt;strong&gt;Deploy Pipeline&lt;/strong&gt; deploys code to a designated cloud provider and environment.&lt;/p&gt;

&lt;p&gt;The steps that the three pipelines execute are called &lt;strong&gt;jobs&lt;/strong&gt;; a series of jobs grouped by shared characteristics is called a &lt;strong&gt;stage&lt;/strong&gt;. Jobs are the basic building block of pipelines. They can be grouped together into stages, and stages can be grouped together into pipelines. Here's an example hierarchy of jobs, stages, and pipelines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;A.) Build
     i. Install NPM Dependencies
     ii. Run ES-Linter
     iii. Run Code-Minifier
B.) Test
     i. Run unit, functional and end-to-end tests
     ii. Run pkg to compile Node.js application
C.) Deploy
     i. Production
        1.) Launch EC2 instance on AWS
     ii. Staging
        1.) Launch on local development server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this hierarchy, the three top-level components are three different pipelines. The main bullets (build, test, and deploy) are &lt;strong&gt;stages&lt;/strong&gt;, and each bullet under those sections is a job. Let's break this out into a GitLab CI/CD &lt;code&gt;yaml&lt;/code&gt; file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using GitLab CI/CD
&lt;/h3&gt;

&lt;p&gt;To use GitLab CI/CD, create a file called &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; at the root of the project in your GitLab repository and add the following &lt;code&gt;yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;image: node:10.5.0

stages:
  - build
  - test
  - deploy

before_script:
  - npm install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As I mentioned earlier, GitLab CI/CD uses Runners to execute pipelines. We can define which operating system and preinstalled libraries our Runner should be based on using the &lt;code&gt;image&lt;/code&gt; directive. In our instance, we will base our Runner on a Node.js 10.5.0 image. The &lt;code&gt;stages&lt;/code&gt; directive lets us predefine the stages for the entire configuration; jobs will be executed in the order their stages are listed in the &lt;code&gt;stages&lt;/code&gt; directive. To learn more about stages you can view the documentation &lt;a href="https://docs.gitlab.com/ee/ci/yaml/#stages" rel="noopener noreferrer"&gt;here&lt;/a&gt;. The &lt;code&gt;before_script&lt;/code&gt; directive runs its commands before each job.&lt;/p&gt;

&lt;p&gt;Now let's start with our job dedicated to the &lt;strong&gt;Build&lt;/strong&gt; stage. We are going to call this job &lt;code&gt;build-min-code&lt;/code&gt;, and we want it to install dependencies and minify the code. We start by using the &lt;code&gt;script&lt;/code&gt; directive, a list of shell commands that get executed within the Runner. Then we assign this job to the "build" stage using the &lt;code&gt;stage&lt;/code&gt; directive.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;build-min-code:
  stage: build
  script:
    - npm install
    - npm run minifier

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have a job associated with our &lt;strong&gt;Build&lt;/strong&gt; stage, we are going to do the same for our &lt;strong&gt;Test&lt;/strong&gt; stage. Our test job is going to be called &lt;code&gt;run-unit-test&lt;/code&gt; and it will use the npm script in our API, &lt;code&gt;npm test&lt;/code&gt;, to run the tests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;run-unit-test:
  stage: test
  script:
    - npm run test

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we are going to add two jobs to handle our &lt;strong&gt;Deploy&lt;/strong&gt; stage: &lt;code&gt;deploy-staging&lt;/code&gt; and &lt;code&gt;deploy-production&lt;/code&gt;. These jobs follow the same layout as our previous jobs, with one small change. Currently, all of our jobs are triggered &lt;strong&gt;automatically&lt;/strong&gt; on any code push to any branch. We do not want that for the jobs that deploy our code to staging and production. To prevent it we use the &lt;code&gt;only&lt;/code&gt; directive, which defines the names of the branches and tags for which the job will run. The jobs will look like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deploy-staging:
 stage: deploy
 script:
   - npm run deploy-stage
 only:
   - develop

deploy-production:
 stage: deploy
 script:
   - npm run deploy-prod
 only:
   - master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Runner will only execute the &lt;code&gt;deploy-staging&lt;/code&gt; job if there was a change to the &lt;code&gt;develop&lt;/code&gt; branch, and &lt;code&gt;deploy-production&lt;/code&gt; only for the &lt;code&gt;master&lt;/code&gt; branch. Here is a screenshot below that shows a code push made to the &lt;code&gt;master&lt;/code&gt; branch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzm59przgns43m04b7020.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzm59przgns43m04b7020.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this image, all stages and jobs are triggered with the exception of &lt;code&gt;deploy-staging&lt;/code&gt;, since the code push was to the &lt;code&gt;master&lt;/code&gt; branch. GitLab CI/CD comes with an intuitive interface that shows which jobs and stages are running and what errors occur in the midst of the build. Below is the final version of the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file. If you wish to test this out yourself, here is the &lt;a href="https://gitlab.com/zh-examples/gitlabci-demo" rel="noopener noreferrer"&gt;link&lt;/a&gt; to the example application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;image: node:10.5.0

stages:
  - build
  - test
  - deploy

before_script:
  - npm install

build-min-code:
  stage: build
  script:
    - npm install
    - npm run minifier

run-unit-test:
  stage: test
  script:
    - npm run test

deploy-staging:
  stage: deploy
  script:
    - npm run deploy-stage
  only:
    - develop

deploy-production:
  stage: deploy
  script:
    - npm run deploy-prod
  only:
    - master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;The items covered above are a high-level overview of what GitLab CI/CD can offer. GitLab CI/CD enables much deeper control over the automation of a codebase, from building and publishing Docker images to integrating with third-party tools. I hope that you found this tutorial helpful. Thanks for reading!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>ci</category>
      <category>beginners</category>
      <category>intro</category>
    </item>
    <item>
      <title>Hi, I'm Zuri Hunter</title>
      <dc:creator>Zuri Hunter</dc:creator>
      <pubDate>Wed, 08 Mar 2017 15:30:21 +0000</pubDate>
      <link>https://dev.to/zurihunter/hi-im-zuri-hunter</link>
      <guid>https://dev.to/zurihunter/hi-im-zuri-hunter</guid>
      <description>&lt;p&gt;I have been coding for 2 years (self-taught)&lt;/p&gt;

&lt;p&gt;You can find me on Twitter as &lt;a href="https://twitter.com/ZuriHunter" rel="noopener noreferrer"&gt;@ZuriHunter&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can view my latest work on Github &lt;a href="https://github.com/ZuriHunter" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I work for Digital Globe&lt;/p&gt;

&lt;p&gt;I mostly program in these languages: Javascript, Ruby, Python and Java&lt;/p&gt;

&lt;p&gt;I am currently learning more about React.js and Docker&lt;/p&gt;

&lt;p&gt;Nice to meet you!&lt;/p&gt;

</description>
      <category>introduction</category>
    </item>
  </channel>
</rss>
