<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Zulaikha lateef</title>
    <description>The latest articles on DEV Community by Zulaikha lateef (@zulaikha12).</description>
    <link>https://dev.to/zulaikha12</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F162357%2F88d39f56-3d0c-4b48-83de-4d1506498f31.jpeg</url>
      <title>DEV Community: Zulaikha lateef</title>
      <link>https://dev.to/zulaikha12</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/zulaikha12"/>
    <language>en</language>
    <item>
      <title>Jenkins Pipeline Tutorial: A Beginner’s Guide To Continuous Delivery</title>
      <dc:creator>Zulaikha lateef</dc:creator>
      <pubDate>Sun, 12 May 2019 11:55:58 +0000</pubDate>
      <link>https://dev.to/zulaikha12/jenkins-pipeline-tutorial-a-beginner-s-guide-to-continuous-delivery-1hf1</link>
      <guid>https://dev.to/zulaikha12/jenkins-pipeline-tutorial-a-beginner-s-guide-to-continuous-delivery-1hf1</guid>
      <description>&lt;h1&gt;Jenkins Pipeline Tutorial&lt;/h1&gt;

&lt;p&gt;We’re all aware that Jenkins has proven itself as a reliable tool for implementing continuous integration, continuous testing and continuous deployment to produce good-quality software. When it comes to continuous delivery, Jenkins uses a feature called the Jenkins pipeline. To understand why the Jenkins pipeline was introduced, we first have to understand what continuous delivery is and why it is important.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9cbpw1nvpmb0cmyc4j9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9cbpw1nvpmb0cmyc4j9.png" alt="Alt text of image" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In simple words, continuous delivery is the capability to release software at all times. It is a practice that ensures the software is always in a production-ready state.&lt;/p&gt;

&lt;p&gt;What does this mean? It means that every time a change is made to the code or the infrastructure, the software team builds those changes quickly, tests them using various automation tools, and then promotes the build towards production.&lt;/p&gt;

&lt;p&gt;By speeding up the delivery process, the development team gets more time to implement any required feedback. This process of getting the software from build to production at a faster rate is carried out by implementing continuous integration and continuous delivery.&lt;/p&gt;

&lt;p&gt;Continuous delivery ensures that the software is built, tested and released more frequently. It reduces the cost, time and risk of incremental software releases. To carry out continuous delivery, Jenkins introduced a new feature called the Jenkins pipeline. This Jenkins pipeline tutorial will help you understand the importance of a Jenkins pipeline.&lt;/p&gt;

&lt;h1&gt;What is a Jenkins pipeline?&lt;/h1&gt;

&lt;p&gt;A pipeline is a collection of jobs that brings the software from version control into the hands of the end users by using automation tools. It is the feature used to incorporate continuous delivery into our software development workflow.&lt;/p&gt;

&lt;p&gt;Over the years, there have been multiple Jenkins pipeline releases, including Jenkins Build Flow, the Jenkins Build Pipeline plugin, Jenkins Workflow, etc. What are the key features of these plugins?&lt;/p&gt;

&lt;p&gt;They represent multiple Jenkins jobs as one whole workflow in the form of a pipeline.&lt;br&gt;
What do these pipelines do? These pipelines are collections of Jenkins jobs which trigger each other in a specified sequence.&lt;br&gt;
Let me explain this with an example. Suppose I’m developing a small application on Jenkins and I want to build, test and deploy it. To do this, I will create three jobs, one for each process. So, job 1 would handle the build, job 2 would run the tests and job 3 would handle deployment. I can use the Jenkins Build Pipeline plugin to perform this task. After creating the three jobs and chaining them in a sequence, the plugin will run these jobs as a pipeline.&lt;/p&gt;

&lt;p&gt;This image shows a view of all three jobs running in sequence as part of the pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnd9x7lwmt1m4sm2ebkvo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnd9x7lwmt1m4sm2ebkvo.png" alt="Alt text of image" width="800" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This approach is effective for deploying small applications. But what happens when there are complex pipelines with several processes (build, test, unit test, integration test, pre-deploy, deploy, monitor) running hundreds of jobs?&lt;/p&gt;

&lt;p&gt;The maintenance cost for such a complex pipeline is huge and increases with the number of processes. It also becomes tedious to build and manage such a vast number of jobs. To overcome this issue, a new feature called Jenkins Pipeline Project was introduced.&lt;/p&gt;

&lt;p&gt;The key feature of this pipeline is that it defines the entire deployment flow through code. What does this mean? It means that the jobs, instead of being configured by hand in Jenkins, are written as one whole script that can be stored in a version control system. It basically follows the ‘pipeline as code’ discipline. Instead of building several jobs for each phase, you can now code the entire workflow and put it in a Jenkinsfile. Below is a list of reasons why you should use the Jenkins pipeline.&lt;/p&gt;

&lt;h1&gt;Jenkins Pipeline Advantages&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;It models simple to complex pipelines as code using a Groovy DSL (Domain Specific Language)&lt;/li&gt;
&lt;li&gt;The code is stored in a text file called the Jenkinsfile, which can be checked into an SCM (Source Code Management) system&lt;/li&gt;
&lt;li&gt;It improves the user interface by incorporating user input within the pipeline&lt;/li&gt;
&lt;li&gt;It is durable in terms of unplanned restarts of the Jenkins master&lt;/li&gt;
&lt;li&gt;It can restart from saved checkpoints&lt;/li&gt;
&lt;li&gt;It supports complex pipelines by incorporating conditionals, loops, fork/join operations and allowing tasks to be performed in parallel&lt;/li&gt;
&lt;li&gt;It can integrate with several other plugins&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;What is a Jenkinsfile?&lt;/h1&gt;

&lt;p&gt;A Jenkinsfile is a text file that stores the entire workflow as code, and it can be checked into an SCM from your local system. How is this advantageous? It enables the developers to access, edit and check the code at all times.&lt;/p&gt;

&lt;p&gt;The Jenkinsfile is written using the Groovy DSL and can be created in a text editor or through the configuration page of the Jenkins instance. It is written in one of two syntaxes, namely:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Declarative pipeline syntax&lt;/li&gt;
&lt;li&gt;Scripted pipeline syntax&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Declarative pipeline is a relatively new feature that supports the pipeline as code concept. It makes the pipeline code easier to read and write. This code is written in a Jenkinsfile which can be checked into a source control management system such as Git.&lt;/p&gt;

&lt;p&gt;The scripted pipeline, on the other hand, is the traditional way of writing the code. In this pipeline, the Jenkinsfile is typically written on the Jenkins UI instance. Though both pipelines are based on the Groovy DSL, the scripted pipeline uses a stricter Groovy-based syntax because it was the first pipeline to be built on the Groovy foundation. Since this Groovy script was not desirable to all users, the declarative pipeline was introduced to offer a simpler and more opinionated Groovy syntax.&lt;/p&gt;

&lt;p&gt;The declarative pipeline is defined within a block labelled ‘pipeline’ whereas the scripted pipeline is defined within a ‘node’. This will be explained below with an example.&lt;/p&gt;

&lt;h1&gt;Pipeline concepts&lt;/h1&gt;

&lt;h2&gt;Pipeline&lt;/h2&gt;

&lt;p&gt;This is a user-defined block which contains all the processes such as build, test, deploy, etc. It is a collection of all the stages in a Jenkinsfile. All the stages and steps are defined within this block. It is the key block for a declarative pipeline syntax.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fru3y1jh5detvaz9uzi9x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fru3y1jh5detvaz9uzi9x.png" alt="Alt text of image" width="289" height="72"&gt;&lt;/a&gt;&lt;/p&gt;
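
&lt;p&gt;As a minimal sketch (the stage name and echo message here are placeholders, not part of the demo code that follows later), a declarative Jenkinsfile wraps everything in a pipeline block:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Minimal declarative skeleton; 'Build' and the message are placeholders
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;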

&lt;h2&gt;Node&lt;/h2&gt;

&lt;p&gt;A node is a machine that executes an entire workflow. It is a key part of the scripted pipeline syntax.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F98a122mkf624ummvytvq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F98a122mkf624ummvytvq.png" alt="Alt text of image" width="290" height="71"&gt;&lt;/a&gt;&lt;/p&gt;
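
&lt;p&gt;For comparison, a minimal scripted skeleton (again with a placeholder stage) wraps the work in a node block instead:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Minimal scripted skeleton; 'Build' and the message are placeholders
node {
    stage('Build') {
        echo 'Building...'
    }
}
&lt;/code&gt;&lt;/pre&gt;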

&lt;p&gt;There are various mandatory sections which are common to both the declarative and scripted pipelines, such as stages, agent and steps that must be defined within the pipeline. These are explained below:&lt;/p&gt;

&lt;h2&gt;Agent&lt;/h2&gt;

&lt;p&gt;The agent is a directive that tells Jenkins where to run the builds. It lets a single Jenkins instance distribute the workload across different agents and execute several projects at once. It instructs Jenkins to allocate an executor for the builds.&lt;/p&gt;

&lt;p&gt;A single agent can be specified for the entire pipeline, or specific agents can be allotted to each stage within a pipeline. A few of the parameters used with agents are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Any&lt;/strong&gt;&lt;br&gt;
Runs the pipeline/stage on any available agent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;None&lt;/strong&gt;&lt;br&gt;
This parameter is applied at the root of the pipeline and it indicates that there is no global agent for the entire pipeline and each stage must specify its own agent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Label&lt;/strong&gt;&lt;br&gt;
Executes the pipeline/stage on the labelled agent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker&lt;/strong&gt;&lt;br&gt;
This parameter uses a Docker container as the execution environment for the pipeline or a specific stage. In the example below, I’m using Docker to pull an Ubuntu image. This image is then used as the execution environment to run multiple commands.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz7j8yxvc3jt138fh6yp1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz7j8yxvc3jt138fh6yp1.png" alt="Alt text of image" width="300" height="143"&gt;&lt;/a&gt;&lt;/p&gt;
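
&lt;p&gt;As a sketch, a pipeline that pulls the ubuntu image and runs a command inside it might look like this (the stage name and the shell command are placeholders of my own, not taken from the demo):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;pipeline {
    // The whole pipeline runs inside an ubuntu container
    agent {
        docker { image 'ubuntu' }
    }
    stages {
        stage('Test') {
            steps {
                // Placeholder command executed inside the container
                sh 'echo "Running inside the ubuntu container"'
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;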

&lt;h2&gt;Stages&lt;/h2&gt;

&lt;p&gt;This block contains all the work that needs to be carried out. The work is specified in the form of stages. There can be more than one stage within this directive. Each stage performs a specific task. In the following example, I’ve created multiple stages, each performing a specific task.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqewihgskv3339kefz5g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqewihgskv3339kefz5g.png" alt="Alt text of image" width="680" height="379"&gt;&lt;/a&gt;&lt;/p&gt;
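
&lt;p&gt;For instance, a stages block with three placeholder stages could be sketched as:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Each stage performs one task; names and messages are placeholders
stages {
    stage('Build') {
        steps { echo 'Building...' }
    }
    stage('Test') {
        steps { echo 'Testing...' }
    }
    stage('Deploy') {
        steps { echo 'Deploying...' }
    }
}
&lt;/code&gt;&lt;/pre&gt;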

&lt;h2&gt;Steps&lt;/h2&gt;

&lt;p&gt;A series of steps can be defined within a stage block. These steps are carried out in sequence to execute the stage. There must be at least one step within a steps directive. In the following example, I’ve added an echo command within the build stage. This command is executed as part of the ‘Build’ stage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fic1zvm19crqhfxy4i17t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fic1zvm19crqhfxy4i17t.png" alt="Alt text of image" width="679" height="185"&gt;&lt;/a&gt;&lt;/p&gt;
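
&lt;p&gt;As a sketch, a ‘Build’ stage with two sequential placeholder steps would look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;stage('Build') {
    steps {
        // Steps run in order, top to bottom; both messages are placeholders
        echo 'Step one: compile the code'
        echo 'Step two: package the artifact'
    }
}
&lt;/code&gt;&lt;/pre&gt;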

&lt;p&gt;Now that you are familiar with the basic pipeline concepts, let’s start off with the Jenkins pipeline tutorial. First, let’s learn how to create a Jenkins pipeline.&lt;/p&gt;

&lt;h1&gt;Creating your first Jenkins pipeline&lt;/h1&gt;

&lt;p&gt;Step 1: Log into Jenkins and select ‘New item’ from the dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqvbbibs278mvy7gc0iw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqvbbibs278mvy7gc0iw.png" alt="Alt text of image" width="370" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 2: Next, enter a name for your pipeline and select ‘Pipeline’ as the project type. Click on ‘OK’ to proceed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu84rx7ajnj6ufzep5n2r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu84rx7ajnj6ufzep5n2r.png" alt="Alt text of image" width="768" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Scroll down to the Pipeline section and choose whether you want a declarative pipeline or a scripted one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6odvzoh1drwz91qd0d11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6odvzoh1drwz91qd0d11.png" alt="Alt text of image" width="768" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 4a: If you want to type the pipeline directly into the Jenkins UI (as I’ll do for the scripted pipeline demo), choose ‘Pipeline script’ and start typing your code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ekhh7zbporzc1jqyad3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ekhh7zbporzc1jqyad3.png" alt="Alt text of image" width="768" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 4b: If you want to load the pipeline from version control (as I’ll do for the declarative pipeline demo), select ‘Pipeline script from SCM’ and choose your SCM. In my case, I’m going to use Git throughout this demo. Enter your repository URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3v7luuyomxit4tdewzh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3v7luuyomxit4tdewzh.png" alt="Alt text of image" width="768" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 5: The ‘Script Path’ field holds the name of the Jenkinsfile that is going to be fetched from your SCM and run. Finally, click on ‘Apply’ and ‘Save’. You have successfully created your first Jenkins pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcb59qw8agk4yd7udifnv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcb59qw8agk4yd7udifnv.png" alt="Alt text of image" width="768" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;Declarative Pipeline Demo&lt;/h1&gt;

&lt;p&gt;The first part of the demo shows the working of a declarative pipeline. Refer to ‘Creating your first Jenkins pipeline’ above to get started. Let me start the demo by explaining the code I’ve written in my Jenkinsfile.&lt;/p&gt;

&lt;p&gt;Since this is a declarative pipeline, I’m writing the code locally in a file named ‘Jenkinsfile’ and then pushing this file to my remote Git repository. While executing the ‘Declarative pipeline’ demo, this file will be fetched from my Git repository. The following is a simple demonstration of building a pipeline that runs multiple stages, each performing a specific task.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The declarative pipeline is defined by writing the code within a pipeline block.&lt;/li&gt;
&lt;li&gt;Within the block, I’ve defined an agent with the tag ‘any’. This means that the pipeline is run on any available executor.&lt;/li&gt;
&lt;li&gt;Next, I’ve created four stages, each performing a simple task.&lt;/li&gt;
&lt;li&gt;Stage one executes a simple echo command, which is specified within the ‘steps’ block.&lt;/li&gt;
&lt;li&gt;Stage two executes an input directive. This directive lets you prompt for user input within a stage. It displays a message and waits for the user’s response. If the input is approved, the stage triggers further deployments.&lt;/li&gt;
&lt;li&gt;In this demo, a simple input message ‘Do you want to proceed?’ is displayed. Depending on the user’s response, the pipeline either proceeds with the execution or aborts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3in753njm1ivwqmqf71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3in753njm1ivwqmqf71.png" alt="Alt text of image" width="768" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stage three uses a ‘when’ directive with a ‘not’ condition. This directive lets you execute a stage depending on the conditions defined within the ‘when’ block. If the conditions are met, the corresponding stage is executed. It must be defined at the stage level.&lt;/li&gt;
&lt;li&gt;In this demo, I’m using a ‘not’ condition, which executes the stage when the nested condition is false. Hence, when ‘branch is master’ is false, the echo command in the following step is executed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5x37rirsk30mith848b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5x37rirsk30mith848b.png" alt="Alt text of image" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;pipeline {
    agent any
    stages {
        stage('One') {
            steps {
                echo 'Hi, this is Zulaikha from edureka'
            }
        }
        stage('Two') {
            steps {
                input('Do you want to proceed?')
            }
        }
        stage('Three') {
            when {
                not {
                    branch "master"
                }
            }
            steps {
                echo "Hello"
            }
        }
        stage('Four') {
            parallel {
                stage('Unit Test') {
                    steps {
                        echo "Running the unit test..."
                    }
                }
                stage('Integration test') {
                    agent {
                        docker {
                            reuseNode true
                            image 'ubuntu'
                        }
                    }
                    steps {
                        echo "Running the integration test..."
                    }
                }
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
&lt;li&gt;Stage four runs a parallel directive. This directive allows you to run nested stages in parallel. Here, I’m running two nested stages in parallel, namely ‘Unit Test’ and ‘Integration test’. Within the ‘Integration test’ stage, I’m defining a stage-specific Docker agent. This Docker agent will execute the ‘Integration test’ stage.&lt;/li&gt;
&lt;li&gt;Within the agent block are two options. reuseNode is a Boolean; when set to true, the Docker container runs on the agent specified at the top level of the pipeline. In this case the top-level agent is ‘any’, which means the container is executed on any available node. By default this Boolean is false.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are some restrictions when using the parallel directive:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A stage can have either a parallel or a steps block, but not both&lt;/li&gt;
&lt;li&gt;Within a parallel directive you cannot nest another parallel directive&lt;/li&gt;
&lt;li&gt;If a stage has a parallel directive then you cannot define ‘agent’ or ‘tool’ directives for it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that I’ve explained the code, let’s run the pipeline. The following screenshots show the result of the pipeline. In the image below, the pipeline waits for the user input, and on clicking ‘Proceed’, the execution resumes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbtsfo9kqybu2re7oltc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbtsfo9kqybu2re7oltc.png" alt="Alt text of image" width="669" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0y00oylm0az5yzzgo7q5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0y00oylm0az5yzzgo7q5.png" alt="Alt text of image" width="768" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;Scripted Pipeline Demo&lt;/h1&gt;

&lt;p&gt;To give you a basic understanding of the scripted pipeline, let’s execute a simple script. Refer to ‘Creating your first Jenkins pipeline’ above to create the scripted pipeline. I will run the following script.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foizn5sm2faf80d9us1e0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foizn5sm2faf80d9us1e0.png" alt="Alt text of image" width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;node {
    for (i = 0; i &amp;lt; 2; i++) {
        stage "Stage #" + i
        print 'Hello, world !'
        if (i == 0) {
            git "https://github.com/Zulaikha12/gitnew.git"
            echo 'Running on Stage #0'
        }
        else {
            build 'Declarative pipeline'
            echo 'Running on Stage #1'
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In the above code I have defined a ‘node’ block within which I’m running the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A ‘for’ loop. This loop creates two stages, namely Stage #0 and Stage #1. Once each stage is created, it prints the ‘Hello, world !’ message.&lt;/li&gt;
&lt;li&gt;Next, I’m defining a simple ‘if else’ statement. If the value of ‘i’ equals zero, then Stage #0 executes the commands that follow (git and echo). The ‘git’ step clones the specified Git repository, and the echo command simply displays the specified message.&lt;/li&gt;
&lt;li&gt;The else branch is executed when ‘i’ is not equal to zero. Therefore, Stage #1 runs the commands within the else block. The ‘build’ step runs the job specified; in this case it runs the ‘Declarative pipeline’ job that we created earlier in the demo. Once that job completes, it runs the echo command.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that I’ve explained the code, let’s run the pipeline. The following screenshots show the result of the scripted pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shows the results of Stage #0&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffsmlqe6eizlmmj8cax0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffsmlqe6eizlmmj8cax0.png" alt="Alt text of image" width="768" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shows the logs of Stage #1 and starts building the ‘Declarative pipeline’&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90ynuczs1jk55fghshun.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90ynuczs1jk55fghshun.png" alt="Alt text of image" width="768" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Execution of the ‘Declarative pipeline’ job&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3uazh6d2ne9a9lvk8sd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3uazh6d2ne9a9lvk8sd.png" alt="Alt text of image" width="768" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Results&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fme4xzqdhv5yh15f55rxl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fme4xzqdhv5yh15f55rxl.png" alt="Alt text of image" width="768" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope this blog helped you understand the basics of scripted and declarative pipelines.&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="https://www.edureka.co/blog" rel="noopener noreferrer"&gt;Edureka&lt;/a&gt;&lt;/p&gt;

</description>
      <category>jenkins</category>
      <category>continuosdelivery</category>
      <category>jenkinspipeline</category>
      <category>pipelineconcepts</category>
    </item>
    <item>
      <title>Data Science vs Machine Learning – What’s The Difference?</title>
      <dc:creator>Zulaikha lateef</dc:creator>
      <pubDate>Thu, 02 May 2019 05:41:38 +0000</pubDate>
      <link>https://dev.to/zulaikha12/data-science-vs-machine-learning-what-s-the-difference-687</link>
      <guid>https://dev.to/zulaikha12/data-science-vs-machine-learning-what-s-the-difference-687</guid>
      <description>&lt;p&gt;&lt;strong&gt;Data Science vs Machine Learning:&lt;/strong&gt;&lt;br&gt;
Machine Learning and Data Science are the most significant domains in today’s world. All the sci-fi stuff that you see happening in the world is a contribution from fields like Data Science, Artificial Intelligence (AI) and Machine Learning. In this blog on Data Science vs Machine Learning, we’ll discuss the importance and the distinction between Machine Learning and Data Science.&lt;/p&gt;

&lt;p&gt;I’ll be covering the following topics in this Data Science vs Machine learning blog:&lt;/p&gt;

&lt;p&gt;What Is Data Science?&lt;br&gt;
What Is Machine Learning?&lt;br&gt;
Fields Of Data Science&lt;br&gt;
Use Case&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Is Data Science?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before we get into the details of Data Science, let’s understand how it came into existence. Do you remember when most data was stored in Excel sheets? Those were simpler times: we generated less data, and the data was structured. Back then, simple Business Intelligence (BI) tools were enough to analyze and process it.&lt;/p&gt;

&lt;p&gt;But times have changed. Over 2.5 quintillion bytes of data is created every single day, and this number is only going to grow. By 2020, it’s estimated that 1.7MB of data will be created every second for every person on earth. Can you imagine how much data that is? How are we going to process this much data?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7crpztuxmpomzf5v85pn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7crpztuxmpomzf5v85pn.png" alt="What Is Data Science" width="768" height="537"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not only that, the data generated these days is mostly unstructured or semi-structured and simple BI tools cannot do the work anymore. We need more complex and effective algorithms to process and extract useful insights from the data. This is where Data science comes in.&lt;/p&gt;

&lt;p&gt;Data Science is all about uncovering findings from data, by exploring data at a granular level to mine and understand complex behaviors, trends, patterns and inferences. It’s about surfacing the needful insight that can enable companies to make smarter business decisions.&lt;/p&gt;

&lt;p&gt;For example, you have surely binge-watched something on Netflix. Netflix mines the movie-viewing patterns of its users to understand what drives their interest, and uses that to decide which series to produce.&lt;/p&gt;

&lt;p&gt;Similarly, Target identifies each customer’s shopping behavior by drawing out patterns from its database, which helps it make better marketing decisions.&lt;/p&gt;

&lt;p&gt;Now that you know why Data Science is important, let’s move ahead and discuss what Machine Learning is.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Is Machine Learning?&lt;/strong&gt;&lt;br&gt;
The idea behind Machine Learning is that you teach machines by feeding them data and letting them learn on their own, without any human intervention. To understand Machine Learning, let’s consider a small scenario.&lt;/p&gt;

&lt;p&gt;Let’s say that you’ve enrolled for skating classes and you have no prior experience of skating. Initially, you’d be pretty bad at it because you have no idea about how to skate. But as you observe and pick up more information, you get better. Observing is just another way of collecting data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj52s4nht34edkqj3l641.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj52s4nht34edkqj3l641.png" alt="What Is Machine Learning" width="513" height="300"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;Just like how we humans learn from our observations and experience, machines are also capable of learning on their own when they’re fed a good amount of data. This is exactly how Machine Learning works.&lt;/p&gt;

&lt;p&gt;Machine Learning is the process of getting machines to automatically learn and improve from experience without being explicitly programmed.&lt;/p&gt;

&lt;p&gt;Machine Learning begins with reading and observing the training data to find useful patterns and build a model that predicts the correct outcome. The performance of the model is then evaluated using a testing data set. This process is repeated until the machine learns to map the input to the correct output without human intervention.&lt;/p&gt;
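&lt;p&gt;To make that cycle concrete, here is a minimal sketch of training on one portion of the data and evaluating on a held-out portion, using a toy one-parameter model in plain Python (the data and the model are invented for illustration):&lt;/p&gt;

```python
import random

# Toy data set: inputs x with outputs that follow y = 3*x plus small noise.
random.seed(0)
data = [(x, 3 * x + random.uniform(-0.1, 0.1)) for x in range(1, 21)]

# Split the data into a training set and a held-out testing set.
random.shuffle(data)
train, test = data[:15], data[15:]

# "Training": estimate the single parameter w in the model y = w * x
# by averaging y / x over the training examples.
w = sum(y / x for x, y in train) / len(train)

# "Testing": measure the mean absolute error on data the model never saw.
mae = sum(abs(y - w * x) for x, y in test) / len(test)
print(round(w, 1))   # the learned parameter, close to the true slope 3
print(round(mae, 1)) # a small error on unseen inputs
```

&lt;p&gt;Real projects replace the toy model with an algorithm from a library, but the train-then-test discipline is the same.&lt;/p&gt;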

&lt;p&gt;I hope you now have an idea of what Machine Learning is.&lt;/p&gt;

&lt;p&gt;Before we do the Data Science vs Machine Learning comparison, let’s try to understand the different fields covered under Data Science.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fields Of Data Science&lt;/strong&gt;&lt;br&gt;
Data Science covers a wide spectrum of domains, including Artificial Intelligence (AI), Machine Learning and Deep Learning. Data Science uses various AI, Machine Learning and Deep Learning methodologies in order to analyse data and extract useful insights from it. To make things clearer, let me define these terms for you:&lt;/p&gt;

&lt;p&gt;Artificial Intelligence: Artificial Intelligence is a subset of Data Science which enables machines to simulate human-like behavior.&lt;/p&gt;

&lt;p&gt;Machine Learning: Machine learning is a sub-field of Artificial Intelligence which provides machines the ability to learn automatically &amp;amp; improve from experience without being explicitly programmed.&lt;/p&gt;

&lt;p&gt;Deep Learning: Deep Learning is a part of Machine Learning that uses algorithms inspired by the structure and function of the brain, called artificial neural networks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2z3barrnffwj59fhvx2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2z3barrnffwj59fhvx2.png" alt="Fields Of Data Science" width="528" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To conclude, Data Science involves the extraction of knowledge from data. In order to do so, it uses a bunch of different methods from various disciplines, like Machine Learning, AI and Deep Learning. A point to note here is that Data Science is a wider field and does not exclusively rely on these techniques.&lt;/p&gt;

&lt;p&gt;Now that you have a clear distinction between AI, Machine Learning and Deep Learning, let’s discuss a use case wherein we’ll see how Data Science and Machine Learning is used in the working of recommendation engines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Case: Recommendation Engine&lt;/strong&gt;&lt;br&gt;
Before we discuss how Machine Learning and Data Science are implemented in a recommendation system, let’s see what exactly a recommendation engine is.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Is A Recommendation Engine?&lt;/strong&gt;&lt;br&gt;
Surely, you have all used Amazon for online shopping. Have you noticed that when you look for a particular item on Amazon, you get recommendations for similar products? How does Amazon know what to suggest?&lt;/p&gt;

&lt;p&gt;A big reason companies like Amazon, Walmart and Netflix are doing so well is how they handle user-generated data.&lt;/p&gt;

&lt;p&gt;A recommendation system narrows down a list of choices for each user, based on their browsing history, ratings, profile details, transaction details, cart details and so on. Such a system also provides useful insights into customers’ shopping patterns.&lt;/p&gt;
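&lt;p&gt;As a rough sketch of that narrowing-down step, assuming a made-up catalogue and a user’s recently browsed categories:&lt;/p&gt;

```python
# Hypothetical catalogue: product mapped to the categories it belongs to.
catalogue = {
    "laptop bag":   {"laptops", "accessories"},
    "gaming mouse": {"laptops", "gaming"},
    "blender":      {"kitchen"},
    "hdmi cable":   {"laptops", "accessories"},
}

# Implicit signal: categories this user has browsed recently.
browsing_history = {"laptops", "accessories"}

# Score each product by how many of its categories the user has browsed,
# then keep only the products with at least one match, best matches first.
scored = sorted(
    ((len(cats.intersection(browsing_history)), item)
     for item, cats in catalogue.items()),
    reverse=True,
)
shortlist = [item for score, item in scored if score > 0]
print(shortlist)  # laptop-related items first; the blender is filtered out
```

&lt;p&gt;Production systems use far richer signals and models, but the idea is the same: rank the catalogue against what is known about the user and drop irrelevant items.&lt;/p&gt;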

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2knqi5dtb4nl8lgwh5qu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2knqi5dtb4nl8lgwh5qu.jpg" alt="Recommendation Engine" width="768" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each user is given a personalized view of the eCommerce website based on their profile, which helps them find relevant products. For example, if you’re looking for a new laptop on Amazon, you might also want to buy a laptop bag. Based on such associations, Amazon will recommend more products to you.&lt;/p&gt;

&lt;p&gt;Moving ahead, let’s discuss how Data Science and Machine learning are used in a Recommendation engine.&lt;/p&gt;

&lt;p&gt;A Data Science workflow has six well-defined stages:&lt;/p&gt;

&lt;p&gt;Business Requirements&lt;br&gt;
Data Acquisition&lt;br&gt;
Data Wrangling&lt;br&gt;
Data Exploration&lt;br&gt;
Data Modelling&lt;br&gt;
Deployment &amp;amp; Optimization&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Business Requirements&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A Data Science project always starts with defining the Business requirements. It is important that you understand the problem you are trying to solve. The main focus of this stage is to identify the different goals of your project.&lt;/p&gt;

&lt;p&gt;In our case, the objective is to build a recommendation engine that will suggest relevant items to each customer based on the data generated by them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Data Acquisition&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that you’ve defined the objectives of your project, it’s time to start collecting the data. Data can be gathered from different sources, such as explicit sources and implicit sources:&lt;/p&gt;

&lt;p&gt;Explicit Data: Data the users enter themselves, such as ratings and comments on products.&lt;br&gt;
Implicit Data: Purchase history, cart details, search history and so on.&lt;/p&gt;

&lt;p&gt;Collecting such data is easy because users generate it simply by using the application; no extra work is asked of them. And since each user is bound to have a different opinion about a product, their data sets will be distinct.&lt;/p&gt;
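&lt;p&gt;A tiny sketch of how the two kinds of signal might be combined for one user (the items, counts and weights below are all invented):&lt;/p&gt;

```python
# Explicit data: star ratings the user entered themselves.
explicit = {"headphones": 5, "phone case": 2}

# Implicit data: how many times the user viewed each item.
implicit = {"headphones": 7, "usb cable": 3}

# Blend both sources into one preference score per item. Explicit feedback
# is rarer but more deliberate, so it gets a heavier (arbitrary) weight.
items = set(explicit) | set(implicit)
preference = {
    item: 2.0 * explicit.get(item, 0) + 0.5 * implicit.get(item, 0)
    for item in items
}
print(preference["headphones"])  # 2.0*5 + 0.5*7 = 13.5
```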

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjrv006tsoj7tlu2idnb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjrv006tsoj7tlu2idnb.png" alt="Data Science Process" width="768" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Data Wrangling (Cleaning)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In surveys, the majority of Data Scientists interviewed report spending 50 to 80 percent of their time cleaning data. Data cleaning is widely considered one of the most time-consuming tasks in Data Science.&lt;/p&gt;

&lt;p&gt;Data cleaning is the process of removing irrelevant and inconsistent data. At this stage, you convert your data into the desired format so that your Machine Learning model can interpret it. It is necessary to get rid of inconsistencies, as they can lead to inaccurate outcomes.&lt;/p&gt;

&lt;p&gt;For example, filtering significant logs from less significant ones, identifying fake reviews, removing unnecessary comments and handling missing values are all dealt with at this stage.&lt;/p&gt;
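&lt;p&gt;In code, the simplest of these cleaning steps, dropping records with missing values and exact duplicates, might look like this (the review records are made up):&lt;/p&gt;

```python
# Hypothetical raw review records: some are duplicates, one lacks a rating.
raw = [
    {"user": "a", "product": "laptop", "rating": 4},
    {"user": "a", "product": "laptop", "rating": 4},     # exact duplicate
    {"user": "b", "product": "laptop", "rating": None},  # missing value
    {"user": "c", "product": "mouse",  "rating": 5},
]

# Drop records with missing ratings, then drop exact duplicates,
# preserving the original order of first appearance.
seen, cleaned = set(), []
for rec in raw:
    if rec["rating"] is None:
        continue
    key = (rec["user"], rec["product"], rec["rating"])
    if key not in seen:
        seen.add(key)
        cleaned.append(rec)
print(len(cleaned))  # 2 records survive the cleaning
```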

&lt;p&gt;&lt;strong&gt;Step 4: Data Exploration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Data Exploration involves understanding the patterns in the data and retrieving useful insights from it. At this stage, each customer’s shopping pattern is evaluated so that relevant products can be suggested to them.&lt;/p&gt;

&lt;p&gt;For example, if you’re looking to buy the Harry Potter book series on Amazon, there is a good chance you might also want The Lord of the Rings or similar books in the same genre, so Amazon recommends those books to you.&lt;br&gt;
As you provide the engine with more data, its recommendations get better.&lt;/p&gt;
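&lt;p&gt;One way to surface such associations during exploration is to count how often items appear in the same basket; a minimal sketch with invented purchase data:&lt;/p&gt;

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase baskets.
baskets = [
    {"harry potter", "lord of the rings"},
    {"harry potter", "lord of the rings", "bookmark"},
    {"harry potter", "cookbook"},
]

# Count how often each pair of items is bought together.
pairs = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pairs[(a, b)] += 1

# Items most often co-purchased with "harry potter" become recommendations.
recs = {p: n for p, n in pairs.items() if "harry potter" in p}
print(max(recs, key=recs.get))  # ('harry potter', 'lord of the rings')
```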

&lt;p&gt;&lt;strong&gt;Step 5: Data Modelling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As mentioned earlier, Machine Learning is a part of Data Science and at this stage in our data cycle, Machine Learning is implemented. Machine Learning can also be a part of Data exploration or visualization if needed, but this stage is specifically for building a Machine learning model.&lt;/p&gt;

&lt;p&gt;In order to understand data modelling, let’s break down the process of Machine Learning.&lt;/p&gt;

&lt;p&gt;Machine Learning is carried out in six distinct stages:&lt;/p&gt;

&lt;p&gt;Importing Data&lt;br&gt;
Data Cleaning&lt;br&gt;
Creating a Model&lt;br&gt;
Model Training&lt;br&gt;
Model Testing&lt;br&gt;
Improve the accuracy of the model&lt;/p&gt;

&lt;p&gt;Importing Data: At this stage, the data that was gathered is imported for the machine learning process. The data must be in a readable format, such as a CSV file or a table.&lt;/p&gt;

&lt;p&gt;Data Cleaning: Data can have multiple duplicate values, missing values or N/A values. Such inconsistencies in the data can cause wrongful predictions and must be dealt with in this stage.&lt;/p&gt;

&lt;p&gt;Creating a Model: This stage involves splitting the data set into two sets, one for training and the other for testing. You then build the model using the training data set, applying Machine Learning algorithms such as Logistic Regression, Linear Regression, Random Forest or Support Vector Machines.&lt;/p&gt;
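&lt;p&gt;The split itself is straightforward; a common convention is to shuffle the data and hold out around 20 percent for testing (the data set below is invented):&lt;/p&gt;

```python
import random

# A hypothetical labelled data set of (features, label) pairs.
dataset = [([i, i % 3], i % 2) for i in range(100)]

# Shuffle first so both sets are drawn from the same distribution,
# then hold out the last 20% for testing; the model never sees it.
random.seed(42)
random.shuffle(dataset)
cut = int(len(dataset) * 0.8)
train_set, test_set = dataset[:cut], dataset[cut:]
print(len(train_set), len(test_set))  # 80 20
```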

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5443kwpjejlhwd6n2a40.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5443kwpjejlhwd6n2a40.jpg" alt="Machine Learning Process" width="768" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Model Training: At this stage, the Machine Learning model is trained on the training data set. A large portion of the data is used for training so that the model learns to map inputs to outputs across a wide range of values.&lt;/p&gt;

&lt;p&gt;Model Testing: After the model is trained, it is then evaluated by using the testing data set. At this stage, the model is fed new data points and it must predict the outcome by running the new data points on the Machine learning model that was built earlier.&lt;/p&gt;

&lt;p&gt;Improve the Model: After the model is evaluated on the testing data, its accuracy is calculated. There are a number of ways to improve the model’s efficiency; methods such as cross-validation are used to make it more accurate.&lt;/p&gt;
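&lt;p&gt;Cross-validation generalizes the single train/test split: the data is divided into k folds, each fold takes a turn as the test set, and the k scores are averaged. A bare-bones sketch, with the model evaluation stubbed out by a hypothetical evaluate function:&lt;/p&gt;

```python
# Stand-in for real training and scoring: a genuine implementation would
# fit a model on `train` and return its accuracy on `test`.
def evaluate(train, test):
    return len(train) / (len(train) + len(test))

def cross_validate(data, k=5):
    fold_size = len(data) // k
    scores = []
    for i in range(k):
        # Fold i serves as the test set; everything else is training data.
        test = data[i * fold_size:(i + 1) * fold_size]
        train = data[:i * fold_size] + data[(i + 1) * fold_size:]
        scores.append(evaluate(train, test))
    return sum(scores) / k  # average score over the k folds

print(cross_validate(list(range(100)), k=5))
```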

&lt;p&gt;So, that was all about the Machine Learning process. Coming to the last stage of the data life cycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Deployment &amp;amp; Optimization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The goal of this stage is to deploy the final model onto a production environment for final user acceptance. At this stage, users must validate the performance of the models and if there are any issues with the model then they must be fixed in this stage.&lt;/p&gt;

&lt;p&gt;Before I end this blog, I want to conclude that Data Science and Machine Learning are interconnected fields; since Machine Learning is a part of Data Science, comparing them head-to-head makes little sense.&lt;/p&gt;

&lt;p&gt;Machine Learning aids Data Science by providing a set of algorithms for data exploration, data modelling, decision making and so on. Data Science, in turn, binds a set of Machine Learning algorithms together to predict outcomes.&lt;/p&gt;

&lt;p&gt;With this, we come to the end of this blog on Data Science vs Machine Learning. If you have any queries regarding this topic, please comment down below.&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="https://www.edureka.co/blog" rel="noopener noreferrer"&gt;Edureka&lt;/a&gt;&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>machinelearning</category>
      <category>datascientist</category>
    </item>
  </channel>
</rss>
