<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rangle.io </title>
    <description>The latest articles on DEV Community by Rangle.io  (@rangleio).</description>
    <link>https://dev.to/rangleio</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F239713%2F531cb5d4-1d08-4a4c-a7d8-1217653395db.jpg</url>
      <title>DEV Community: Rangle.io </title>
      <link>https://dev.to/rangleio</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rangleio"/>
    <language>en</language>
    <item>
      <title>Running Jenkins and Persisting state locally using Docker</title>
      <dc:creator>Rangle.io </dc:creator>
      <pubDate>Tue, 10 Dec 2019 17:44:55 +0000</pubDate>
      <link>https://dev.to/rangle/running-jenkins-and-persisting-state-locally-using-docker-2ndl</link>
      <guid>https://dev.to/rangle/running-jenkins-and-persisting-state-locally-using-docker-2ndl</guid>
      <description>&lt;p&gt;Jenkins is one of the most popular Continuous Integration and Delivery Servers today, so it's only natural that you're probably interested in learning more about it. When starting out, you'll need to first run it on your local machine. However, the problem with that is the Jenkins configuration files will then live directly on your machine. A better solution is to run it as a Docker container, here are some of the reasons why:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  All of your Jenkins &lt;strong&gt;configuration files&lt;/strong&gt; live inside the container rather than the host machine. Knowing that all the files you need are inside the container, you can eliminate the issue of accidentally mixing your files with Jenkins configuration files.&lt;/li&gt;
&lt;li&gt;  Docker instances are easier to manage if you are interested in running Jenkins on &lt;em&gt;multiple platforms&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;  You can easily create and destroy the Jenkins server and remove all the Jenkins data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another benefit of using containers is persisting the state of your Jenkins server using Docker volumes. Why do this?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  You get to keep all your projects and configurations even after restarting your computer (local machine).&lt;/li&gt;
&lt;li&gt;  You don't need to run the whole Jenkins setup again.&lt;/li&gt;
&lt;li&gt;  You can remove your container instance and still be able to recover the state of your Jenkins server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With that in mind, I'll show you how you can start configuring Jenkins and persisting state on your local machine. Let's get started!&lt;/p&gt;

&lt;h3&gt;
  
  
  Jenkins setup
&lt;/h3&gt;

&lt;p&gt;Before we get started, you'll need to install Docker on your machine. If you're not sure how, refer to the &lt;a href="https://docs.docker.com/install/"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After installing Docker, download the latest stable Jenkins image by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker image pull jenkins/jenkins:lts
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You should see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kYZoVFLx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2AepLT8pmFrAT-JUfb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kYZoVFLx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2AepLT8pmFrAT-JUfb.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Persisting Jenkins Data
&lt;/h3&gt;

&lt;p&gt;You can create a volume by running the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker volume create [YOUR VOLUME]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Volumes are used to make sure that you don't lose your Jenkins data. If you are using the &lt;code&gt;-v&lt;/code&gt; flag on container creation ( &lt;code&gt;docker container run&lt;/code&gt;), feel free to skip this step since Docker will automatically create the volume for you.&lt;/p&gt;

&lt;p&gt;Run the container by attaching the volume and mapping the target port. In this example, we'll also run it in detached mode. Here is the command to run your Docker container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker container run -d \ -p [YOUR PORT]:8080 \ -v [YOUR VOLUME]:/var/jenkins_home \ --name jenkins-local \ jenkins/jenkins:lts
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you were wondering what the arguments stand for, here is what each means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  -d: run the container in detached mode (in the background)&lt;/li&gt;
&lt;li&gt;  -v: attach a volume&lt;/li&gt;
&lt;li&gt;  -p: publish a container port to a host port&lt;/li&gt;
&lt;li&gt;  --name: name the container&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And now, we're ready to take a look at an example of how you could run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker container run -d -p 8082:8080 \ -v jenkinsvol1:/var/jenkins_home \ --name jenkins-local \ jenkins/jenkins:lts
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After running the command, Docker prints the ID of your new container. You can use this &lt;code&gt;CONTAINER ID&lt;/code&gt; (or the container name) in the next steps of your setup.&lt;/p&gt;

&lt;p&gt;In the command we ran, &lt;code&gt;/var/jenkins_home&lt;/code&gt; is the path where the Jenkins state is stored inside our container instance. The most important argument for data persistence in this example is &lt;code&gt;-v [YOUR VOLUME]:/var/jenkins_home&lt;/code&gt;. This argument is what tells Docker to link the volume to that directory inside the container. To learn more about Docker volumes, you can check out the &lt;a href="https://docs.docker.com/storage/volumes/"&gt;official documentation&lt;/a&gt;. If you're not familiar with Docker, you can start with this helpful Docker blog series, &lt;a href="https://rangle.io/blog/learning-docker-command-line-interface/"&gt;Learning Docker - The Command Line Interface&lt;/a&gt;. You'll notice that I am using port &lt;code&gt;8082&lt;/code&gt; instead of the default &lt;code&gt;8080&lt;/code&gt;. The reason, other than demonstrating that you can use other ports, is that port &lt;code&gt;8080&lt;/code&gt; is commonly used by other web frameworks.&lt;/p&gt;
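&lt;p&gt;If you find yourself changing the port or volume name often, you can wrap the run command in a small script. Note that this is only a sketch under my own assumptions: the variable names, their defaults, and the &lt;code&gt;DRY_RUN&lt;/code&gt; switch are mine, not part of Docker or Jenkins. By default it just prints the composed command so you can inspect it; set &lt;code&gt;DRY_RUN=0&lt;/code&gt; to actually execute it.&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: a parameterized wrapper around the run command above.
# Defaults mirror the article's example values; override via environment.
PORT="${PORT:-8082}"
VOLUME="${VOLUME:-jenkinsvol1}"
NAME="${NAME:-jenkins-local}"

CMD="docker container run -d -p ${PORT}:8080 -v ${VOLUME}:/var/jenkins_home --name ${NAME} jenkins/jenkins:lts"

# DRY_RUN=1 (the default here) prints the command instead of running it.
if [ "${DRY_RUN:-1}" = "1" ]; then
  echo "$CMD"
else
  eval "$CMD"
fi
```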

&lt;p&gt;If you run &lt;code&gt;docker ps&lt;/code&gt;, you should see your Docker container running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--utHDOyok--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2APNBlv9bjSstxPS9L.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--utHDOyok--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2APNBlv9bjSstxPS9L.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After confirming that your container is running, go to &lt;code&gt;localhost:[YOUR PORT]&lt;/code&gt; (&lt;code&gt;localhost:8082&lt;/code&gt; in my example) in your browser and you should see this page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BEcZw5vP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2ALJ6aicrm6-krZ1GZ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BEcZw5vP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2ALJ6aicrm6-krZ1GZ.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a part of the Jenkins setup, we need to view the password inside the container instance. In order to do this, we need to use the &lt;code&gt;CONTAINER ID&lt;/code&gt; (or the name) and run &lt;code&gt;docker exec&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here is the full command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker container exec \ [CONTAINER ID or NAME] \ sh -c "cat /var/jenkins_home/secrets/initialAdminPassword"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After running the command, you should see the initial admin password. Copy it and paste it into the webpage to unlock Jenkins. After unlocking, click on &lt;strong&gt;Install suggested plugins&lt;/strong&gt; on the &lt;strong&gt;Customize Jenkins&lt;/strong&gt; page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oFRTgLZ8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2AbD6MGyYUp_qWEtj8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oFRTgLZ8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2AbD6MGyYUp_qWEtj8.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wait until the installation is complete, and then you can proceed to create your first admin user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---aEvasrA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2ALLEhUtzeBjjnWwrK.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---aEvasrA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2ALLEhUtzeBjjnWwrK.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating the admin user, set up the instance configuration. Since you are only using Jenkins locally, leave the URL as your &lt;code&gt;localhost&lt;/code&gt; URL. Click on &lt;strong&gt;Save and Finish&lt;/strong&gt; to start using Jenkins.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j0UK6TsJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2AhqVjL0JInO0d0M0q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j0UK6TsJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2AhqVjL0JInO0d0M0q.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Confirming Jenkins state is persisted
&lt;/h3&gt;

&lt;p&gt;Once you get to the Jenkins home page, create a new job.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k4ItSEoI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2Ad87mzGgB88gKhdo9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k4ItSEoI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2Ad87mzGgB88gKhdo9.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Name the job &lt;strong&gt;Test&lt;/strong&gt; and set it to be a &lt;strong&gt;Freestyle project&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cq9Ttn5u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2Af-241x3RfgtU2mn5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cq9Ttn5u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2Af-241x3RfgtU2mn5.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Leave everything as default and add a new &lt;strong&gt;Execute shell&lt;/strong&gt; build step under &lt;strong&gt;Build&lt;/strong&gt;. Add this as the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "Working"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Save the job and click on &lt;strong&gt;Build Now&lt;/strong&gt; to start running the job.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Coks_EQ6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2Ag1rgJyIrwi1tBrg2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Coks_EQ6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2Ag1rgJyIrwi1tBrg2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the &lt;strong&gt;Build Status (blue ball)&lt;/strong&gt; under &lt;strong&gt;Build History (left sidebar)&lt;/strong&gt; to view the console output. You should see that our command ran with no problems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--E1BMTzWz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2A47jvDSl-KxN77Xiu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--E1BMTzWz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2A47jvDSl-KxN77Xiu.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Recovering Jenkins
&lt;/h3&gt;

&lt;p&gt;After confirming that the job runs, we want to make sure that we can recover our &lt;strong&gt;Jenkins&lt;/strong&gt; configuration if needed. In Jenkins, all of the configuration is stored in &lt;code&gt;/var/jenkins_home/&lt;/code&gt; by default. Remember that we are using a Docker volume to store the state of our &lt;strong&gt;Jenkins&lt;/strong&gt; instance.&lt;/p&gt;

&lt;p&gt;In order to check if persistence works correctly, let's destroy our current Docker container and see if we can log in as an admin and view our job's build history.&lt;/p&gt;

&lt;p&gt;First, run &lt;code&gt;docker container kill [CONTAINER ID]&lt;/code&gt; to stop the instance. After doing so, run &lt;code&gt;docker container rm [CONTAINER ID]&lt;/code&gt; to completely remove the container instance.&lt;/p&gt;
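&lt;p&gt;If you tear containers down often, the two commands can be folded into a small helper. This is just a sketch (the function name is mine, not a Docker convention); it kills and removes a container given its ID or name, quieting the error if the container is already stopped:&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: kill and remove a container in one step.
# Usage: remove_container jenkins-local
remove_container() {
  docker container kill "$1" 2>/dev/null  # ignore "not running" errors
  docker container rm "$1"
}
```

&lt;p&gt;With the example container from earlier, this would be invoked as &lt;code&gt;remove_container jenkins-local&lt;/code&gt;. Since the Jenkins state lives in the volume, not the container, this is safe to run.&lt;/p&gt;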

&lt;p&gt;Visit &lt;code&gt;localhost:[YOUR PORT]&lt;/code&gt; (&lt;code&gt;localhost:8082&lt;/code&gt; in my example) to confirm that the Jenkins instance is no longer running. You should see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gWWMyoY9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2AzgmD5pfu9waTaIqi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gWWMyoY9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2AzgmD5pfu9waTaIqi.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the same command we used to create the container instance in order to recover the Jenkins instance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker container run -d -p 8082:8080 \ -v jenkinsvol1:/var/jenkins_home \ --name jenkinslocal \ jenkins/jenkins:lts
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You should get a new &lt;code&gt;CONTAINER ID&lt;/code&gt; after running the command. Visit &lt;code&gt;localhost:[YOUR PORT]&lt;/code&gt; (&lt;code&gt;localhost:8082&lt;/code&gt; in my example). You should see the login page. Log in with the admin credentials you set during the Jenkins initialization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zVKvfnxA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2A-g9yFfi0ZV9IEjDr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zVKvfnxA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2A-g9yFfi0ZV9IEjDr.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After logging in, you should see the &lt;strong&gt;job&lt;/strong&gt; you created and be able to view its console output.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--E_L9QNdv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2APSP8DwF9YXA5vAGN.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--E_L9QNdv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/0%2APSP8DwF9YXA5vAGN.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  In summary
&lt;/h3&gt;

&lt;p&gt;I hope you can now see why Docker is the best way to start learning, or at least playing around with, Jenkins. You can easily run a Jenkins instance as a Docker container and persist your Jenkins server's state using Docker volumes. In case you need to restart or recover your Jenkins instance, all of the state is stored inside the Docker volume. If you want to read more about Jenkins, Docker and DevOps, check out our other blog posts &lt;a href="https://rangle.io/blog/tag/devops/"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>jenkins</category>
      <category>docker</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Docker for Frontend Devs: Custom Docker Images for Development</title>
      <dc:creator>Rangle.io </dc:creator>
      <pubDate>Fri, 06 Dec 2019 21:16:04 +0000</pubDate>
      <link>https://dev.to/rangle/docker-for-frontend-devs-custom-docker-images-for-development-1afc</link>
      <guid>https://dev.to/rangle/docker-for-frontend-devs-custom-docker-images-for-development-1afc</guid>
      <description>&lt;p&gt;By: &lt;a href="https://medium.com/@martindevnow"&gt;Benjamin Martin&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's take a moment to consider what is important for local development. For me, I want to make sure all my developers are using the same dependencies, and I don't want to worry about what versions they have installed. No more "but it works on my machine" excuses. At the same time, I want to make sure we retain the conveniences of HMR (Hot Module Replacement) so that developers don't need to constantly refresh the application to see their changes reflected. We don't want to lose fast feedback.&lt;/p&gt;

&lt;p&gt;In this article, we'll look at how we can set up Docker for a boilerplate VueJS app with custom &lt;code&gt;Dockerfile&lt;/code&gt;s, from which our images and containers will be built, and the efficiencies we gain from them.&lt;/p&gt;

&lt;p&gt;In case you missed the first part in this series, &lt;a href="https://blog.rangle.io/learning-docker-command-line-interface/"&gt;check here to learn more about the command line interface&lt;/a&gt; that Docker ships with. We need to use the commands from that article in this section. If you are already familiar with Docker CLI, please continue to follow along.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisite: Create our project
&lt;/h2&gt;

&lt;p&gt;This is of course a Docker article, so please ensure you have Docker installed. You can follow &lt;a href="https://docs.docker.com/install/"&gt;the official install instructions for Docker here&lt;/a&gt;. Since I'm using Vue, I've used the VueCLI to spin up a quick workspace with &lt;code&gt;vue create docker-demo&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The configuration I selected (seen below) will be relevant for E2E testing and unit testing, which will become part of our CI/CD pipeline.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1_9l7UV8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://i.imgur.com/nhLaIrK.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1_9l7UV8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://i.imgur.com/nhLaIrK.gif" alt="Vue CLI Create Project" title="Vue CLI Create Demo Project"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once everything is installed, &lt;code&gt;cd&lt;/code&gt; into our new project folder, open an IDE and let's dig in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Custom Docker Image for Development
&lt;/h2&gt;

&lt;p&gt;If you've played with Docker but not built your own image, you probably know we specify an image when we execute our &lt;code&gt;docker run&lt;/code&gt; command. Those images are pulled from Docker Hub or some other remote repository (if that image is not found locally). In our case though, we want to build a custom image.&lt;/p&gt;

&lt;p&gt;In the root of our project, create a file named &lt;code&gt;Dockerfile.dev&lt;/code&gt;. This will be our development image. Copy the following code into that file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Base Image
FROM node:9.11.1

ENV NODE_ENV=development
ENV PORT=8080

WORKDIR /usr/src/app
COPY package*.json /usr/src/app/
RUN cd /usr/src/app &amp;amp;&amp;amp; CI=true npm install

EXPOSE 8080
CMD ["npm", "run", "serve"]

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Ok... but what does all this do? Let's dig into it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dockerfile Commands and Keywords
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;FROM&lt;/code&gt; specifies the preexisting image on which to build our custom image. Since we are running a node application, we've chosen one of their official Docker images.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;FROM node:9.11.1&lt;/code&gt; means our application image will start from the Node v9.11.1 image&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;code&gt;ENV&lt;/code&gt; sets environment variables&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;ENV PORT=8080&lt;/code&gt; sets the environment variable &lt;code&gt;PORT&lt;/code&gt; for later use&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ENV NODE_ENV=development&lt;/code&gt; sets the environment variable &lt;code&gt;NODE_ENV&lt;/code&gt; for use within our app&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;code&gt;WORKDIR&lt;/code&gt; sets the working directory within the container&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;WORKDIR /usr/src/app&lt;/code&gt; defines &lt;code&gt;/usr/src/app/&lt;/code&gt; as our working directory within the docker image&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;code&gt;COPY&lt;/code&gt; copies new files, directories or remote files into the container/image&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;COPY package*.json /usr/src/app/&lt;/code&gt; copies our &lt;code&gt;package.json&lt;/code&gt; and &lt;code&gt;package-lock.json&lt;/code&gt; into our working directory&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;code&gt;RUN&lt;/code&gt; executes a command in a new layer on top of the current image and commits it. When you run the build, you will see a hash representing each layer of our final image&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;RUN cd /usr/src/app/ &amp;amp;&amp;amp; CI=true npm install&lt;/code&gt; changes the working directory to where the &lt;code&gt;package.json&lt;/code&gt; is and installs all our dependencies to this folder within the image. This makes it so that the image holds frozen copies of the dependencies. Our Docker image, not our host machine, is responsible for our dependencies&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;code&gt;EXPOSE&lt;/code&gt; documents the port the container listens on; on its own it does not publish the port, so reaching it from the host still requires the &lt;code&gt;-p&lt;/code&gt;/&lt;code&gt;--publish&lt;/code&gt; flag at run time&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;EXPOSE 8080&lt;/code&gt; matches the port on which our app is running inside the container, and documents the port we intend to publish so we can access our app from our host machine&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;code&gt;CMD&lt;/code&gt; provides the default initialization command to run when our container is created, like a startup script&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;CMD ["npm", "run", "serve"]&lt;/code&gt; sets this as our default command when we start our container. This is not run when building the image, it only defines what command should be run when the container starts.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I know you're anxious to get this running, but hold your horses. Let's look &lt;em&gt;closer&lt;/em&gt; at our &lt;code&gt;Dockerfile.dev&lt;/code&gt; and understand &lt;em&gt;why&lt;/em&gt; we did what we did.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dockerfile Structure Recommendations
&lt;/h3&gt;

&lt;p&gt;So, &lt;em&gt;where's my app?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Right. We didn't use the &lt;code&gt;COPY&lt;/code&gt; command to copy our full workspace. Had we done so, we'd need to run &lt;code&gt;docker build&lt;/code&gt; and &lt;code&gt;docker run&lt;/code&gt; for every code change. We don't want to do this over and over during development. We can be more efficient.&lt;/p&gt;

&lt;h4&gt;
  
  
  Caching Dependencies
&lt;/h4&gt;

&lt;p&gt;We are taking advantage of how Docker layers the images. As Docker builds our image, you'll see a hash for each layer as it is completed. What's more is that Docker also caches these layers. If Docker can see that nothing has changed on that layer from a previous build (and previous layers are also identical) then Docker will use a cached version of that layer, saving you and your developers precious time! When a layer changes, any cached layers on top of it are invalidated and will be rebuilt.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Therefore, if there is no change to our &lt;code&gt;package.json&lt;/code&gt; or the &lt;code&gt;package-lock.json&lt;/code&gt; then our entire image is cacheable and doesn't need to be rebuilt!&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Priority
&lt;/h4&gt;

&lt;p&gt;This is also why you want to place the &lt;code&gt;Dockerfile&lt;/code&gt; commands that change least frequently near the top of the file. As soon as one layer of the cache is invalidated, for example, if you change &lt;code&gt;ENV PORT=8080&lt;/code&gt; to another port, that cached layer and every cached layer after it is invalidated, and Docker will have to rebuild those layers.&lt;/p&gt;
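&lt;p&gt;To illustrate that ordering, compare two hypothetical ways of writing the install steps for a production-style image (our &lt;code&gt;Dockerfile.dev&lt;/code&gt; deliberately avoids copying the source, so this is a sketch, not part of our setup). Both produce the same result, but only the second lets Docker reuse the cached &lt;code&gt;npm install&lt;/code&gt; layer when only application source changes:&lt;/p&gt;

```dockerfile
# Cache-unfriendly ordering: every source change invalidates the
# layer that runs npm install, so dependencies reinstall on each build.
#   COPY . /usr/src/app/
#   RUN cd /usr/src/app && CI=true npm install

# Cache-friendly ordering: the install layer depends only on the
# package manifests, so it is rebuilt only when they change.
COPY package*.json /usr/src/app/
RUN cd /usr/src/app && CI=true npm install
COPY . /usr/src/app/
```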

&lt;h3&gt;
  
  
  Building the Custom Docker Image
&lt;/h3&gt;

&lt;p&gt;Now, build the image with this command: &lt;code&gt;docker build --tag docker_demo:latest --file Dockerfile.dev .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eT9q_bFl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://i.imgur.com/CNXqCg5.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eT9q_bFl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://i.imgur.com/CNXqCg5.gif" alt="Docker Build from custom Dockerfile" title="Using Docker to Build a Custom Dockerfile"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Using &lt;code&gt;--tag&lt;/code&gt; in the &lt;code&gt;docker build&lt;/code&gt; command allows us to easily reference this image from our &lt;code&gt;docker run&lt;/code&gt; command&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;.&lt;/code&gt; at the end of the &lt;code&gt;docker build&lt;/code&gt; command references the context where our custom &lt;code&gt;Dockerfile&lt;/code&gt; can be found. So, this command should be run from the root of our project directory&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can run it with &lt;code&gt;docker run docker_demo:latest&lt;/code&gt;, but unfortunately, we have more work to do to get it working quickly and easily from the command line.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running our Container: Quality of Life Improvements
&lt;/h3&gt;

&lt;p&gt;We're going to be executing our &lt;code&gt;docker run&lt;/code&gt; command daily, if not more frequently. However, if we simply execute the &lt;code&gt;docker run docker_demo:latest&lt;/code&gt; command, Docker will create a &lt;em&gt;new&lt;/em&gt; container each time. Docker won't stop the old container unless you do so explicitly. This is very useful in many cases, but since we've hardcoded the host port, we'll run into port collisions on our host machine.&lt;/p&gt;

&lt;p&gt;To easily stop and remove our old containers, we should name them so we can refer to them later. Additionally, I want the running container to be removed if I cancel the running process.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm -it\
--name docker_demo_container\
docker_demo:latest

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gf7Yqd7J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://i.imgur.com/1oHnXTH.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gf7Yqd7J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://i.imgur.com/1oHnXTH.gif" alt="Running the Docker Image we Built" title="Running a Docker container of a Custom Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  What was added?
&lt;/h4&gt;

&lt;p&gt;We added a &lt;code&gt;--name&lt;/code&gt; flag to our run command. This allows us to reference the container without looking up the hash. Now, we can easily stop our container by name.&lt;/p&gt;

&lt;p&gt;We also added the &lt;code&gt;--rm&lt;/code&gt; and &lt;code&gt;-it&lt;/code&gt; flags to our &lt;code&gt;docker run&lt;/code&gt; command. The &lt;code&gt;--rm&lt;/code&gt; flag tells Docker to remove the container if and when it is stopped. The &lt;code&gt;-it&lt;/code&gt; flag keeps the terminal live and interactive once the container is started.&lt;/p&gt;

&lt;h4&gt;
  
  
  Mounting Host Directories
&lt;/h4&gt;

&lt;p&gt;Let's go back to our &lt;code&gt;docker run&lt;/code&gt; command and let's find a way to mount our workspace directory to a folder within our container. We can do this by adding a mount point to our container in the &lt;code&gt;docker run&lt;/code&gt; command. This will tell Docker that we want to create an active link between our host machine's folder (&lt;code&gt;src&lt;/code&gt;) and the Docker container folder (&lt;code&gt;dst&lt;/code&gt;). Our new command should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm -it\
--name docker_demo_container\
--mount type=bind,src=`pwd`,dst=/usr/src/app\
docker_demo:latest

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;But this could conflict with our host machine's &lt;code&gt;node_modules&lt;/code&gt; folder since we're mounting our entire &lt;code&gt;pwd&lt;/code&gt; to our app's location in the image (in case one of our developers accidentally runs &lt;code&gt;npm install&lt;/code&gt; on their host machine). So, let's add a volume to ensure we preserve the &lt;code&gt;node_modules&lt;/code&gt; that exists within our container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm -it\
--name docker_demo_container\
--mount type=bind,src=`pwd`,dst=/usr/src/app\
--volume /usr/src/app/node_modules\
docker_demo:latest

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  Accessing Ports Inside the Container
&lt;/h4&gt;

&lt;p&gt;If you tried the above command (and you're running a VueJS app), you should see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt; App running at:
  - Local:   http://localhost:8080/

  It seems you are running Vue CLI inside a container.
  Access the dev server via http://localhost:&amp;lt;your container's external mapped port&amp;gt;/

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Docker is giving you a hint that we need to expose a port from our container and publish it on our host machine. We do this by adding the &lt;code&gt;--publish&lt;/code&gt; flag to our run command. (We already have the &lt;code&gt;EXPOSE&lt;/code&gt; command in our &lt;code&gt;Dockerfile.dev&lt;/code&gt;)&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;--publish &amp;lt;host-port&amp;gt;:&amp;lt;container-port&amp;gt;&lt;/code&gt; tells Docker that traffic to the host machine (i.e. via localhost) on port &lt;code&gt;&amp;lt;host-port&amp;gt;&lt;/code&gt; should be directed towards the container at the &lt;code&gt;&amp;lt;container-port&amp;gt;&lt;/code&gt; that you define.&lt;/p&gt;
&lt;/blockquote&gt;
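&lt;p&gt;As a quick sanity check (assuming the container is running under the name used above), you can ask Docker which host port a given container port is published on:&lt;/p&gt;

```shell
# Show the host port that container port 8080 is mapped to
docker port docker_demo_container 8080
```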

&lt;h3&gt;
  
  
  &lt;code&gt;docker run&lt;/code&gt; in One Command
&lt;/h3&gt;

&lt;p&gt;Let's take a look at our final run command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm -it\
--name docker_demo_container\
--publish 4200:8080\
--mount type=bind,src=`pwd`,dst=/usr/src/app\
--volume /usr/src/app/node_modules\
docker_demo:latest

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HkeDxQCg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://i.imgur.com/B4ROV1R.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HkeDxQCg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://i.imgur.com/B4ROV1R.gif" alt="Running the Docker Image we Built Success" title="Successfully Running a Docker container of a Custom Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Running the above command will finally allow us to access our app via &lt;a href="http://localhost:4200/"&gt;http://localhost:4200&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing it out
&lt;/h3&gt;

&lt;p&gt;Let's build a fresh copy and run it. If you try changing one of the file templates, you'll see everything is still functioning as it should.&lt;/p&gt;

&lt;p&gt;But speaking of testing, what about unit tests? Well, once our container is running, we can open a new terminal and &lt;code&gt;docker exec&lt;/code&gt; a command to run in our container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it docker_demo_container npm run test:unit

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--p7zXf0vy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://i.imgur.com/2l7BDya.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--p7zXf0vy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://i.imgur.com/2l7BDya.gif" alt="Running Unit Tests through Docker" title="Running Unit Tests in Docker Container"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above command will create an interactive terminal connection with our container &lt;code&gt;docker_demo_container&lt;/code&gt; and execute the command &lt;code&gt;npm run test:unit&lt;/code&gt; in it, allowing us to run unit tests for our app.&lt;/p&gt;
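&lt;p&gt;The same approach works for any one-off command; for example, you can open an interactive shell inside the running container to poke around:&lt;/p&gt;

```shell
# Open an interactive shell inside the running container
# (assumes docker_demo_container from the commands above).
docker exec -it docker_demo_container sh
```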

&lt;h2&gt;
  
  
  In Closing
&lt;/h2&gt;

&lt;p&gt;We now have a way to build our development images and run them locally while maintaining the conveniences of Hot Module Replacement to keep our development workflow efficient. Our developers don't need to worry about dependencies on their host machine colliding with those in the image. No more "but it works on my machine" excuses. And, we also have a command we can easily run to execute our unit tests.&lt;/p&gt;

&lt;p&gt;If you find anything I missed or want to chat more about Docker, please reach out to me!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>javascript</category>
      <category>vue</category>
    </item>
    <item>
      <title>Connecting Stripe events to the AWS EventBridge</title>
      <dc:creator>Rangle.io </dc:creator>
      <pubDate>Wed, 20 Nov 2019 15:28:39 +0000</pubDate>
      <link>https://dev.to/rangle/connecting-stripe-events-to-the-aws-eventbridge-3o58</link>
      <guid>https://dev.to/rangle/connecting-stripe-events-to-the-aws-eventbridge-3o58</guid>
      <description>&lt;p&gt;&lt;a href="https://stripe.com"&gt;Stripe&lt;/a&gt; is a great platform for running an online business, especially on account of the developer-centric API that makes it easy to collect payments,  set up subscriptions and more.&lt;/p&gt;

&lt;p&gt;Many of these APIs result in Stripe generating one or more events that you can subscribe to via a webhook. These events could represent a successful payment, or perhaps a new subscriber to your SaaS platform. In total, there are well over 100 different types of events that you can subscribe to, so how can we build a loosely-coupled cloud-native event-driven architecture to handle these events? How can we make this serverless?&lt;/p&gt;

&lt;p&gt;It's easy: we'll use AWS EventBridge.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why AWS EventBridge?
&lt;/h3&gt;

&lt;p&gt;For those of you just learning about AWS EventBridge: released in July 2019, it is a serverless event bus that supports event-driven architectures. An event bus is a central location where business events can be published and routing rules can be configured to send those events to downstream functions or services that operate on them. By using an event bus as an intermediate layer, services are decoupled from needing direct knowledge of how to communicate with each other, which allows development teams to build and operate independently.&lt;/p&gt;

&lt;p&gt;One reason, among many, why AWS EventBridge should be considered is the cost. With AWS EventBridge, you have a fully managed cloud-native event bus, requiring no effort to set up and no servers to pay for - and it costs a very reasonable $1 per million events. Latency is typically about half a second. By going with a fully managed solution, you're saving your team from spending time on 'undifferentiated heavy lifting' associated with infrastructure, giving them more time to focus on delivering customer value.&lt;/p&gt;

&lt;h3&gt;
  
  
  The solution
&lt;/h3&gt;

&lt;p&gt;We'll show here how we can set up AWS EventBridge so that it can begin ingesting events from Stripe. As part of that ingestion, we'll check that the event is truly from Stripe by verifying the signature on the event. Let's get started. &lt;/p&gt;

&lt;p&gt;Using an open-source project we created called &lt;a href="https://github.com/rangle/stripe-eventbridge"&gt;stripe-eventbridge&lt;/a&gt;, you can quickly set up the plumbing to connect Stripe events to AWS EventBridge via a Lambda function that validates the authenticity of incoming events (so that downstream functions won't have to). This allows the events published on the event bus to be routed to various downstream functions through simple routing rules, which we show at the end of the post.&lt;/p&gt;

&lt;p&gt;With this deployed, you can set up many downstream Lambda functions to handle the wide array of events generated by Stripe and configure the routing to these event handlers with AWS EventBridge based on the Stripe event type. All of this without having to worry about event signatures, since events are only placed on the EventBridge if they pass validation.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/rangle/stripe-eventbridge"&gt;stripe-eventbridge&lt;/a&gt; project creates an endpoint that you configure in the Stripe Dashboard as the destination for the webhook events. When an event arrives, the endpoint will invoke a Lambda that we provide which has these responsibilities:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;to validate that the event has a valid signature (i.e. that it was generated by Stripe)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;to place validated events (only) on the AWS EventBridge&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;to notify an SNS topic if an event fails to validate (e.g. due to an invalid signature)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Solution overview&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rangle/stripe-eventbridge"&gt;stripe-eventbridge&lt;/a&gt; uses the &lt;a href="https://serverless.com/"&gt;Serverless Framework&lt;/a&gt; to generate the AWS infrastructure described above. The Serverless Framework allows us to automatically create and deploy the CloudFormation stacks needed to realize this configuration. The stacks can be deployed via &lt;code&gt;sls deploy&lt;/code&gt; and rolled back via &lt;code&gt;sls remove&lt;/code&gt; using the command-line client.&lt;/p&gt;

&lt;h3&gt;
  
  
  Serverless components in stripe-eventbridge
&lt;/h3&gt;

&lt;p&gt;Let's walk through what &lt;a href="https://github.com/rangle/stripe-eventbridge"&gt;stripe-eventbridge&lt;/a&gt; creates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A Lambda - this function validates incoming events, and if they are from Stripe (i.e. signed correctly) then they are relayed to the AWS EventBridge where downstream services can read from them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An API Gateway endpoint - this is the webhook endpoint that the Lambda above is listening to for new events.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An AWS SecretsManager secret - this secret is created with an empty value, and you populate it with the signing key assigned to your webhook endpoint in the Stripe Dashboard.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An SNS topic - if the webhook fails validation or processing, then a notification is sent to a topic called &lt;code&gt;stripe-webhook-event-failed-to-validate&lt;/code&gt;. You can configure subscriptions to this topic, so that you are emailed (for example) whenever an event fails processing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you'd like to get started, you can visit the GitHub project here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rangle/stripe-eventbridge"&gt;https://github.com/rangle/stripe-eventbridge&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Routing by event type
&lt;/h3&gt;

&lt;p&gt;Now that you have events being published to the EventBridge, you can configure routing based on the event type using rule patterns like the one below. The &lt;code&gt;detail-type&lt;/code&gt; values are exactly the types of the underlying Stripe events - you can see the full list here: &lt;a href="https://stripe.com/docs/api/events/types"&gt;https://stripe.com/docs/api/events/types&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{

  "detail-type": [

    "payment_intent.succeeded"

  ],

  "source": [

    "Stripe"

  ]

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
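&lt;p&gt;For example, you could create the same rule from the AWS CLI (the rule name here is hypothetical; adjust it to your setup and add &lt;code&gt;--event-bus-name&lt;/code&gt; if you use a custom bus):&lt;/p&gt;

```shell
# Create an EventBridge rule matching successful Stripe payments
# (rule name is illustrative; assumes AWS credentials are configured).
aws events put-rule \
  --name stripe-payment-intent-succeeded \
  --event-pattern '{"source":["Stripe"],"detail-type":["payment_intent.succeeded"]}'
```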



&lt;p&gt;Alternatively, if you are creating your event handlers with the Serverless Framework, you can declare the same routing in your infrastructure code (a really good practice) in your &lt;code&gt;serverless.yml&lt;/code&gt; as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
    handler: handler.myLambdaFunction

    events:

      - eventBridge:

          pattern:

            source:

              - Stripe

            detail-type:

              - payment_intent.succeeded
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
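&lt;p&gt;To exercise the whole pipeline end to end, you can have Stripe send a test event to your configured webhook endpoint with the Stripe CLI (assuming you have it installed and authenticated):&lt;/p&gt;

```shell
# Fire a test payment_intent.succeeded event at your
# configured webhook endpoints via the Stripe CLI.
stripe trigger payment_intent.succeeded
```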



&lt;h3&gt;
  
  
  And that's all!
&lt;/h3&gt;

&lt;p&gt;I hope you have found this valuable. Stay tuned, we'll be exploring more with Serverless in upcoming posts.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>eventbridge</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Running Jenkins and Persisting state locally using Docker</title>
      <dc:creator>Rangle.io </dc:creator>
      <pubDate>Fri, 15 Nov 2019 20:42:15 +0000</pubDate>
      <link>https://dev.to/rangle/running-jenkins-and-persisting-state-locally-using-docker-49d7</link>
      <guid>https://dev.to/rangle/running-jenkins-and-persisting-state-locally-using-docker-49d7</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2Fjenkins_16_9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2Fjenkins_16_9.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Jenkins is one of the most popular Continuous Integration and Delivery Servers today, so it's only natural that you're probably interested in learning more about it. When starting out, you'll need to first run it on your local machine. However, the problem with that is the Jenkins configuration files will then live directly on your machine. A better solution is to run it as a Docker container, here are some of the reasons why:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  All of your Jenkins &lt;strong&gt;configuration files&lt;/strong&gt; live inside the container rather than the host machine. Knowing that all the files you need are inside the container, you can eliminate the issue of accidentally mixing your files with Jenkins configuration files.&lt;/li&gt;
&lt;li&gt;  Docker instances are easier to manage if you are interested in running Jenkins on &lt;em&gt;multiple platforms&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;  You can easily create and destroy the Jenkins server and remove all the Jenkins data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another benefit of using containers is persisting the state of your Jenkins server using Docker volumes. Why do this?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  You get to keep all your projects and configurations even after restarting your computer (local machine)&lt;/li&gt;
&lt;li&gt;  You don't need to run the whole Jenkins setup again&lt;/li&gt;
&lt;li&gt;  You can remove your container instance and still be able to recover the state of your Jenkins server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With that in mind, I'll show you how you can start configuring Jenkins and persisting state on your local machine. Let's get started!&lt;/p&gt;

&lt;h1&gt;
  
  
  Jenkins setup
&lt;/h1&gt;

&lt;p&gt;Before we get started, you'll need to install Docker on your machine. If you're not sure how, you can refer to the &lt;a href="https://docs.docker.com/install/" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After installing Docker, download the latest stable Jenkins image by running:&lt;br&gt;
&lt;code&gt;docker pull jenkins/jenkins:lts&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You should see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.10.10-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.10.10-PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Persisting Jenkins Data
&lt;/h2&gt;

&lt;p&gt;You can create a volume by running the command below:&lt;br&gt;
&lt;code&gt;docker volume create [YOUR VOLUME]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Volumes are used to make sure that you don't lose your Jenkins data. If you are using the &lt;code&gt;-v&lt;/code&gt; flag on container creation (&lt;code&gt;docker container run&lt;/code&gt;), feel free to skip this step since Docker will automatically create the volume for you.&lt;/p&gt;

&lt;p&gt;Run the container by attaching the volume and assigning the target port. In this example, we'll also run it in detached mode. Here is the command to run your Docker container:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker container run -d \
  -p [YOUR PORT]:8080 \
  -v [YOUR VOLUME]:/var/jenkins_home \
  --name jenkins-local \
  jenkins/jenkins:lts
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If you were wondering what the arguments stand for, here is what each means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;-d&lt;/code&gt;: detached mode&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;-v&lt;/code&gt;: attach volume&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;-p&lt;/code&gt;: assign port target&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;--name&lt;/code&gt;: name of the container&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And now, we're ready to take a look at an example of how you could run this command:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker container run -d -p 8082:8080 \
  -v jenkinsvol1:/var/jenkins_home \
  --name jenkins-local \
  jenkins/jenkins:lts
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;After running the command, Docker will print the new container's ID, which we'll use in later steps.&lt;/p&gt;

&lt;p&gt;In the command we ran, &lt;code&gt;/var/jenkins_home&lt;/code&gt; is the path where the Jenkins state is stored inside our container instance. The most important argument for data persistence in this example is &lt;code&gt;-v [YOUR VOLUME]:/var/jenkins_home&lt;/code&gt;. This argument is what tells Docker to link the volume to that directory inside the container. To learn more about Docker volumes, you can check out the &lt;a href="https://docs.docker.com/storage/volumes/" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;. If you're not familiar with Docker, you can start with this helpful Docker blog series, &lt;a href="https://rangle.io/blog/learning-docker-command-line-interface/" rel="noopener noreferrer"&gt;Learning Docker - The Command Line Interface&lt;/a&gt;. You'll notice that I am using port &lt;code&gt;8082&lt;/code&gt; instead of the default &lt;code&gt;8080&lt;/code&gt;. The reason, other than demonstrating that you can use other ports, is that port &lt;code&gt;8080&lt;/code&gt; is used by some web frameworks.&lt;/p&gt;
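&lt;p&gt;You can also verify that the volume exists and see where Docker stores it on disk (a quick check; the volume name matches the example above):&lt;/p&gt;

```shell
# List volumes, then show details (mountpoint, driver) for ours
docker volume ls
docker volume inspect jenkinsvol1
```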

&lt;p&gt;If you run &lt;code&gt;docker ps&lt;/code&gt;, you should see your Docker container running:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.15.00-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.15.00-PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After confirming that your container is running, go to &lt;code&gt;localhost:[YOUR PORT]&lt;/code&gt; (&lt;code&gt;localhost:8082&lt;/code&gt; on my example) on your browser and you should see this page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.15.58-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.15.58-PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a part of the Jenkins setup, we need to view the password inside the container instance. In order to do this, we need to use the &lt;code&gt;CONTAINER ID&lt;/code&gt; (or the name) and run &lt;code&gt;docker exec&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here is the full command:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker container exec \
  [CONTAINER ID or NAME] \
  sh -c "cat /var/jenkins_home/secrets/initialAdminPassword"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;After running the command, you should see the initial admin password. Copy it and paste it into the webpage to unlock Jenkins. After unlocking, click on &lt;strong&gt;Install suggested plugins&lt;/strong&gt; on the &lt;strong&gt;Customize Jenkins&lt;/strong&gt; page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.16.38-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.16.38-PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wait until the installation is complete and then you can proceed in creating your first admin user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.17.14-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.17.14-PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating the admin user, set up the instance configuration. Since you are only using Jenkins locally, leave the URL as your &lt;code&gt;localhost&lt;/code&gt; URL. Click on &lt;strong&gt;Save and Finish&lt;/strong&gt; to start using Jenkins.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.17.49-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.17.49-PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Confirming Jenkins state is persisted
&lt;/h2&gt;

&lt;p&gt;Once you get to the Jenkins home page, create a new job.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.18.27-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.18.27-PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Name the job &lt;strong&gt;Test&lt;/strong&gt; and set it to be a &lt;strong&gt;Freestyle project&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.18.55-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.18.55-PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Leave everything as default and add a new &lt;strong&gt;Execute shell&lt;/strong&gt; build step under &lt;strong&gt;Build&lt;/strong&gt;. Add this as the command:&lt;br&gt;
&lt;code&gt;echo "Working"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Save the job and click on &lt;strong&gt;Build Now&lt;/strong&gt; to start running the job.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.19.45-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.19.45-PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the &lt;strong&gt;Build Status (blue ball)&lt;/strong&gt; under &lt;strong&gt;Build History (left sidebar)&lt;/strong&gt; to view the console output. You should see that our command ran with no problems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.20.26-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.20.26-PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Recovering Jenkins
&lt;/h2&gt;

&lt;p&gt;After confirming that the job runs, we want to make sure that we can recover our &lt;strong&gt;Jenkins&lt;/strong&gt; configuration if needed. In Jenkins, all of the configuration is stored in &lt;code&gt;/var/jenkins_home/&lt;/code&gt; by default. Remember that we are using a Docker volume to store information about our &lt;strong&gt;Jenkins&lt;/strong&gt; instance.&lt;/p&gt;

&lt;p&gt;In order to check if the persistence works correctly, let's destroy our current docker container and see if we can log in as an admin and view our job build history.&lt;/p&gt;

&lt;p&gt;First, run &lt;code&gt;docker container kill [CONTAINER ID]&lt;/code&gt; to stop the instance. After doing so, run &lt;code&gt;docker container rm [CONTAINER ID]&lt;/code&gt; to completely remove the container instance.&lt;/p&gt;
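&lt;p&gt;Note that removing the container does not remove the volume; you can confirm that the Jenkins data is still there:&lt;/p&gt;

```shell
# The container is gone, but the volume holding our Jenkins state remains
docker volume ls --filter name=jenkinsvol1
```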

&lt;p&gt;Visit &lt;code&gt;localhost:[YOUR PORT]&lt;/code&gt; (&lt;code&gt;localhost:8082&lt;/code&gt; on my example) to confirm that the Jenkins instance is not running anymore. You should see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.20.57-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.20.57-PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the same command we used to create the container instance in order to recover the Jenkins instance.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker container run -d -p 8082:8080 \
  -v jenkinsvol1:/var/jenkins_home \
  --name jenkins-local \
  jenkins/jenkins:lts
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You should get a new &lt;code&gt;CONTAINER ID&lt;/code&gt; after running the command. Visit  &lt;code&gt;localhost:[YOUR PORT]&lt;/code&gt; (&lt;code&gt;localhost:8082&lt;/code&gt; on my example). You should see the login page. Login with the admin credentials you set during Jenkins initialization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.21.33-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.21.33-PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After logging in, you should see the &lt;strong&gt;job&lt;/strong&gt; you created and be able to view its console output.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.22.13-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frangleio.ghost.io%2Fcontent%2Fimages%2F2019%2F11%2FScreen-Shot-2019-11-14-at-1.22.13-PM.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  In summary
&lt;/h2&gt;

&lt;p&gt;I hope you can now see why Docker is the best way to start learning or at least playing around with Jenkins. You can easily run a Jenkins instance as a Docker container and persist your Jenkins server state using Docker Volumes. In case you need to restart or recover your Jenkins instance, all of the state is stored inside the Docker Volume. If you want to read more about Jenkins, Docker and DevOps, check out our other blogs, &lt;a href="https://rangle.io/blog/tag/devops/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>ci</category>
      <category>webdev</category>
    </item>
    <item>
      <title>CD with Docker and Feature Branch Testing
</title>
      <dc:creator>Rangle.io </dc:creator>
      <pubDate>Fri, 25 Oct 2019 17:28:06 +0000</pubDate>
      <link>https://dev.to/rangle/cd-with-docker-and-feature-branch-testing-110h</link>
      <guid>https://dev.to/rangle/cd-with-docker-and-feature-branch-testing-110h</guid>
      <description>&lt;p&gt;&lt;a href="https://medium.com/@martindevnow" rel="noopener noreferrer"&gt;By: Ben Martin&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you've been following along with my Docker series (you can find &lt;a href="https://rangle.io/blog/docker-full-circle-continuous-integration-ci-with-cypress" rel="noopener noreferrer"&gt;my latest article about Continuous Integration (CI) here&lt;/a&gt;) then you must be pretty happy to have your CI pipeline solving &lt;strong&gt;all&lt;/strong&gt; the world's problems. Your developers are pretty content, but we know there's more we could do. And, I mean, isn't developer happiness the &lt;em&gt;real&lt;/em&gt; reason you're reading a DevOps article?&lt;/p&gt;

&lt;p&gt;In this article, I'll outline how you can take the CI pipeline one step further to address the Continuous Deployment (CD) aspect of CI/CD. More specifically, we'll address how to configure &lt;strong&gt;feature-branch-specific&lt;/strong&gt; deployments of our app so that developers can quickly and easily test their features manually, even on mobile devices, before merging their feature branch PR into the develop branch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisite Knowledge
&lt;/h2&gt;

&lt;p&gt;If you haven't been following along with &lt;a href="https://rangle.io/blog/author/ben-martin" rel="noopener noreferrer"&gt;my previous articles&lt;/a&gt;, to get the most out of this article, you should first have a Dockerized application. If you also have a CircleCI pipeline configured, that's a huge head start! We will be building off that. In our hypothetical situation, we're using a free-tier AWS EC2 instance. If you're new to AWS, this could easily be replaced by other technologies; you only need a server to deploy to. In the past, I used a $5 Digital Ocean droplet to accomplish the same effect, but it's up to you to determine where you want to deploy.&lt;/p&gt;

&lt;p&gt;Here's what you'll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS Account with an EC2 instance configured to expose ports 22, 80 and 443 and an SSH key for CircleCI to use&lt;/li&gt;
&lt;li&gt;Source code of Dockerized App hosted on GitHub&lt;/li&gt;
&lt;li&gt;Docker Hub (or some other Docker image repository you can publish to)&lt;/li&gt;
&lt;li&gt;CircleCI connected to your app's GitHub repo and your Docker Hub repo&lt;/li&gt;
&lt;li&gt;Domain name pointing to your server&lt;/li&gt;
&lt;li&gt;Wildcard TLS Certificate for the above domain name&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: Details for most of the above, aside from anything deployment related, can be found in &lt;a href="https://rangle.io/blog/docker-full-circle-continuous-integration-ci-with-cypress" rel="noopener noreferrer"&gt;my previous article.&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Goal: Fast Feedback
&lt;/h2&gt;

&lt;p&gt;As a huge proponent of Agile development methodologies, fast feedback is critical for success, and the higher fidelity the feedback, the better! Unit tests and end-to-end tests all have their place. But, having human eyes review UI changes running in a production-like environment in the browser helps us identify bugs before they become an issue that could impact other teams.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why should I bother?
&lt;/h3&gt;

&lt;p&gt;In my case, I was recently working on a project developing an Angular application to be consumed on a mobile device. The app required the device's camera and accelerometer for the features I was building, so I wanted to be able to quickly test those features on my actual phone while I was still working out the implementation details.&lt;/p&gt;

&lt;p&gt;Sounds simple enough: I just needed to deploy my code somewhere I could access it from my device. But the process needed to be automated.&lt;/p&gt;

&lt;p&gt;I decided to leverage my existing CI pipeline and extend it to do Continuous Deployments (CD). My CI pipeline was already configured to run my unit tests and end-to-end tests, but I wanted to also deploy my code to a server that I could access from my phone for manual testing.&lt;/p&gt;

&lt;p&gt;Here are some key considerations for extending this CI/CD pipeline. &lt;/p&gt;

&lt;h2&gt;
  
  
  Feature Branch Environments
&lt;/h2&gt;

&lt;p&gt;If you've worked on an enterprise webapp, you may be familiar with terms like &lt;code&gt;dev&lt;/code&gt;, &lt;code&gt;int(egration)&lt;/code&gt; and &lt;code&gt;staging&lt;/code&gt;. These are environments that each host a different state of your application, whether that's the branch actively under development, your next release or the last release that you're still supporting with bug fixes.&lt;/p&gt;

&lt;p&gt;What often gets lost in the mix is our &lt;code&gt;feature&lt;/code&gt; branches. When our developers pick up a ticket, they may want to actively test their code on a suite of devices to ensure cross device and cross browser compatibility. Plus, having this requirement for our developers acts as an additional safeguard for preventing bugs from getting into the &lt;code&gt;develop&lt;/code&gt; branch of our code.&lt;/p&gt;

&lt;p&gt;If you have ever asked your boss for &lt;em&gt;branch-specific environments&lt;/em&gt;, you may well have been scoffed at with a claim that &lt;em&gt;"there's no room in the budget"&lt;/em&gt;. Well, what if I told you that through the power of Docker and the free tier on AWS (or an existing server you may have running), you can have branch-specific environments beyond &lt;code&gt;dev&lt;/code&gt;, &lt;code&gt;int&lt;/code&gt; and &lt;code&gt;staging&lt;/code&gt;?&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker To The Rescue!
&lt;/h2&gt;

&lt;p&gt;Docker is the perfect tool for solving this issue, for several reasons. To see why, consider what would be difficult about running multiple copies of our app in the same environment. Even a static site raises plenty of questions. When we deploy the app, what folder do we deploy it to? How do we handle port collisions? Asset loading with relative URLs? How do we notify nginx of the new route/site? Doing all of this directly on the host machine involves a lot of configuration and mess that we can delegate to Docker instead.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FtUPdXRo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FtUPdXRo.png" alt="Docker Setup on AWS with Nginx"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the end, we'll have a long-lived nginx reverse proxy running in a container on our host machine. This container listens to Docker for other containers we run. Using environment variables in the &lt;code&gt;docker run&lt;/code&gt; command, we can tell this nginx proxy to update its configuration to point dynamic subdomains at our feature-branch-specific containers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker with nginx
&lt;/h3&gt;

&lt;p&gt;Most of the heavy lifting is done by the nginx proxy container. I found &lt;a href="https://github.com/jwilder/nginx-proxy" rel="noopener noreferrer"&gt;an excellent dockerized nginx proxy on GitHub&lt;/a&gt; to aid in this process. What makes this Docker image useful is that once it is running, it listens to the other &lt;code&gt;docker&lt;/code&gt; commands run on the host machine, in particular &lt;code&gt;docker run&lt;/code&gt;. It looks for environment variables attached to your &lt;code&gt;docker run&lt;/code&gt; command to work out how to update its internal configuration and point it at the app container being started.&lt;/p&gt;

&lt;p&gt;This reverse proxy image should be running in a container on your server. This container's role is to listen to other Docker commands that you run and automatically update the nginx configuration within this reverse proxy container.&lt;/p&gt;
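&lt;p&gt;Starting the proxy looks something like the following, per that project's README (the container name and port mappings here are my assumptions). Mounting the Docker socket read-only is what lets the proxy watch for other containers:&lt;/p&gt;

```shell
# Long-lived reverse proxy; it regenerates its internal nginx config
# whenever a container carrying VIRTUAL_HOST (and friends) starts or stops.
docker run -d \
  --name nginx-proxy \
  --restart unless-stopped \
  -p 80:80 -p 443:443 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy
```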

&lt;h2&gt;
  
  
  Passing Inputs to your Docker Run
&lt;/h2&gt;

&lt;p&gt;Once you have the nginx proxy running in a Docker container, you can leverage your CI pipeline (like CircleCI) to get the environment variables needed to pass to your server in the &lt;code&gt;docker run&lt;/code&gt; command. Just make sure that CircleCI is authorized to access your server via SSH. A secure way to do this is using SSH keys. Generate a key and &lt;a href="https://circleci.com/docs/2.0/add-ssh-key/" rel="noopener noreferrer"&gt;provide the private key to CircleCI&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Then, add the public key to the &lt;code&gt;~/.ssh/authorized_keys&lt;/code&gt; of your server. From there, make a bash script to manage your branch-specific containers. &lt;/p&gt;
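&lt;p&gt;A minimal sketch of that key setup, where the file name and comment are arbitrary choices of mine:&lt;/p&gt;

```shell
# Generate a dedicated, passphrase-less deploy key pair for CircleCI.
ssh-keygen -t ed25519 -C "circleci-deploy" -f circleci_deploy_key -N ""

# The private key (circleci_deploy_key) is pasted into CircleCI's
# SSH key settings; the public half is appended to the deploy user's
# authorized_keys on the server.
mkdir -p ~/.ssh
cat circleci_deploy_key.pub >> ~/.ssh/authorized_keys
```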

&lt;p&gt;&lt;a href="https://circleci.com/docs/2.0/env-vars/" rel="noopener noreferrer"&gt;CircleCI exposes a number of environment variables&lt;/a&gt; to your pipeline that you can use in your jobs to pass to your bash script.&lt;/p&gt;

&lt;p&gt;In my case, I used CircleCI to run &lt;code&gt;scp&lt;/code&gt; to copy a deployment script to the testing server. Once the bash file is there, it is executed with arguments that tell it which subdomain to use for this container. I used a sanitized version of the branch name; for example, &lt;code&gt;feature/ABC-42-my-feature&lt;/code&gt; becomes &lt;code&gt;feature_abc-42-my-feature.myexampledomain.com&lt;/code&gt;. That subdomain is also the alias for the container, allowing my script to stop a running container with outdated code when new commits are made to that branch.&lt;/p&gt;
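&lt;p&gt;That sanitization step can be sketched as a tiny shell function. The exact rules here (lowercase everything, then replace any character outside &lt;code&gt;a-z0-9-&lt;/code&gt; with an underscore) are an assumption that matches the example, not a quote of my actual script:&lt;/p&gt;

```shell
# Turn a git branch name into a subdomain-friendly label:
# lowercase it, then replace every character outside [a-z0-9-]
# with an underscore.
sanitize_branch() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9-]/_/g'
}

sanitize_branch "feature/ABC-42-my-feature"   # -> feature_abc-42-my-feature
```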

&lt;p&gt;Here's an example of the run command from my deployment script. It provides the environment variables (&lt;code&gt;VIRTUAL_HOST&lt;/code&gt;, &lt;code&gt;VIRTUAL_PROTO&lt;/code&gt; and &lt;code&gt;VIRTUAL_PORT&lt;/code&gt;) expected by the nginx reverse proxy container, which is listening to the Docker process on the host machine. When the proxy sees a &lt;code&gt;docker run&lt;/code&gt; command that includes those three variables, it updates its nginx configuration to point the virtual host at the container being started. This is the command run on the host machine to deploy; the environment variables themselves are defined in the CI pipeline.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --expose 443 -e VIRTUAL_HOST=${URL_SUBDOMAIN}.myexampledomain.com -e VIRTUAL_PROTO=https -e VIRTUAL_PORT=443 -d --rm=true --name ${CONTAINER_NAME} ${DOCKER_IMAGE}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The environment variables I am using here are set inside my bash script to ensure I pull the correct image, apply the right name to the container and assign the URL, port and protocol. Putting it together, we can add a "job" to our CircleCI configuration aliased as &lt;code&gt;deploy&lt;/code&gt;. This deploy job would look something like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;scp deploy-${IMAGE_NAME}.sh ${PROD_SERVER_USER}@${PROD_SERVER_HOST}:/root/
ssh -o StrictHostKeyChecking=no ${PROD_SERVER_USER}@${PROD_SERVER_HOST} "/bin/bash /root/deploy-${IMAGE_NAME}.sh $IMAGE_NAME:$TAG ${URL_SUBDOMAIN}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: &lt;code&gt;PROD_SERVER_USER&lt;/code&gt; and &lt;code&gt;PROD_SERVER_HOST&lt;/code&gt; are set in the &lt;code&gt;.circleci/config.yml&lt;/code&gt; file&lt;/p&gt;

&lt;p&gt;Note: &lt;code&gt;$IMAGE_NAME:$TAG&lt;/code&gt; and  &lt;code&gt;${URL_SUBDOMAIN}&lt;/code&gt; are passed as arguments to the deploy script and are used in our &lt;code&gt;docker run&lt;/code&gt; command above.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In the bash file, it is important to make sure there isn't already a container running with the same name; if there is, it should first be stopped and removed. &lt;/p&gt;
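&lt;p&gt;In the deploy script that guard can look roughly like this (a sketch using the variable names from the run command above, not my script verbatim):&lt;/p&gt;

```shell
# If a container for this branch is already running, stop it first.
# Since it was started with --rm, stopping it also removes it; the
# || true keeps the very first deploy from failing when no such
# container exists yet.
docker stop "${CONTAINER_NAME}" 2>/dev/null || true
```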

&lt;blockquote&gt;
&lt;p&gt;Note: In my case, because I was using HTTPS in order to gain access to the mobile device's camera, I needed to setup TLS with a wildcard domain so that I could add subdomains on the fly and still have TLS support.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;Now, anytime this repository receives updates to one of its feature branches, that code (assuming it passes all the steps defined in our CI/CD pipeline prior to deployment) will be deployed to our testing server on a custom subdomain.&lt;/p&gt;

&lt;p&gt;In my project, I took the branch name, sanitized it by removing any special characters, and used that as my subdomain. So, when I push commits to my branch called &lt;code&gt;feature/mobile-view&lt;/code&gt;, I could see my changes by visiting this subdomain: &lt;a href="https://feature_mobile-view.myexampledomain.com" rel="noopener noreferrer"&gt;https://feature_mobile-view.myexampledomain.com&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;The value here is that for your developers to see the code they're working on in a production-like environment, they only need to push their feature branch to the remote repository. Even if their code is incomplete, they can quickly prototype and test features that would otherwise be difficult to exercise in a local environment, like mobile device features. &lt;/p&gt;

&lt;h2&gt;
  
  
  Further Improvements
&lt;/h2&gt;

&lt;p&gt;We got what we wanted, but that doesn't mean it's perfect. There are many ways we could continue to polish this solution. Let's look at a few.&lt;/p&gt;

&lt;h3&gt;
  
  
  HTTPS
&lt;/h3&gt;

&lt;p&gt;Although we didn't dig into it here, it is convenient to add a wildcard TLS certificate to the server you are running. You can also look at &lt;a href="https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion" rel="noopener noreferrer"&gt;https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion&lt;/a&gt; to see how people have combined a separate container to automatically generate the certificates for the Docker containers being spun up. Similar to the image we used for the proxy, this one would also listen to your &lt;code&gt;docker run&lt;/code&gt; commands for environment variables to automate this process.&lt;/p&gt;
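&lt;p&gt;As a hedged sketch based on that companion project's README (the paths and container names are my assumptions), wiring it in could look like this:&lt;/p&gt;

```shell
# The companion shares the proxy's volumes and watches the same Docker
# socket, issuing Let's Encrypt certificates for new containers.
docker run -d \
  --name letsencrypt-companion \
  --volumes-from nginx-proxy \
  -v /path/to/certs:/etc/nginx/certs:rw \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  jrcs/letsencrypt-nginx-proxy-companion

# App containers then opt in to automatic certificates by adding, e.g.:
#   -e LETSENCRYPT_HOST=${URL_SUBDOMAIN}.myexampledomain.com
#   -e LETSENCRYPT_EMAIL=admin@myexampledomain.com
```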

&lt;p&gt;In the project that inspired this setup, I needed HTTPS support to access the device's camera and accelerometer, so, given that this was a fun little side project, manually uploading a single wildcard certificate was sufficient for me.&lt;/p&gt;

&lt;p&gt;Another alternative is to put this solution behind an application load balancer (ALB) on AWS. Do pay attention to which services are compatible with the free tier of new AWS accounts; otherwise, be warned that fleshing out the solution further will involve costs.&lt;/p&gt;

&lt;h3&gt;
  
  
  nginx Restarting
&lt;/h3&gt;

&lt;p&gt;In case the nginx proxy container ever crashes, you'll want to define a restart policy. A simple addition is to pass &lt;code&gt;--restart unless-stopped&lt;/code&gt; to the &lt;code&gt;docker run&lt;/code&gt; command for your nginx reverse proxy. But you also need to think about what happens if the server restarts. There are many tools for starting containers on boot; even just systemd, turning the container into a service, would meet these requirements. &lt;/p&gt;
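&lt;p&gt;As one hedged option, a minimal systemd unit (the unit name and docker binary path here are assumptions) would bring the proxy container back after a host reboot:&lt;/p&gt;

```shell
# Write a unit that starts the already-created nginx-proxy container on boot.
sudo tee /etc/systemd/system/nginx-proxy.service &lt;&lt;'EOF' &gt;/dev/null
[Unit]
Description=nginx reverse proxy container
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/docker start -a nginx-proxy
ExecStop=/usr/bin/docker stop nginx-proxy
Restart=always

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now nginx-proxy
```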

&lt;h3&gt;
  
  
  Deployment Script
&lt;/h3&gt;

&lt;p&gt;The deployment script resided in the code repository so that CircleCI would have access to it when it pulled the code. Alternatively, it could reside on the deployment server instead. There are pros and cons either way; discuss with your team which approach works best for you.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS EC2 Configuration
&lt;/h3&gt;

&lt;p&gt;There is a lot that could be said here, from security to resource management and configuration automation. One thing to remember: always start with the fewest permissions and add more as needed. If you're using EC2, leverage the user data script for your AWS instance to ensure your server is fully configured and has all the services running to keep the nginx-proxy online. Write cron scripts to automatically remove containers that have been online for longer than "X" days; if there is a new commit to a particular branch, that container will be refreshed, so old containers likely represent feature branches that have already been merged. &lt;/p&gt;
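&lt;p&gt;A hedged sketch of such a cleanup job (the one-week threshold and the name filter are arbitrary choices of mine):&lt;/p&gt;

```shell
# cleanup-stale-containers.sh: stop branch containers that have been
# running for a week or more, skipping the long-lived proxy itself.
# Containers started with --rm remove themselves once stopped.
docker ps --format '{{.Names}}\t{{.RunningFor}}' \
  | grep -v '^nginx-proxy' \
  | awk -F'\t' '$2 ~ /week|month/ {print $1}' \
  | xargs -r docker stop

# Then schedule it, e.g. in root's crontab:
#   0 3 * * * /root/cleanup-stale-containers.sh
```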

&lt;p&gt;Speaking of which...&lt;/p&gt;

&lt;h3&gt;
  
  
  GitHub Webhooks
&lt;/h3&gt;

&lt;p&gt;Configuring webhooks to ping your server when a branch gets merged or deleted would be another way to ensure you don't have too many inactive containers on your server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Hopefully this gives you an idea of what needs to be considered when updating your CI/CD pipeline to support feature-branch-specific environments, and how easy it can be. Just remember the goal: fast feedback. &lt;/p&gt;

&lt;p&gt;I am confident that if you implement a solution similar to that described above, your developers will let you know how much they love it, ranting and raving about how great it is. Just be careful. When they eventually move to a new project, they might just realize how much you've spoiled them ;)&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>aws</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
