<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dave</title>
    <description>The latest articles on DEV Community by Dave (@dvwbr).</description>
    <link>https://dev.to/dvwbr</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F13505%2F08bde6e3-2936-4b72-b9e6-0f84e7d08029.jpeg</url>
      <title>DEV Community: Dave</title>
      <link>https://dev.to/dvwbr</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dvwbr"/>
    <language>en</language>
    <item>
      <title>Building a CI/CD Pipeline for Shoelace</title>
      <dc:creator>Dave</dc:creator>
      <pubDate>Wed, 02 May 2018 04:06:47 +0000</pubDate>
      <link>https://dev.to/dvwbr/building-a-cicd-pipeline-for-shoelace-4de7</link>
      <guid>https://dev.to/dvwbr/building-a-cicd-pipeline-for-shoelace-4de7</guid>
      <description>&lt;h1&gt;
  
  
  AWS Beanstalk and a CD Pipeline
&lt;/h1&gt;

&lt;p&gt;Over the past few months at Shoelace, we’ve slowly moved our cloud-hosted infrastructure from DigitalOcean to Amazon Web Services. While DigitalOcean served us well for a long time, we wanted a more feature-rich service that would simplify automated deployments and load balancing. Although there are a lot of options from big-name companies for cloud hosting, Amazon was the easy choice for us, given our experience with the service in other areas such as S3 and my past experience with the platform.&lt;/p&gt;

&lt;h1&gt;
  
  
  Problem space
&lt;/h1&gt;

&lt;p&gt;We had a decent-sized fleet of servers (droplets) hosted on DigitalOcean, each with our latest production code checked out and running in a PM2-managed instance of NodeJS. That meant that whenever we wanted to deploy a new version to production, we had to manually SSH into six (!) different servers, pull the latest from Git, rebuild via Grunt, and reload Node via PM2. While not a difficult or unmanageable amount of work, it became tedious and wasteful once we wanted to go to production multiple times per day. We didn't want to be held back by an archaic deployment process, so we set out to find something better.&lt;/p&gt;

&lt;h1&gt;
  
  
  Goal
&lt;/h1&gt;

&lt;p&gt;Primarily, we wanted to reduce the amount of work required to deploy. We wanted to eliminate any obstacle in the way of deploying easily, quickly and safely. Achieving that would allow us to iterate faster, giving us greater agility in responding to market trends, API changes, and bug reports.&lt;/p&gt;

&lt;p&gt;I had some prior experience with AWS, which influenced our decision as a company to move to the platform (plus, the free credits afforded to us given our status as a startup didn't hurt). Knowing the flexibility that AWS Beanstalk granted out of the box, we also wanted to add load balancing at minimal cost - something that DigitalOcean didn't support at the time. We would also be able to scale our instances up and down according to traffic with AWS auto-scaling policies. So, with lots of benefits available out of the box, we decided to move forward.&lt;/p&gt;

&lt;h1&gt;
  
  
  Approach
&lt;/h1&gt;

&lt;p&gt;I first started by Dockerizing all of our projects. Running Docker in production gave us peace of mind that versions would be identical across every environment. Since our stack is built on NodeJS, I used a base image from Node on &lt;a href="https://hub.docker.com/_/node/"&gt;Docker Hub&lt;/a&gt;. Our Docker images are pretty simple, as their main job is to take a snapshot of the code base at the time the image is created. We use Docker's &lt;code&gt;RUN&lt;/code&gt; instruction to execute &lt;code&gt;npm install&lt;/code&gt; and &lt;code&gt;grunt build&lt;/code&gt;, and we use PM2's Docker entry point as our default command.&lt;/p&gt;
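
&lt;p&gt;A minimal Dockerfile along these lines might look like the following sketch; the Node version, port, build task, and entry point file are illustrative assumptions rather than our exact setup.&lt;/p&gt;

```dockerfile
# Hypothetical sketch of the Dockerfile shape described above
FROM node:8

WORKDIR /usr/src/app

# Snapshot the code base at image-creation time
COPY . .

# Install dependencies and build the project
RUN npm install
RUN ./node_modules/.bin/grunt build

EXPOSE 3000

# pm2-docker keeps the Node process in the foreground for Docker
CMD ["./node_modules/.bin/pm2-docker", "start", "server.js"]
```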

&lt;p&gt;From there, I manually set up AWS Beanstalk environments. I enabled load balancing and auto-scaling with default triggers, and a NAT gateway to route outbound traffic through one IP address. This simplifies our interactions with the Facebook API, which uses a whitelist to validate requests: instead of having to update the whitelist every time a new EC2 instance is spun up under our load balancer, the NAT gateway ensures a single IP address is sending requests to Facebook. I chose immutable deployments across our EC2 instances to eliminate the chance of users interacting with a partially rolled-out deployment.&lt;/p&gt;

&lt;p&gt;Next up, I wrote a custom deploy script to tie the continuous deployment flow together. The script has a few main components.&lt;/p&gt;

&lt;p&gt;First, it builds the Docker image from the current directory and pushes that image out to AWS Elastic Container Registry (ECR). I could have used Docker Hub here, but given that the rest of our cloud infrastructure was already on Amazon, keeping everything under the Amazon umbrella seemed simpler.&lt;/p&gt;

&lt;p&gt;Next, a ZIP file is created to store a Dockerrun.aws.json file, which Beanstalk uses to configure applications in an Elastic Beanstalk Docker environment, and some low-level Beanstalk configuration files to ensure that our network configuration is maintained across environment updates. The Dockerrun file is how Amazon will later know which Docker image to pull, in order to update our environment. This ZIP file is pushed to S3.&lt;/p&gt;
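
&lt;p&gt;For illustration, a single-container &lt;code&gt;Dockerrun.aws.json&lt;/code&gt; has the general shape below; the registry URL, image tag, and port are placeholders, not our real values.&lt;/p&gt;

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "3000" }
  ]
}
```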

&lt;p&gt;Afterwards, a new Beanstalk version is created based on the ZIP file we just pushed to S3, and then we trigger an environment update using the newly created application version.&lt;/p&gt;
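
&lt;p&gt;Put together, the deploy script boils down to a handful of Docker and AWS CLI calls. Here is a rough sketch of that flow; the registry, bucket, application, and environment names are placeholders.&lt;/p&gt;

```shell
#!/bin/bash
set -e

# Placeholder identifiers -- substitute your own
REPO=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app
VERSION=$(git rev-parse --short HEAD)
BUCKET=my-deploy-bucket
APP=my-app
ENV=my-app-production

# 1. Build the image from the current directory and push it to ECR
docker build -t "$REPO:$VERSION" .
docker push "$REPO:$VERSION"

# 2. Bundle Dockerrun.aws.json (plus low-level Beanstalk config) and push to S3
zip -r "$VERSION.zip" Dockerrun.aws.json .ebextensions
aws s3 cp "$VERSION.zip" "s3://$BUCKET/$VERSION.zip"

# 3. Register a new Beanstalk application version from the bundle
aws elasticbeanstalk create-application-version \
  --application-name "$APP" \
  --version-label "$VERSION" \
  --source-bundle "S3Bucket=$BUCKET,S3Key=$VERSION.zip"

# 4. Trigger an environment update to the new version
aws elasticbeanstalk update-environment \
  --environment-name "$ENV" \
  --version-label "$VERSION"
```

&lt;p&gt;Labelling the bundle with the Git SHA makes it easy to trace any Beanstalk application version back to the commit that produced it.&lt;/p&gt;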

&lt;p&gt;To tie this all together, and to provide the continuous integration and testing piece, we integrated with &lt;a href="https://circleci.com"&gt;CircleCI&lt;/a&gt;. Once our projects were added, I updated our CircleCI configuration to execute our deployment script against AWS once our production branch on GitHub built successfully (i.e., our linter and automated tests passed).&lt;/p&gt;
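
&lt;p&gt;Assuming a CircleCI 1.0-style &lt;code&gt;circle.yml&lt;/code&gt;, the hook might look like this; the script name and test commands are placeholders rather than our real configuration.&lt;/p&gt;

```yaml
# Hypothetical circle.yml: run checks on every build,
# and deploy only when the production branch passes
test:
  override:
    - npm run lint
    - npm test

deployment:
  production:
    branch: production
    commands:
      - ./deploy.sh
```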

&lt;p&gt;And that's it! Now, whenever we cut a production branch on GitHub, our code is automatically tested and deployed to AWS. No more manual deployments.&lt;/p&gt;

</description>
      <category>ci</category>
      <category>cd</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Google Analytics with NodeJS</title>
      <dc:creator>Dave</dc:creator>
      <pubDate>Mon, 30 Apr 2018 01:24:17 +0000</pubDate>
      <link>https://dev.to/dvwbr/google-analytics-with-nodejs-9n2</link>
      <guid>https://dev.to/dvwbr/google-analytics-with-nodejs-9n2</guid>
      <description>&lt;h1&gt;
  
  
  Why Google Analytics?
&lt;/h1&gt;

&lt;p&gt;As part of our ongoing drive to automate expertise at Shoelace, we’ve decided to integrate with Google Analytics, and my focus lately has been seeing this through to fruition. In our early-stage integration, Google Analytics will play a vital role in helping us better understand the size of the retargeting audience available to a particular store, allowing us to offer better retargeting advertising campaigns.&lt;/p&gt;

&lt;h1&gt;
  
  
  Google Analytics and NodeJS
&lt;/h1&gt;

&lt;p&gt;Google has released an alpha version of their client library for NodeJS, which can be found &lt;a href="https://github.com/google/google-api-nodejs-client"&gt;on Github&lt;/a&gt;. One of the main advantages of using a client library supported by Google is automatic token refreshing, so you don’t have to put much effort into ensuring the access token you’ve received is still valid.&lt;/p&gt;

&lt;p&gt;Another incredibly useful resource was Google Analytics’ &lt;a href="https://ga-dev-tools.appspot.com/query-explorer/"&gt;Query Explorer&lt;/a&gt;. I found it really helpful over the development process to be able to rely on this as a means of verifying the data I received from the API.&lt;/p&gt;

&lt;p&gt;As of &lt;a href="https://www.npmjs.com/package/googleapis"&gt;googleapis&lt;/a&gt; version 28, native async/await are supported, making calls to the GA API much cleaner to read and process.&lt;/p&gt;

&lt;h1&gt;
  
  
  Getting authorization
&lt;/h1&gt;

&lt;p&gt;We use OAuth2 to let users grant us access to their GA account. Once authorized, we store their access token and refresh token, and use these to authenticate requests for data in their account.&lt;/p&gt;

&lt;h1&gt;
  
  
  Authenticating and making requests
&lt;/h1&gt;

&lt;p&gt;When we prepare to make requests to the GA API, we use the OAuth2 client to set credentials - in this case, a plain object containing two keys, &lt;code&gt;access_token&lt;/code&gt; and &lt;code&gt;refresh_token&lt;/code&gt;, with the corresponding data. Here's an example of using an OAuth2 client to get an analytics API object that's ready to make requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const oauth2Client = new OAuth2(CLIENT_ID, CLIENT_SECRET, REDIRECT_URL);
const credentials = { refresh_token: 'REFRESH_TOKEN', access_token: 'ACCESS_TOKEN' };
oauth2Client.setCredentials(credentials);
const analyticsAPI = googleApi.analytics({ version: 'v3', auth: oauth2Client });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can use &lt;code&gt;analyticsAPI&lt;/code&gt; now to make requests:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;await analyticsAPI.management.profiles.list({ accountId: '~all', webPropertyId: '~all' });&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The above will fetch all profiles (views) available for a particular user.&lt;/p&gt;

&lt;h1&gt;
  
  
  Shoelace and GA
&lt;/h1&gt;

&lt;p&gt;Right now at Shoelace, we're most interested in understanding retargeting audience sizes. So, when a user integrates GA into their Shoelace account, we also keep track of an appropriate view to query against (we highly recommend to our users that they select their default, unfiltered view, for best results). With that, we dynamically generate a segment to isolate traffic that's targeted towards their Shopify domain, and from there we can understand different metrics of users across a range of time spans.&lt;/p&gt;

&lt;p&gt;We're excited to continue to grow this integration and leverage a lot more data that's tracked by GA.&lt;/p&gt;

</description>
      <category>googleanalytics</category>
      <category>node</category>
      <category>shoelace</category>
    </item>
  </channel>
</rss>
