<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Robert Reiz</title>
    <description>The latest articles on DEV Community by Robert Reiz (@reiz).</description>
    <link>https://dev.to/reiz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F327690%2Fdd65253d-9420-44e0-8dcd-fb25f4471c70.jpg</url>
      <title>DEV Community: Robert Reiz</title>
      <link>https://dev.to/reiz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/reiz"/>
    <language>en</language>
    <item>
      <title>Deploy Rails static assets to CloudFront CDN - during Docker build time</title>
      <dc:creator>Robert Reiz</dc:creator>
      <pubDate>Sun, 10 Jan 2021 10:56:57 +0000</pubDate>
      <link>https://dev.to/reiz/deploy-rails-static-assets-to-cloudfront-cdn-during-docker-build-time-2k1f</link>
      <guid>https://dev.to/reiz/deploy-rails-static-assets-to-cloudfront-cdn-during-docker-build-time-2k1f</guid>
      <description>&lt;p&gt;&lt;a href="https://rubyonrails.org/"&gt;Ruby on Rails&lt;/a&gt; is a great Framework to build modern web applications. By default, all the static assets like CSS, JavaScript, and images, are served directly from the Ruby server. That works fine but doesn't offer the best performance. A ruby server like Puma or Unicorn is not optimized to serve static assets. A better choice would be to server the static assets from an Nginx instance. And even better than Nginx would be to serve the static assets from a CDN (Content Delivery Network). &lt;a href="https://aws.amazon.com/cloudfront/"&gt;CloudFront&lt;/a&gt; is the CDN from Amazon. If you are using AWS anyway, that's your goto CDN. &lt;/p&gt;

&lt;h2&gt;
  
  
  Assumptions
&lt;/h2&gt;

&lt;p&gt;This article assumes: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Basic knowledge about Ruby, Git, GitHub Actions, Docker and the Rails framework&lt;/li&gt;
&lt;li&gt;The application uses a GitHub Action for deployment&lt;/li&gt;
&lt;li&gt;The application uses AWS ECS Fargate for running Docker containers&lt;/li&gt;
&lt;li&gt;The application uses AWS S3 and AWS CloudFront for static assets&lt;/li&gt;
&lt;li&gt;The AWS infrastructure is already set up and is not a topic of this article.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  S3 and CloudFront
&lt;/h2&gt;

&lt;p&gt;A CloudFront distribution is backed by an S3 bucket as its origin. The content of the S3 bucket is then mirrored by CloudFront's edge locations. That means, during deployment, we need to upload our static assets to the right S3 bucket, the one that is linked to our CloudFront distribution. &lt;br&gt;
If you want to learn how to correctly set up CloudFront &amp;amp; S3, read &lt;a href="https://medium.com/@tranduchanh.ms/optimize-rails-app-performance-with-rails-amazon-cloudfront-e3b305f1e86c"&gt;this article&lt;/a&gt; or the official &lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html"&gt;AWS docs&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Rails configuration
&lt;/h2&gt;

&lt;p&gt;In your Rails application under &lt;code&gt;config/environments/production.rb&lt;/code&gt; you can configure an asset host like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Enable serving of images, CSS, and JS from an asset server.&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;action_controller&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;asset_host&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;ENV&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'RAILS_ASSET_HOST'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the example above the asset host is pulled from the ENV variable &lt;code&gt;RAILS_ASSET_HOST&lt;/code&gt;, which is set during deployment. If you deploy your Rails application to ECS Fargate, you will have an ECS task-definition.json for your application somewhere. In the environment section of that task-definition.json, you would set the &lt;code&gt;RAILS_ASSET_HOST&lt;/code&gt; ENV variable, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; 
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"RAILS_ASSET_HOST"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; 
  &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://d32v8iqllp6n8e.cloudfront.net/"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; 
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The ENV variable is pointing directly to your CloudFront CDN URL. &lt;/p&gt;
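
&lt;p&gt;Once &lt;code&gt;asset_host&lt;/code&gt; is set, Rails helpers like &lt;code&gt;image_tag&lt;/code&gt; and &lt;code&gt;stylesheet_link_tag&lt;/code&gt; prefix the compiled asset paths with the CDN URL. A hypothetical example (the digest in the file name is generated by &lt;code&gt;assets:precompile&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;%= image_tag("logo.png") %&amp;gt;
&amp;lt;!-- renders something like: --&amp;gt;
&amp;lt;img src="https://d32v8iqllp6n8e.cloudfront.net/assets/logo-0b3c4d5e.png" /&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;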

&lt;h2&gt;
  
  
  GitHub Action
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/features/actions"&gt;GitHub Actions&lt;/a&gt; are a great way to trigger tests, builds, and deployments. Your GitHub Action configuration might look like this one in &lt;code&gt;.github/workflows/aws-test-deploy.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;aws/test&lt;/span&gt;

&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy to AWS Test Cluster&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure AWS credentials&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v1&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;aws-access-key-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ACCESS_KEY_ID }}&lt;/span&gt;
        &lt;span class="na"&gt;aws-secret-access-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_SECRET_ACCESS_KEY }}&lt;/span&gt;
        &lt;span class="na"&gt;aws-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eu-central-1&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Login to Amazon ECR&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;login-ecr&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/amazon-ecr-login@v1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above configuration tells GitHub to trigger the Action on each git push to the &lt;code&gt;aws/test&lt;/code&gt; branch. The Action will be performed on the latest Ubuntu instance. The current source code will be checked out from the Git repository into the Ubuntu instance. Furthermore, the &lt;code&gt;aws-actions&lt;/code&gt; module will be configured with the AWS credentials we stored in the GitHub secret store of the Git repository.&lt;/p&gt;

&lt;p&gt;The next part of the config file contains the important part:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build, tag, and push image to Amazon ECR&lt;/span&gt;
  &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build-image&lt;/span&gt;
  &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;ECR_REGISTRY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ steps.login-ecr.outputs.registry }}&lt;/span&gt;
    &lt;span class="na"&gt;ECR_REPOSITORY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ve/web-test&lt;/span&gt;
    &lt;span class="na"&gt;IMAGE_TAG&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.sha }}&lt;/span&gt;
    &lt;span class="na"&gt;TEST_AWS_CONFIG&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.TEST_AWS_CONFIG }}&lt;/span&gt;
    &lt;span class="na"&gt;TEST_AWS_CREDENTIALS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.TEST_AWS_CREDENTIALS }}&lt;/span&gt;
    &lt;span class="na"&gt;TEST_RAILS_MASTER_KEY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.TEST_RAILS_MASTER_KEY }}&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;mkdir .aws&lt;/span&gt;
    &lt;span class="s"&gt;echo "$TEST_AWS_CONFIG" &amp;gt; .aws/config&lt;/span&gt;
    &lt;span class="s"&gt;echo "$TEST_AWS_CREDENTIALS" &amp;gt; .aws/credentials&lt;/span&gt;
    &lt;span class="s"&gt;echo "$TEST_RAILS_MASTER_KEY" &amp;gt; config/master.key&lt;/span&gt;
    &lt;span class="s"&gt;echo "test-ve-web-assets" &amp;gt; s3_bucket.txt&lt;/span&gt;
    &lt;span class="s"&gt;docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .&lt;/span&gt;
    &lt;span class="s"&gt;docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG&lt;/span&gt;
    &lt;span class="s"&gt;echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we set a bunch of ENV variables: the ECR Docker registry on AWS, the repository name, and the image tag, which is equal to the latest commit SHA of the current branch. &lt;/p&gt;

&lt;p&gt;Then we set the ENV variable &lt;code&gt;TEST_AWS_CONFIG&lt;/code&gt; to the value of the GitHub secret &lt;code&gt;${{ secrets.TEST_AWS_CONFIG }}&lt;/code&gt;, which contains the regular AWS config file content for the current runtime. On your local machine, you find that file at &lt;code&gt;~/.aws/config&lt;/code&gt;. Usually, it looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[default]
region = eu-central-1
output = json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we set the ENV variable &lt;code&gt;TEST_AWS_CREDENTIALS&lt;/code&gt; to the value of the GitHub secret &lt;code&gt;${{ secrets.TEST_AWS_CREDENTIALS }}&lt;/code&gt;, which contains the AWS credentials for the current runtime. On your local machine, you find that file at &lt;code&gt;~/.aws/credentials&lt;/code&gt;. Usually, it looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[default]
aws_access_key_id = ABCDEF123456789
aws_secret_access_key = abcdefghijklmno/123456789
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;TEST_AWS_CREDENTIALS&lt;/code&gt; variable has to contain AWS credentials that have permission to upload files to our corresponding S3 bucket. &lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;run&lt;/code&gt; section we write the content of &lt;code&gt;TEST_AWS_CONFIG&lt;/code&gt; into the file &lt;code&gt;.aws/config&lt;/code&gt; in the current working directory, the content of &lt;code&gt;TEST_AWS_CREDENTIALS&lt;/code&gt; into &lt;code&gt;.aws/credentials&lt;/code&gt;, and the name of the S3 bucket ("test-ve-web-assets") into the file &lt;code&gt;s3_bucket.txt&lt;/code&gt;. &lt;br&gt;
Now all the credentials needed for the S3 upload are in the current working directory, and we can start building our Docker image with &lt;code&gt;docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Dockerfile
&lt;/h2&gt;

&lt;p&gt;Our Dockerfile describes a so-called &lt;a href="https://docs.docker.com/develop/develop-images/multistage-build/"&gt;multi-stage build&lt;/a&gt;. Multi-stage builds are a great way to clean up Docker layers that contain sensitive information, like for example AWS credentials. &lt;/p&gt;

&lt;p&gt;Our Dockerfile starts like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM versioneye/base-web:1.2.0 AS builderAssets

WORKDIR /usr/src/app_build

COPY .aws/config /root/.aws/config
COPY .aws/credentials /root/.aws/credentials
COPY . .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As base image we start with &lt;code&gt;versioneye/base-web:1.2.0&lt;/code&gt;, which is a preconfigured Alpine Docker image with some preinstalled Ruby &amp;amp; Node dependencies. It's based on the &lt;code&gt;ruby:2.7.1-alpine&lt;/code&gt; Docker image. &lt;/p&gt;

&lt;p&gt;We copy our &lt;code&gt;.aws/config&lt;/code&gt; to &lt;code&gt;/root/.aws/config&lt;/code&gt; and our &lt;code&gt;.aws/credentials&lt;/code&gt; to &lt;code&gt;/root/.aws/credentials&lt;/code&gt;, because the AWS CLI looks for those files there by default. &lt;/p&gt;

&lt;p&gt;We copy all files from the current git branch to the working directory in the Docker image at &lt;code&gt;/usr/src/app_build&lt;/code&gt;. Now we have all files in place inside the Docker image. &lt;/p&gt;

&lt;p&gt;As a next step we need to install the AWS CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install AWS CLI
RUN apk add python3; \
    apk add curl; \
    mkdir /usr/src/pip; \
    (cd /usr/src/pip &amp;amp;&amp;amp; curl -O https://bootstrap.pypa.io/get-pip.py); \
    (cd /usr/src/pip &amp;amp;&amp;amp; python3 get-pip.py --user); \
    /root/.local/bin/pip install awscli --upgrade --user;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the AWS CLI is installed and the AWS credentials are in the right place. In the next step we will: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;delete unnecessary files from the current working dir. &lt;/li&gt;
&lt;li&gt;install NPM dependencies&lt;/li&gt;
&lt;li&gt;install Gem dependencies &lt;/li&gt;
&lt;li&gt;precompile the static Rails assets&lt;/li&gt;
&lt;li&gt;upload the static Rails assets to our S3 bucket
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Compile assets and upload to S3
RUN rm -Rf .bundle; \
    rm -Rf .aws; \
    rm -Rf .git; \
    rm bconfig; \
    yarn install; \
    bundle config set without 'development test'; \
    bundle install; \
    NO_DB=true rails assets:precompile; \
    /root/.local/bin/aws s3 sync ./public/ s3://`cat s3_bucket.txt`/ --acl public-read; \
    /root/.local/bin/aws s3 sync ./public/assets s3://`cat s3_bucket.txt`/assets --acl public-read; \
    /root/.local/bin/aws s3 sync ./public/assets/font-awesome s3://`cat s3_bucket.txt`/assets/font-awesome --acl public-read; \
    /root/.local/bin/aws s3 sync ./public/packs s3://`cat s3_bucket.txt`/packs --acl public-read; \
    /root/.local/bin/aws s3 sync ./public/packs/js s3://`cat s3_bucket.txt`/packs/js --acl public-read;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the last 5 lines we simply use the AWS CLI to sync files from inside the Docker image to the S3 bucket defined in the &lt;code&gt;s3_bucket.txt&lt;/code&gt; file. The AWS CLI as it currently runs on Alpine Linux doesn't support recursive uploads; that's why we run the sync command for each directory separately. &lt;/p&gt;
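
&lt;p&gt;The five sync commands could also be sketched as a single shell loop over the directories (same flags, same bucket file; just a sketch of the equivalent):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for dir in "" assets assets/font-awesome packs packs/js; do \
    /root/.local/bin/aws s3 sync ./public/$dir s3://`cat s3_bucket.txt`/$dir --acl public-read; \
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;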

&lt;p&gt;Now the static Rails assets are uploaded to AWS S3/CloudFront. But the AWS credentials are still stored in the Docker layers. If we published that Docker image to a public Docker registry, somebody could fish the AWS credentials out of the Docker layers and compromise our application. That's why we are using a Docker multi-stage build to prevent that from happening.&lt;br&gt;
The next part of the Dockerfile looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM versioneye/base-web:1.2.0 as builderDeps

COPY --from=builderAssets /usr/src/app_build /usr/src/app

WORKDIR /usr/src/app

RUN yarn install --production=true; \
    bundle config set without 'development test'; \
    bundle install;

EXPOSE 8080

CMD bundle exec puma -C config/puma.rb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above lines start pretty much a completely new Docker build. We start again with our Docker base image and copy all files from the previous stage's &lt;code&gt;/usr/src/app_build&lt;/code&gt; into &lt;code&gt;/usr/src/app&lt;/code&gt; in the current stage. We install the dependencies for Node.js and Ruby again, expose port 8080, and set the run command with &lt;code&gt;CMD&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;The Docker image we get out of this build does NOT include any AWS credentials, and also no AWS CLI! It contains only the application code and the corresponding dependencies. &lt;/p&gt;
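
&lt;p&gt;One way to sanity-check this is to inspect the layer history of the final image (the image name below is just a placeholder); none of the first stage's &lt;code&gt;COPY&lt;/code&gt; or &lt;code&gt;RUN&lt;/code&gt; instructions should show up, because only the second stage's layers end up in the image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker history --no-trunc my-registry/ve/web-test:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;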

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;We are using a Docker multi-stage build to upload static files to S3 and to leave no traces behind. We use the first stage to install the AWS CLI, put the AWS secret credentials in place, and perform the actual upload of the static assets to S3/CloudFront. &lt;/p&gt;

&lt;p&gt;We use the 2nd stage to install the application dependencies and to configure the port and the &lt;code&gt;CMD&lt;/code&gt; command. The Docker image we get after the 2nd stage includes NO AWS secrets and no AWS CLI. &lt;/p&gt;

&lt;p&gt;Let me know what you think about this strategy. Do you find it useful? Any improvements? &lt;/p&gt;

</description>
      <category>ruby</category>
      <category>rails</category>
      <category>docker</category>
      <category>deployment</category>
    </item>
    <item>
      <title>Nginx forward proxy - as Docker image</title>
      <dc:creator>Robert Reiz</dc:creator>
      <pubDate>Fri, 08 Jan 2021 10:26:05 +0000</pubDate>
      <link>https://dev.to/reiz/nginx-forward-proxy-as-docker-image-1g7</link>
      <guid>https://dev.to/reiz/nginx-forward-proxy-as-docker-image-1g7</guid>
      <description>&lt;p&gt;&lt;a href="https://nginx.org/en/"&gt;Nginx&lt;/a&gt; is a very fast HTTP and reverse proxy server. Usually, Nginx is used to serve and cache static assets or as proxy or load balancer for incoming traffic to application servers. But it can be used as forward proxy as well.&lt;/p&gt;

&lt;p&gt;Assume you have a network where you want to control outgoing traffic. You either want to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deny all outgoing calls by default and only allow HTTP(S) calls to whitelisted URLs.&lt;/li&gt;
&lt;li&gt;Allow all outgoing calls by default and only block HTTP(S) calls to blacklisted URLs.&lt;/li&gt;
&lt;/ul&gt;
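
&lt;p&gt;The core of such a forward proxy configuration is small. A minimal HTTP-only sketch (the port and resolver here are assumptions; host-based filtering would typically be added on top, for example with a &lt;code&gt;map&lt;/code&gt; on &lt;code&gt;$http_host&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 8888;
    # Nginx needs a resolver to look up the hosts clients request
    resolver 8.8.8.8;

    location / {
        # forward the request to whatever host the client asked for
        proxy_pass http://$http_host$request_uri;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;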

&lt;p&gt;The Docker daemon can be configured to route all traffic through a proxy. This proxy can be an Nginx instance which is configured as a forward proxy.&lt;/p&gt;
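
&lt;p&gt;On a systemd-based host, the Docker daemon picks the proxy up from its environment, for example via a drop-in unit file (a sketch; the proxy address is an assumption):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://nginx-proxy.internal:8888"
Environment="HTTPS_PROXY=http://nginx-proxy.internal:8888"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After adding the file, reload and restart the daemon with &lt;code&gt;systemctl daemon-reload&lt;/code&gt; and &lt;code&gt;systemctl restart docker&lt;/code&gt;.&lt;/p&gt;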

&lt;p&gt;I have built a Docker image which contains Nginx preconfigured as a forward proxy. Check out the full notes and the Docker image at &lt;a href="https://github.com/reiz/nginx_proxy"&gt;this GitHub repo&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>nginx</category>
      <category>proxy</category>
      <category>docker</category>
    </item>
    <item>
      <title>Redirects with AWS S3</title>
      <dc:creator>Robert Reiz</dc:creator>
      <pubDate>Sat, 05 Dec 2020 11:13:10 +0000</pubDate>
      <link>https://dev.to/reiz/redirects-with-aws-s3-886</link>
      <guid>https://dev.to/reiz/redirects-with-aws-s3-886</guid>
      <description>&lt;p&gt;Assume you have a web application deployed on AWS ECS Fargate, with an ALB (Application Load Balancer) in front of it. The ALB is doing the SSL termination as well. Your domain &lt;code&gt;mydomain.com&lt;/code&gt; is mapped to the ALB and everything works fine! &lt;/p&gt;

&lt;p&gt;But now you want to redirect the traffic from &lt;code&gt;www.mydomain.com&lt;/code&gt; to your main domain &lt;code&gt;mydomain.com&lt;/code&gt;. Of course you could create another ALB for that, but that would increase your monthly AWS bill. A much better and cheaper solution is to use S3 for that. &lt;/p&gt;

&lt;p&gt;First, let's create a new S3 bucket for that purpose. If you work with &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt;, you just need to customize this code snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket"&lt;/span&gt; &lt;span class="s2"&gt;"www_redirect"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;s3_www&lt;/span&gt;
  &lt;span class="nx"&gt;acl&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"public-read"&lt;/span&gt;

  &lt;span class="nx"&gt;website&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;redirect_all_requests_to&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://mydomain.com"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;Name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;s3_www&lt;/span&gt;
    &lt;span class="nx"&gt;Environment&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;environment&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you don't use &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt;, simply read on to find out how to do it manually. &lt;/p&gt;

&lt;p&gt;That code snippet creates a new, empty S3 bucket with public read permissions. The bucket doesn't contain any files/objects. It is configured so that it redirects all requests immediately to &lt;code&gt;https://mydomain.com&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Now we only need to map the subdomain &lt;code&gt;www&lt;/code&gt; to that S3 bucket. In your DNS zone file you have to create an &lt;code&gt;A&lt;/code&gt; record that points to the S3 website endpoint. If your domain is managed by AWS Route 53, navigate to the desired zone file and click "Create record". Then you should see something like this: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bZm4mv5k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dputzaxab3onyp3csa2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bZm4mv5k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dputzaxab3onyp3csa2o.png" alt="Route 53 routing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Simple routing is fine for us. In the next form you have to define the subdomain "www" and choose the Endpoint to which it should be mapped. Here we have to choose "Alias to S3 website endpoint". &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mL-toS_x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4t8tre7kekadh7wp05uj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mL-toS_x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4t8tre7kekadh7wp05uj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, you have to choose your region and the corresponding S3 bucket, which we just created before. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eGna2vge--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1d8r2hfi08y3cfsbphje.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eGna2vge--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1d8r2hfi08y3cfsbphje.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's it! Now the subdomain "www" is mapped to the S3 bucket, which will redirect every request to "&lt;a href="https://mydomain.com"&gt;https://mydomain.com&lt;/a&gt;". &lt;/p&gt;

</description>
      <category>aws</category>
      <category>redirects</category>
      <category>s3</category>
    </item>
    <item>
      <title>How to speed up your daily Docker builds</title>
      <dc:creator>Robert Reiz</dc:creator>
      <pubDate>Thu, 14 May 2020 16:38:38 +0000</pubDate>
      <link>https://dev.to/reiz/how-to-speed-up-your-daily-docker-builds-pkp</link>
      <guid>https://dev.to/reiz/how-to-speed-up-your-daily-docker-builds-pkp</guid>
      <description>&lt;p&gt;Nowadays, there is no way around Docker. It's a great technology that ensures that your software works on production the same as on your laptop. &lt;/p&gt;

&lt;p&gt;If you have a medium size Node.js or Ruby on Rails project with a handful of dependencies, this command can take a couple of minutes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; orga/project:1.0.0 &lt;span class="nb"&gt;.&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Especially if you have dependencies with native extensions like libxml or sassc, the build will take a long time.&lt;/p&gt;

&lt;p&gt;The build time can be reduced dramatically if you use a Docker base image which already includes the majority of the needed dependencies. That's why, in most of my projects, I have a directory called &lt;code&gt;docker_base&lt;/code&gt;, which contains a Dockerfile and the package manager files for my base image. For a typical Ruby on Rails project the Dockerfile would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ruby:2.5-alpine&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; RAILS_ENV=production&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /usr/src/app_base&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; docker_base/Gemfile .&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json .&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;apk update&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apk add build-base&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apk add libxml2-dev&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apk add libxslt-dev&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apk add ruby-nokogiri&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apk add yarn&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apk add tzdata&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    yarn &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--production&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    bundle config build.nokogiri &lt;span class="nt"&gt;--use-system-libraries&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    bundle &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--without&lt;/span&gt; development &lt;span class="nb"&gt;test&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; 

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The first &lt;code&gt;COPY&lt;/code&gt; command adds a Gemfile with the most important dependencies. This Gemfile does not include private (closed source) dependencies. That way the Docker base image can later be hosted on a public Docker Hub repository. The second &lt;code&gt;COPY&lt;/code&gt; command adds the package.json file for the frontend dependencies. &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;RUN&lt;/code&gt; command does all the hard work which we want to avoid on our daily builds: it installs all the native system libraries required by the Ruby and Node dependencies. In the last 3 lines we finally install all the Node and Ruby dependencies. &lt;/p&gt;

&lt;p&gt;Let's build the base image like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; versioneye/base-web:1.0.0 &lt;span class="nb"&gt;.&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now the regular Dockerfile in the root directory of the project can inherit from this base image. The first line of that Dockerfile would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; versioneye/base-web:1.0.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The dependencies in your Gemfile and package.json might change every day. That's why it's important that you run the install steps again in your main Dockerfile! That way, newly added dependencies will be installed too. However, the build time shrinks dramatically because the base image already contains the majority of the dependencies. In my case I could reduce the build time from 5 minutes to under 1 minute! &lt;/p&gt;
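
&lt;p&gt;A main Dockerfile building on the base image might be sketched like this (the work directory, the lockfiles, and the Puma command are assumptions for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM versioneye/base-web:1.0.0

WORKDIR /usr/src/app

# run the install steps again so newly added dependencies are picked up;
# this is fast now because most of them already live in the base image
COPY Gemfile Gemfile.lock package.json ./
RUN yarn install --production=true; \
    bundle install --without development test;

COPY . .

CMD bundle exec puma -C config/puma.rb
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;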

&lt;p&gt;What do you think about this method? Do you have another trick to reduce the build time of your daily Docker build? &lt;/p&gt;

</description>
      <category>docker</category>
      <category>ruby</category>
      <category>node</category>
    </item>
  </channel>
</rss>
