Robert Reiz

Deploy Rails static assets to CloudFront CDN - during Docker build time

Ruby on Rails is a great framework to build modern web applications. By default, all the static assets like CSS, JavaScript, and images are served directly from the Ruby server. That works fine but doesn't offer the best performance. A Ruby application server like Puma or Unicorn is not optimized to serve static assets. A better choice would be to serve the static assets from an Nginx instance. And even better than Nginx would be to serve the static assets from a CDN (Content Delivery Network). CloudFront is the CDN from Amazon. If you are using AWS anyway, that's your go-to CDN.

Assumptions

This article assumes:

  • Basic knowledge of Ruby, Git, GitHub Actions, Docker, and the Rails framework
  • The application uses a GitHub Action for deployment
  • The application uses AWS ECS Fargate for running Docker containers
  • The application uses AWS S3 and AWS CloudFront for static assets
  • The AWS infrastructure is already set up and not a topic of this article

S3 and CloudFront

A CloudFront distribution is always linked to an origin, in our case an S3 bucket. CloudFront then serves and caches the content of that S3 bucket. That means that during deployment we need to upload our static assets to the S3 bucket which is linked to our CloudFront distribution.
If you want to learn how to correctly set up CloudFront & S3, read this article or the official AWS docs.

Rails configuration

In your Rails application under config/environments/production.rb you can configure an asset host like this:

# Enable serving of images, CSS, and JS from an asset server.
  config.action_controller.asset_host = ENV['RAILS_ASSET_HOST']

In the example above the asset host is pulled from the ENV variable RAILS_ASSET_HOST, which is set during deployment. If you deploy your Rails application to ECS Fargate, you will have an ECS task-definition.json for your application somewhere. In the environment section of that task-definition.json, you would set the RAILS_ASSET_HOST ENV variable like this:

{ 
  "name" : "RAILS_ASSET_HOST", 
  "value": "https://d32v8iqllp6n8e.cloudfront.net/"
} 

The ENV variable points directly to your CloudFront CDN URL.
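
To make the effect visible: with that variable set, the standard Rails asset helpers prefix all generated asset URLs with the CloudFront host. A minimal sketch (the digest in the file name is just a placeholder, and the rendered tag is only approximate):

<%# app/views/layouts/application.html.erb %>
<%= stylesheet_link_tag "application" %>

<%# With RAILS_ASSET_HOST set, this renders roughly: %>
<%# <link rel="stylesheet" href="https://d32v8iqllp6n8e.cloudfront.net/assets/application-<digest>.css" /> %>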

GitHub Action

GitHub Actions are a great way to trigger tests, builds, and deployments. Your GitHub Action configuration in .github/workflows/aws-test-deploy.yml might look like this:

on:
  push:
    branches: 
    - aws/test

name: Deploy to AWS Test Cluster

jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest

    steps:
    - name: Checkout
      uses: actions/checkout@v2

    - name: Configure AWS credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: eu-central-1

    - name: Login to Amazon ECR
      id: login-ecr
      uses: aws-actions/amazon-ecr-login@v1

The above configuration tells GitHub to trigger the Action on each git push to the aws/test branch. The Action runs on the latest Ubuntu runner. The current source code is checked out from the Git repository onto that runner. Furthermore, the aws-actions module is configured with the AWS credentials we stored in the repository's GitHub secrets.

The next section of the config file contains the important part:

- name: Build, tag, and push image to Amazon ECR
  id: build-image
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: ve/web-test
    IMAGE_TAG: ${{ github.sha }}
    TEST_AWS_CONFIG: ${{ secrets.TEST_AWS_CONFIG }}
    TEST_AWS_CREDENTIALS: ${{ secrets.TEST_AWS_CREDENTIALS }}
    TEST_RAILS_MASTER_KEY: ${{ secrets.TEST_RAILS_MASTER_KEY }}
  run: |
    mkdir .aws
    echo "$TEST_AWS_CONFIG" > .aws/config
    echo "$TEST_AWS_CREDENTIALS" > .aws/credentials
    echo "$TEST_RAILS_MASTER_KEY" > config/master.key
    echo "test-ve-web-assets" > s3_bucket.txt
    docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
    docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
    echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"

Here we set a bunch of ENV variables for the ECR Docker Registry on AWS and the image tag name, which will be equal to the latest commit SHA of the current branch.

Then we set the ENV variable TEST_AWS_CONFIG to the value of the GitHub secret ${{ secrets.TEST_AWS_CONFIG }}, which contains the regular AWS config file content for the current runtime. On your local machine, you find that file under ~/.aws/config. Usually, it looks like this:

[default]
region = eu-central-1
output = json

Then we set the ENV variable TEST_AWS_CREDENTIALS to the value of the GitHub secret ${{ secrets.TEST_AWS_CREDENTIALS }}, which contains the AWS credentials for the current runtime. On your local machine, you find that file under ~/.aws/credentials. Usually, it looks like this:

[default]
aws_access_key_id = ABCDEF123456789
aws_secret_access_key = abcdefghijklmno/123456789
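
How you get these values into the GitHub secret store is up to you. One possible way (a sketch using the GitHub CLI, assuming it is authenticated and you run it inside the repository) is to pipe the local files straight into the secrets:

# Create the repository secrets from the local AWS files and the Rails master key
gh secret set TEST_AWS_CONFIG < ~/.aws/config
gh secret set TEST_AWS_CREDENTIALS < ~/.aws/credentials
gh secret set TEST_RAILS_MASTER_KEY < config/master.key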

The TEST_AWS_CREDENTIALS variable has to contain AWS credentials that have permission to upload files to our corresponding S3 bucket.
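
What "permission to upload" boils down to depends on your account setup. As a rough sketch (not a definitive policy; the bucket name is the one used in the workflow above), an IAM policy for that user could look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAssetUpload",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::test-ve-web-assets",
        "arn:aws:s3:::test-ve-web-assets/*"
      ]
    }
  ]
}

The s3:PutObjectAcl action is in there because the sync commands later in the Dockerfile use --acl public-read.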

In the run section we write the content of TEST_AWS_CONFIG into the file .aws/config in the current working directory, the content of TEST_AWS_CREDENTIALS into .aws/credentials, and the name of the S3 bucket ("test-ve-web-assets") into the file s3_bucket.txt.
With that, all the necessary credentials for the S3 upload are in the current working directory, and we can start to build our Docker image with docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .

Dockerfile

Our Dockerfile describes a so-called multi-stage build. Multi-stage builds are a great way to get rid of Docker layers that contain sensitive information, such as AWS credentials.

Our Dockerfile starts like this:

FROM versioneye/base-web:1.2.0 AS builderAssets

WORKDIR /usr/src/app_build

COPY .aws/config /root/.aws/config
COPY .aws/credentials /root/.aws/credentials
COPY . .

As base image we start with versioneye/base-web:1.2.0, which is a preconfigured Alpine Docker image with some preinstalled Ruby & Node dependencies. It's based on the ruby:2.7.1-alpine Docker image.
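
The base image itself is not part of this article. Just to give an idea of what such an image provides, it could be built roughly like this (a guess for illustration, not the actual versioneye/base-web Dockerfile):

# Hypothetical sketch of such a base image - NOT the real versioneye/base-web Dockerfile
FROM ruby:2.7.1-alpine

# Build tools, git, timezone data, plus Node.js and Yarn for the asset build
RUN apk add --no-cache build-base git tzdata nodejs yarn

# Bundler for the Ruby dependencies
RUN gem install bundler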

We copy our .aws/config to /root/.aws/config and our .aws/credentials to /root/.aws/credentials, because the AWS CLI looks for those files in that location by default.

We copy all files from the current git branch to the working directory in the Docker image at /usr/src/app_build. Now we have all files in place inside the Docker image.

As the next step we need to install the AWS CLI:

# Install AWS CLI
RUN apk add python3; \
    apk add curl; \
    mkdir /usr/src/pip; \
    (cd /usr/src/pip && curl -O https://bootstrap.pypa.io/get-pip.py); \
    (cd /usr/src/pip && python3 get-pip.py --user); \
    /root/.local/bin/pip install awscli --upgrade --user;

Now the AWS CLI is installed and the AWS credentials are at the right place. With the next step we will:

  • delete unnecessary files from the current working dir.
  • install NPM dependencies
  • install Gem dependencies
  • precompile the static Rails assets
  • upload the static Rails assets to our S3 bucket

# Compile assets and upload to S3
RUN rm -Rf .bundle; \
    rm -Rf .aws; \
    rm -Rf .git; \
    rm bconfig; \
    yarn install; \
    bundle config set without 'development test'; \
    bundle install; \
    NO_DB=true rails assets:precompile; \
    /root/.local/bin/aws s3 sync ./public/ s3://`cat s3_bucket.txt`/ --acl public-read; \
    /root/.local/bin/aws s3 sync ./public/assets s3://`cat s3_bucket.txt`/assets --acl public-read; \
    /root/.local/bin/aws s3 sync ./public/assets/font-awesome s3://`cat s3_bucket.txt`/assets/font-awesome --acl public-read; \
    /root/.local/bin/aws s3 sync ./public/packs s3://`cat s3_bucket.txt`/packs --acl public-read; \
    /root/.local/bin/aws s3 sync ./public/packs/js s3://`cat s3_bucket.txt`/packs/js --acl public-read;

In the last 5 lines we simply use the AWS CLI to sync files from inside the Docker image to the S3 bucket that is defined in the s3_bucket.txt file. The AWS CLI as it runs here on Alpine Linux didn't handle recursive uploads for us, which is why we run the sync command for each directory separately.
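
If you want to double-check that everything landed in the bucket, you can list the uploaded files afterwards, for example from your local machine (a small optional check, assuming the bucket name from above):

# List a few of the uploaded assets (bucket name taken from s3_bucket.txt)
aws s3 ls s3://test-ve-web-assets/assets/ --recursive --human-readable | head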

Now the static Rails assets are uploaded to AWS S3/CloudFront. But the AWS credentials are still stored in the Docker layers. If we publish that Docker image to a public Docker registry, somebody could fish out the AWS credentials from the Docker layer and compromise our application. That's why we are using Docker multi-stage builds to prevent that from happening.
The next part of the Dockerfile looks like this:

FROM versioneye/base-web:1.2.0 as builderDeps

COPY --from=builderAssets /usr/src/app_build /usr/src/app

WORKDIR /usr/src/app

RUN yarn install --production=true; \
    bundle config set without 'development test'; \
    bundle install;

EXPOSE 8080

CMD bundle exec puma -C config/puma.rb

The above lines start pretty much a completely new Docker build. We start again from our Docker base image and copy all files from the previous build stage, from /usr/src/app_build into /usr/src/app in the current stage. We install the dependencies for Node.js and Ruby again, expose port 8080, and set the run command with CMD.

The Docker image we get out of this does NOT include any AWS credentials, and no AWS CLI either! It contains only the application code and the corresponding dependencies.
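
If you want to verify that, you can inspect the layer history of the final image. Only the layers of the second stage show up there; the COPY of the .aws files and the S3 upload from the first stage are not part of it (a quick optional check, using the image tag we built above):

# Show the layers of the pushed image; the first build stage is not part of it
docker history --no-trunc $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG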

Summary

We are using a Docker multi-stage build to upload static files to S3 and to leave no traces behind. We use the first stage to install the AWS CLI, to put the AWS credentials in place, and to perform the actual upload of the static assets to S3/CloudFront.

We use the second stage to install the application dependencies and to configure the port and the CMD command. The Docker image we get after the second stage does NOT include any AWS secrets and no AWS CLI.

Let me know what you think about this strategy. Do you find it useful? Any improvements?

Top comments (7)

Tasos Latsas

Thank you Robert for the great article! I am using a very similar setup and approach for the webpack(er) generated assets.

I wrote a gem called webpacker_uploader which parses the contents of the generated manifest.json file and then uploads all the file's entries to S3 using the aws-sdk-s3 gem.

By using this gem as an abstraction we can get away without storing credentials in .aws/credentials. We just call WebpackerUploader.upload!(provider) in a Rake task and we pass in the credentials through ENV variables from the CI. However, this approach will not work if you need to upload assets that are not present in the manifest file, as we rely on its contents.

Robert Reiz

Hey Tasos! Thanks for the pointer to webpacker_uploader. That looks interesting!

Raj Bhandari

Hey Robert, Thank you for sharing this comprehensive guide.
I am also trying to solve a similar problem (serving bundled JS and CSS via a CDN).
I am wondering why I would need an S3 bucket - as the Rails server can act as the origin server for the CDN. There would be fewer moving parts (no S3 bucket to keep in sync and no test/staging/production environment buckets to worry about). Also, if I have those assets packaged within a Docker image, I would have a nice self-contained artifact that would just work with or without the CDN or S3. It also means versioning across different branches and deployment environments would just work, with the CDN calling back to the same origin server that is producing the URLs to these assets.
What am I missing?

Robert Reiz

Hey Raj,
The reason is performance. The Rails server is not optimised to deliver static content. If all your HTML pages point to a CDN, you take away a lot of traffic from your application servers. And as Danny Brown commented, you can update the static assets independently from your APIs.

Danny Brown

The biggest win, IMO, is you can deploy static asset updates independently. No contract changes from top level components or assets? Simply push the updated static assets; no need to actually build & deploy the container image

Yerassyl Diyas

Thank you for this article. But I have a question: what happens if for some reason one of the s3 sync commands fails, or, say, during the upload 1 or 2 files were not uploaded? Can we catch such issues?

Robert Reiz

That's a good question. To be honest, I didn't think about that one. I would assume that the AWS CLI retries/resumes, and if that doesn't work, that it exits with exit code 1 so that the whole build fails. But I did not really test it. What is your experience on that topic?