
Filipe Motta for AWS Community Builders

Originally published at filipemotta.me

Migrating a static website from Amplify to S3 and CloudFront

Introduction

In this post I am going to show you how I migrated my static website, built with the Hugo framework, from AWS Amplify to a setup that uses S3 to store the files and CloudFront to distribute the content with a valid certificate.

Since AWS Amplify is fully integrated with Hugo, the build and deploy process is almost automatic and very easy: you only need to specify the GitHub branch, grant the right permissions, and set up DNS properly, and the deployment is done.

Create Bucket

First of all, you’ll need to create an S3 bucket. To host a static website on S3 with a custom domain, the bucket name needs to match your custom domain, so I created two buckets, as shown in the image below:

buckets
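
If you prefer the command line, the same buckets can be created with the AWS CLI. This is just a minimal sketch using my domain and the us-west-1 region; replace them with your own values:

# create the bucket that matches the root domain
aws s3api create-bucket --bucket filipemotta.me \
    --region us-west-1 \
    --create-bucket-configuration LocationConstraint=us-west-1

# create the bucket that matches the www subdomain
aws s3api create-bucket --bucket www.filipemotta.me \
    --region us-west-1 \
    --create-bucket-configuration LocationConstraint=us-west-1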

After that, you need to tell AWS that this bucket will host a static web page, so you have to configure it. First, in the “Properties” tab of the bucket, enable the option called “Static website hosting”, as shown in the image below:

Static Web Site Enabled

Then, in the “Permissions” tab, add a bucket policy that grants public read access to the objects. Mine looks like this:

{
    "Version": "2012-10-17",
    "Id": "XXXXXXXXXXXX",
    "Statement": [
        {
            "Sid": "Stmt1XXXXXXX026159",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::filipemotta.me/*"
        }
    ]
}

The Id and Sid are hidden in my example. You can use the “Generate Policy” option to create it. The important fields are Effect, Principal, Action, and Resource. The Action needs to be set to “s3:GetObject”, and don’t forget to add “/*” at the end of the Resource ARN.
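
If you want to script this part as well, the equivalent setup can be done with the AWS CLI. This is a sketch assuming the policy above is saved locally as policy.json and that index.html and 404.html are your index and error documents:

# enable static website hosting on the bucket
aws s3 website s3://filipemotta.me/ \
    --index-document index.html --error-document 404.html

# attach the public-read bucket policy shown above
aws s3api put-bucket-policy --bucket filipemotta.me \
    --policy file://policy.json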

Migrating to GitHub Actions

That’s all you need to set up in your bucket. Now, since my GitHub repository had all of its integration with AWS Amplify providing continuous integration and deployment, I needed a way to upload the files to the S3 bucket and rebuild the facilities I had in AWS Amplify, such as continuous integration and deployment. To achieve this, I used GitHub Actions to automatically upload the files to the S3 bucket whenever I push code to GitHub. So, I created a file in ".github/workflows" with the following content:

name: CI/CD Upload Website

on:
  push:
    branches:
    - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@master
    - uses: jakejarvis/s3-sync-action@master
      with:
        args: --acl public-read --follow-symlinks --delete
      env:
        AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_REGION: 'us-west-1'   # optional, defaults to us-east-1
        SOURCE_DIR: 'public'      # optional, defaults to the entire repository

Every push to the main branch will upload the files to my S3 bucket (see the push and branches options). The first action checks out the code to the Ubuntu VM and the second one uploads and syncs the files to my S3 bucket. Remember that it is necessary to set up the secrets of your Git repository with your AWS credentials.

AWS Secrets
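
One thing to note: this workflow syncs the public directory, so it assumes the generated site is already committed to the repository. If you prefer to build the site inside the workflow (as Amplify did), you could add a Hugo build step before the sync. A sketch using the community peaceiris/actions-hugo action (not part of my original setup) would look like this:

    steps:
    - uses: actions/checkout@master
    - uses: peaceiris/actions-hugo@v2
      with:
        hugo-version: 'latest'
    - name: Build the site into public/
      run: hugo --minify
    - uses: jakejarvis/s3-sync-action@master
      # ...same configuration as above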

Once you have set this up and pushed the code to GitHub, the files should now be in the S3 bucket. At this step, it is a good idea to test the website access from the S3 bucket link. You can get the link in the Properties tab of the specific bucket.
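
The S3 website endpoint follows a predictable pattern based on the bucket name and region. Assuming my bucket name and the us-west-1 region, it looks like this:

http://filipemotta.me.s3-website-us-west-1.amazonaws.com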

Use CloudFront to distribute content

Once you have gained access, you have two options. The first is to use a custom domain pointing directly to the S3 bucket through your DNS, but this strategy only accepts HTTP requests. The second is to use a custom domain with a valid certificate and CloudFront to distribute the content. I chose the second one because I wanted to use HTTPS on my website.

So, the thing I had to do was create a valid certificate through AWS Certificate Manager and attach it to the CloudFront settings. To do that, I requested a certificate in AWS Certificate Manager.

Certificate

One thing to note is that, at the time I wrote this post, the only region available for creating a certificate to use with CloudFront is N. Virginia (us-east-1). So pay attention to this.
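
For reference, the same request can be made from the CLI. This is a sketch assuming my domain names and DNS validation; note the explicit us-east-1 region:

aws acm request-certificate --region us-east-1 \
    --domain-name filipemotta.me \
    --subject-alternative-names www.filipemotta.me \
    --validation-method DNS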

So, the next step is to set up the CloudFront distribution to use the certificate created earlier and the S3 bucket that is now integrated with your GitHub repository.

For now, you’ll need the “Create Distribution” option in CloudFront. The first important setting is selecting the S3 bucket in the “Origin Domain Name” option. Another important one is redirecting HTTP requests to HTTPS, as shown in the image below:

Cloud Front

Now you need to select the “Price Class” and the Alternate Domain Names according to your needs, and another important option is selecting a custom SSL certificate. The certificate that you created in the N. Virginia region should appear here, or you can import your own certificate. In my case, I chose the ACM (AWS Certificate Manager) certificate that was available. See below:

Cloud Front
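
If you were creating the distribution with the CLI or infrastructure as code instead of the console, the settings above map roughly to these fields of the distribution config (just an excerpt, not a complete config; the ARN and domain names below are placeholders):

{
    "Aliases": {
        "Quantity": 2,
        "Items": ["filipemotta.me", "www.filipemotta.me"]
    },
    "DefaultCacheBehavior": {
        "ViewerProtocolPolicy": "redirect-to-https"
    },
    "ViewerCertificate": {
        "ACMCertificateArn": "arn:aws:acm:us-east-1:XXXXXXXXXXXX:certificate/XXXXXXXX",
        "SSLSupportMethod": "sni-only",
        "MinimumProtocolVersion": "TLSv1.2_2019"
    }
}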

Set up the DNS to configure your custom domain

The last step is to set up your DNS to use your custom domain. To do this, you have to add a CNAME record pointing to the CloudFront distribution’s domain name. In my case, I pointed www.filipemotta.me to this value. Since I also wanted the root domain (filipemotta.me) to work and, as is well known, you cannot create a CNAME record for a root domain, I set up an ALIAS record on my root domain (filipemotta.me) pointing to the CloudFront domain value.
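
In zone-file terms, the records look roughly like this (the CloudFront domain below is a placeholder; ALIAS/ANAME support at the apex depends on your DNS provider, and on Route 53 it is an alias A record):

www.filipemotta.me.    CNAME    dxxxxxxxxxxxx.cloudfront.net.
filipemotta.me.        ALIAS    dxxxxxxxxxxxx.cloudfront.net.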

When I finished my setup I had a small problem accessing the subdirectories of my domain (access denied). To solve it, I had to edit my CloudFront settings: in the “Origin Domain Name and Path” option, I had to use the full S3 static website endpoint instead of the value that was auto-selected when I created the distribution. You can find this endpoint in the bucket’s Properties tab and paste it as the origin domain name. After that, I got access to all my subdirectories, and the site can now be reached over HTTPS and HTTP using S3 for the static website and CloudFront for distribution.
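
Concretely, the difference between the two origin values looks like this (assuming my bucket in us-west-1; in my understanding only the second one, the static website endpoint, serves index.html inside subdirectories):

# auto-selected REST endpoint - subdirectories returned access denied for me
filipemotta.me.s3.us-west-1.amazonaws.com

# static website endpoint from the bucket's Properties tab - works
filipemotta.me.s3-website-us-west-1.amazonaws.com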

See the discussion on Stack Overflow about this issue:

According to the discussion on AWS Developer Forums: Cloudfront domain redirects to S3 Origin URL, it takes time for DNS records to be created and propagated for newly created S3 buckets. The issue is not visible for buckets created in US East (N. Virginia) region, because this region is the default one (fallback).

Each S3 bucket has two domain names, one global and one regional, i.e:

global — {bucket-name}.s3.amazonaws.com
regional — {bucket-name}.s3.{region}.amazonaws.com
If you configure your CloudFront distribution to use the global domain name, you will probably encounter this issue, due to the fact that DNS configuration takes time.

However, you could use the regional domain name in your origin configuration to escape this DNS issue in the first place.

Source: https://stackoverflow.com/questions/38735306/aws-cloudfront-redirecting-to-s3-bucket

Now I also have full continuous integration and deployment with GitHub Actions whenever I push my code, similar to what I had with AWS Amplify.
