This article was originally published on Codica Blog.
When it comes to hosting static websites on a platform like Amazon Web Services, GitLab CI and the AWS tooling make it easy to automate the deployment process.
In this article, we want to share our experience of deploying a project's static files to Amazon S3 (Simple Storage Service) with the help of GitLab CI (Continuous Integration) and ACM (AWS Certificate Manager) for SSL-based encryption.
More precisely, we will discuss the process of deploying static sites to Amazon Web Services: storing files on S3 and distributing them with CloudFront.
Glossary of terms
Before going ahead with the detailed guide, we would like to explain some of the terms that you will come across in this article.
Simple Storage Service (S3) is a web service offered by AWS. Basically, it is cloud object storage that allows uploading, storing, downloading, and retrieving almost any file or object. At Codica, we use this service to upload files of static websites.
CloudFront (CF) is a fast content delivery network (CDN) with globally distributed proxy servers. It serves content from an origin such as an S3 bucket: you create a distribution and point it at the S3 bucket or another source of your choice.
AWS Certificate Manager (ACM) is a service that provisions and manages free public and private SSL/TLS certificates. In our development practice, we use it to serve static files through Amazon CloudFront distributions over HTTPS, securing all network communications.
Identity and Access Management (IAM) lets you create entities in AWS that represent the people or applications interacting with your account. We create an IAM user to permit GitLab to access and upload data to our S3 bucket.
Configuring AWS account (S3, CF, ACM) and GitLab CI
We assume that you already have an active GitLab account. Now you need to sign up/in to your AWS profile to get access to the instruments mentioned above.
If you create a new profile, you automatically fall under Amazon's Free Tier, which covers basic S3 usage during the first year. However, you should be aware that the Free Tier comes with certain usage limits.
1. Setting up an S3 Bucket
To set up S3, go to the S3 management console, create a new bucket, type in a name (e.g., yourdomainname.com), and choose a region. Leave the remaining settings at their defaults.
After that, allow public access in the new bucket's settings. This way you make the website files accessible to users.
When permissions are set to public, move to the Properties tab and select the Static website hosting card. Tick the box “Use this bucket to host a website” and type your root page path (index.html by default) into the “Index document” field. Also, fill in the required information in the “Error document” field.
Finally, offer permissions to your S3 bucket to make your website visible and accessible to users. Go to the Permissions tab and click Bucket policy. Insert the following code snippet in the editor that appears:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::yourdomainname.com/*"
    }
  ]
}
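If you prefer the command line, the same setup can be scripted with the AWS CLI. The sketch below is an assumption-laden example, not part of the original console flow: the bucket name is a placeholder, and the `aws` calls (commented out) require credentials that are already configured on your machine.

```shell
# Generate the public-read bucket policy locally.
# "yourdomainname.com" is a placeholder for your actual bucket name.
BUCKET=yourdomainname.com
cat > bucket-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${BUCKET}/*"
    }
  ]
}
EOF

# Sanity-check that the file is valid JSON before applying it.
python3 -m json.tool bucket-policy.json > /dev/null && echo "policy OK"

# Apply the policy and enable static website hosting (requires AWS credentials):
# aws s3api put-bucket-policy --bucket "$BUCKET" --policy file://bucket-policy.json
# aws s3 website "s3://$BUCKET" --index-document index.html --error-document error.html
```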
2. Creating an IAM user that will upload content to the S3 bucket
At this stage, you should create an IAM user that can access and upload data to your bucket. To accomplish this, open the IAM management console, go to Policies, and create a new policy with a name of your choice.
Then add the following JSON. Do not forget to replace the 'Resource' field value with the ARN of the bucket you created. This policy lets the user read, upload, and delete objects in your bucket.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::yourdomainname.com/*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "*"
    }
  ]
}
The next step is creating the user itself. Tick Programmatic access in the access type section and attach the newly created policy to the user.
Finally, click the 'Create user' button. You will be shown two important values: the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY credentials.
If you close the page, you will lose access to the AWS_SECRET_ACCESS_KEY. That is why we recommend that you write down the key or download the .csv file.
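The same user setup can be scripted. This is a hedged sketch: the user name gitlab-deployer and policy name gitlab-s3-deploy are hypothetical, the bucket name is a placeholder, and the `aws iam` calls (commented out) require admin credentials.

```shell
# Generate the uploader policy locally (bucket name is a placeholder).
BUCKET=yourdomainname.com
cat > iam-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::${BUCKET}/*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "*"
    }
  ]
}
EOF
python3 -m json.tool iam-policy.json > /dev/null && echo "policy OK"

# CLI equivalent of the console steps (requires admin credentials);
# "gitlab-deployer" and "gitlab-s3-deploy" are hypothetical names:
# aws iam create-policy --policy-name gitlab-s3-deploy --policy-document file://iam-policy.json
# aws iam create-user --user-name gitlab-deployer
# aws iam attach-user-policy --user-name gitlab-deployer --policy-arn <arn from create-policy>
# aws iam create-access-key --user-name gitlab-deployer   # prints the key pair once
```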
3. Setting up GitLab CI configuration
In the next stage of hosting the website on Amazon, you need to set up the deployment of your project to the S3 bucket. This requires configuring GitLab CI correctly. Log in to your GitLab account and navigate to the project. Click Settings, go to the CI / CD section, and expand Variables. Here enter all the required variables, namely:
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_REGION
- S3_BUCKET_NAME
- CDN_DISTRIBUTION_ID
You do not have a CDN_DISTRIBUTION_ID variable yet, but that is not a problem: you will get it after creating the CloudFront distribution.
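Once the distribution exists (see step 4 below), its ID can also be looked up from the command line rather than the console. A sketch, assuming the AWS CLI is installed and credentials are configured:

```shell
# List distribution IDs alongside their origin domain names,
# so you can pick the one pointing at your bucket:
aws cloudfront list-distributions \
  --query "DistributionList.Items[].{Id: Id, Origin: Origins.Items[0].DomainName}" \
  --output table
```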
After that, you need to tell GitLab how your website should be deployed to AWS S3. This is done by adding a .gitlab-ci.yml file to your app's root directory. Simply put, GitLab Runner executes the scenarios described in this file.
Let's now get familiar with .gitlab-ci.yml and discuss its content step by step:
image: docker:latest

services:
  - docker:dind
A Docker image is a read-only template that contains the instructions for creating a container. Here we specify the latest Docker image as the basis for executing jobs.
stages:
  - build
  - deploy

variables:
  # Common
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  AWS_REGION: $AWS_REGION
  S3_BUCKET_NAME: $S3_BUCKET_NAME
  CDN_DISTRIBUTION_ID: $CDN_DISTRIBUTION_ID
In the code snippet above, we specify the stages our CI/CD process passes through (build and deploy) together with the variables they require.
cache:
  key: $CI_COMMIT_REF_SLUG
  paths:
    - node_modules/
Here we cache the contents of node_modules/ so that later jobs can take the needed packages from the cache instead of downloading them again.
######################
## BUILD STAGE ##
######################
Build:
  stage: build
  image: node:11
  script:
    - yarn install
    - yarn build
    - yarn export
  artifacts:
    paths:
      - build/
    expire_in: 1 day
At the build stage, we build the project and save the results in the build/ folder. The artifacts are kept for one day.
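Note that yarn build and yarn export only work if matching scripts exist in your package.json. As an illustration, for a Next.js project (the framework choice here is an assumption, not something the pipeline dictates) the scripts section might look like this:

```json
{
  "scripts": {
    "build": "next build",
    "export": "next export -o build"
  }
}
```

If your framework has no separate export step, drop the `yarn export` line and make sure the build output lands in the build/ folder.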
######################
## DEPLOY STAGE ##
######################
Deploy:
  stage: deploy
  when: manual
  before_script:
    # Newer Alpine images ship Python 3 only, so install python3/py3-pip.
    - apk add --no-cache curl jq python3 py3-pip
    - pip3 install awscli
    # The AWS CLI reads AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY from the
    # environment, so no separate login step is needed for S3 or CloudFront.
In the before_script parameter, we specify the dependencies that must be installed for the deployment process.
  script:
    - aws s3 cp build/ s3://$S3_BUCKET_NAME/ --recursive --include "*"
    - aws cloudfront create-invalidation --distribution-id $CDN_DISTRIBUTION_ID --paths "/*"
The script parameter deploys the project changes to your S3 bucket and invalidates the CloudFront cache so the distribution picks up the updated files.
When it comes to our development practice, there are two stages in our CI/CD process: build and deploy. During the first stage, we build the project and save the results in the build/ folder. At the deployment stage, we upload the build results to the S3 bucket and invalidate the CloudFront distribution's cache.
4. Creating CloudFront Origin
When you upload the important changes to S3, your final goal is to distribute content through your website pages by means of CloudFront. Let’s specify how this service works.
When users visit your static website, CloudFront offers them a cached copy of an application stored in different data centres all over the world.
Let’s assume that users open your website from the east coast of the USA. CloudFront will deliver the website copy from one of the servers there (New York, Atlanta, etc). This way, the service decreases the page load time and improves the overall performance.
To start with, navigate to the CloudFront dashboard and click the 'Create Distribution' button. Then type your S3 bucket endpoint into the 'Origin Domain Name' field; the Origin ID will be filled in automatically.
After that, move to the next section and select the 'Redirect HTTP to HTTPS' option under Viewer Protocol Policy. This way, you ensure the website is served over SSL.
Then, enter your real domain name within Alternate Domain Names (CNAMEs) field. For example, www.yourdomainname.com.
By default, you get a CloudFront SSL certificate, so your website address will end with the .cloudfront.net domain.
If you need a custom SSL certificate for your own domain, click the 'Request or Import a Certificate with ACM' button.
Switch your region to us-east-1 (CloudFront only accepts ACM certificates issued in this region), navigate to AWS Certificate Manager, and request a certificate for the desired domain name.
To confirm that you own the domain name, go to your DNS settings and add the CNAME record that ACM provides.
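The same certificate request can be made from the CLI. A sketch, assuming configured credentials; the domain name is a placeholder and the certificate ARN must be copied from the first command's output:

```shell
# Request a certificate with DNS validation (region must be us-east-1):
aws acm request-certificate \
  --domain-name yourdomainname.com \
  --subject-alternative-names www.yourdomainname.com \
  --validation-method DNS \
  --region us-east-1

# Print the CNAME record you need to add to your DNS settings:
aws acm describe-certificate --certificate-arn <arn from previous output> \
  --region us-east-1 \
  --query "Certificate.DomainValidationOptions[].ResourceRecord"
```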
As soon as an SSL certificate is generated, choose the “Custom SSL Certificate” in this section.
At last, leave the remaining parameters set by default and click the ‘Create Distribution’ button.
This way, a new CloudFront distribution is created and propagated to all AWS edge locations within about 15 minutes. You can open the dashboard and watch the State field, which shows whether the distribution is pending or enabled.
As soon as the provisioning process is completed, the State field's value changes to Enabled. After that, you can visit the website by entering the distribution's domain name in the address bar.
Final thoughts
We were happy to share our practices on AWS web hosting and deploying static sites to Amazon (storing files on S3 and distributing them with CloudFront) using GitLab CI.
Read the full version of this article or check our other articles to get more tips on adopting the latest web app development techniques.
Top comments (5)
If the bucket has CloudFront in front of it and you want to require HTTPS access to the content (which S3 by itself doesn't support for website hosting), then it's not necessary to make the S3 bucket public.
You can make use of a CloudFront Origin Access Identity so that only CloudFront can access your bucket and direct access is blocked.
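A sketch of such a locked-down policy is below. The OAI ID is a placeholder you would obtain from `aws cloudfront create-cloud-front-origin-access-identity`, and the bucket name is hypothetical; the final `aws` call is commented out because it requires credentials.

```shell
# Placeholder values: replace with your bucket name and your real OAI ID.
BUCKET=yourdomainname.com
OAI_ID=E2EXAMPLEOAIID
cat > oai-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontOAIReadOnly",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity ${OAI_ID}"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${BUCKET}/*"
    }
  ]
}
EOF
python3 -m json.tool oai-policy.json > /dev/null && echo "policy OK"

# Apply with (requires AWS credentials):
# aws s3api put-bucket-policy --bucket "$BUCKET" --policy file://oai-policy.json
```

With this policy in place, you can also turn on Block Public Access for the bucket, since only the CloudFront identity needs read access.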
Hello Andrew,
Thanks for your interest in our article and valuable comments.
We agree, it is not necessary to set the S3 bucket permissions to public. That works if you serve the site over HTTPS through CloudFront with an Origin Access Identity.
Best,
Codica Team
Is yarn export a valid command? I have it in package.json as
I am getting an error in yarn run v1.22.5: error Command "export" not found.
Works like a charm! thanks!
A little change that I had to apply was changing the apk install of python:
apk add --no-cache curl jq python3 py3-pip
Thanks!
The new Alpine Docker image throws an error with python2, so you need to change the python install line in .gitlab-ci.yml to