Hannah Cross

How I deployed a Gatsby site to AWS S3


Choosing Gatsby.js

At work we are rethinking our front-end properties and the tools we use for them.

Our existing codebase is unnecessarily complex given our current needs, so we looked for a quick and efficient way to get our primarily static pages out to the web.

Gatsby came out as a good option as it is fast, easy to configure, and quick to edit and update. It has plenty of plugins which enable seemingly seamless integrations with AWS resources. The build-time rendering is also good for speed and ease.

As part of a proof of concept exercise, I rebuilt part of our website using Gatsby.js, an AWS S3 bucket, AWS CloudFront and Serverless Framework.

Spoiler - IT WAS A SUPER FRIENDLY EXPERIENCE (with a few rough edges on the AWS side...)


Tools

AWS:

  • AWS CLI
  • AWS S3
  • AWS CloudFront

Gatsby:

  • gatsby-cli (I kicked off with a default starter)
  • gatsby-source-dynamodb

Serverless:

  • serverless
  • serverless-finch
  • serverless-cloudfront-invalidate

Building a Gatsby site

First off, I installed the Gatsby CLI globally:

yarn global add gatsby-cli

Then, to make my new Gatsby project:

gatsby new name-of-your-site

which will create a directory with a default starter in it.

You can immediately get things running locally:

cd name-of-your-site
gatsby develop

You now have a Gatsby site!

More detailed documentation lives here: Gatsby Docs


Connecting your Gatsby site to a database

Gatsby plugins are SO GOOD.

In just a few lines of config my Gatsby site was up and running with data from DynamoDB. (It is just as easy to set things up with a SQL database: try gatsby-source-mysql!)

yarn add gatsby-source-dynamodb

and in gatsby-config.js:

plugins: [
  {
    resolve: "gatsby-source-dynamodb",
    options: {
      typeName: "NAME OF THIS CONNECTION",
      region: "REGION",
      params: {
        TableName: "TABLE NAME",
        // OTHER PARAMS HERE
      },
    },
  },
]

Gatsby uses GraphQL, so to access the data in a page you can create a GraphQL query which will then automagically be passed into your component as props. It would look something like this:


import React from "react";
import { graphql } from "gatsby";

const AnimalsPage = ({ data }) => {
  const animals = data.allAnimals.nodes;
  return (
    <div>
      <h1>All Animals</h1>

      <ol>
        {animals.map((animal, i) => {
          return (
            <li key={i}>
              <a href={`/animal/${animal.id}`}>{animal.name}</a>
            </li>
          );
        })}
      </ol>
    </div>
  );
};


// this data is passed into the above component as props 
export const query = graphql`
  query MyQuery {
    allAnimals {
      nodes {
        name
        id
      }
    }
  }
`;

export default AnimalsPage;



To figure out your GraphQL query you can run gatsby develop and head to the GraphiQL interface at http://localhost:8000/___graphql

You can also add config to gatsby-node.js to fetch all your data and then generate pages based on a template for each data set. So if you have a database of animals each with the same data fields you can generate pages for each animal in the same layout.

Create a template under the templates folder and reference it in the gatsby-node.js file like this:

exports.createPages = async function({ actions, graphql }) {
  // Fetch every animal from the data layer at build time
  const { data } = await graphql(`
    query MyQuery {
      allAnimals {
        nodes {
          colour
          habitat
          country
          id
          diet
          name
        }
      }
    }
  `);

  // Generate one page per animal from a single template,
  // passing each node in through context
  data.allAnimals.nodes.forEach(node => {
    const id = node.id;
    actions.createPage({
      path: `animal/${id}`,
      component: require.resolve(`./src/templates/page.js`),
      context: { node },
    });
  });
};


The data is passed into the page template through context at build time. You can instantly generate any number of repeating pages with unique values, and set the slug as well!
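
The template file itself isn't shown in this post, so here is a minimal sketch of what src/templates/page.js could look like. The node object is the one passed through context above; Gatsby hands it to the component as the pageContext prop:

import React from "react";

// Minimal template sketch: `pageContext` contains whatever we
// passed as `context` in gatsby-node.js, i.e. { node }
const AnimalPage = ({ pageContext }) => {
  const { node } = pageContext;
  return (
    <div>
      <h1>{node.name}</h1>
      <p>Habitat: {node.habitat}</p>
      <p>Diet: {node.diet}</p>
      <p>Country: {node.country}</p>
    </div>
  );
};

export default AnimalPage;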

Finally, once you have created your website, you just need to run

yarn build

and it will build your site locally, updating the public/* files.
These pages will be the files you will save in your S3 bucket!
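
As a rough guide (the exact contents depend on your site and Gatsby version), the build output looks something like this:

public/
├── index.html
├── 404.html
├── animal/        (one folder per generated page)
├── page-data/
└── static/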


Deploying your project to S3

Setup your AWS CLI and AWS profile

This post presumes you already have an AWS account. If you haven't configured a profile on your AWS CLI, you can do so quickly provided you have your access details.

Run aws configure and then populate the below fields.

AWS Access Key ID: e.g. AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key: e.g. wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name: e.g. eu-west-1
Default output format: e.g. json

This will enable you to deploy to your AWS resources from your local environment without having to play around with AWS secrets and envs in your code.

You can check whether you have already set this up by running cat ~/.aws/credentials in your command line.
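
If a profile is already set up, you'll see something like this (using the example values above):

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY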


Create an S3 Bucket

  1. Go to the AWS website and login.
  2. Navigate to services/s3.
  3. Create bucket
  4. Choose a meaningful name for your bucket (you will use this in your serverless.yml later)
  5. Choose a region close to your location
  6. Hit create
  7. Under the "Properties" tab, choose Static Website Hosting.
  8. Set the Index document and Error document to index.html. This directs all traffic to your project, which handles the incoming HTTP requests.
  9. Under "Permissions", turn off the public access block so the bucket contents can be read publicly (see the example bucket policy after this list). You can also look into setting IAM roles and restricting access with other configuration.
  10. Save all this and you have your bucket!
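
If your files still aren't publicly readable after deploying, the usual fix is a standard public-read bucket policy along these lines (your-bucket-name is a placeholder for the name you chose):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}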

Your serverless.yml

Add the plugins
yarn add serverless
yarn add serverless-finch
yarn add serverless-cloudfront-invalidate

As a side note: check your repo for duplicate .gitignore files and make sure to combine them.

You can now edit your serverless.yml file, which will seem really long and full of comments, but in the end you can get away with something as minimal as this:


service: service-name 

provider:
  name: aws

plugins:
  - serverless-finch
  - serverless-cloudfront-invalidate

custom:
  client:
    bucketName: bucket-name
    distributionFolder: public # the folder you want copied into S3


With this all set up you can run your build script

yarn build

and then run

sls client deploy

It should be quite quick and return an AWS URL where you can access your files. You will need this to set up your CloudFront Distribution.

N.B. By using serverless-finch and running sls client deploy you will not create a CloudFormation stack. It will just efficiently deploy your files to S3.


Set up CloudFront

By putting your S3 files behind CloudFront, your end users will have much faster access to these files. The files get cached in multiple locations and when your user navigates to the endpoint it will return the closest cached version of those files - nice!

You will need to log in to the AWS console, set up a CloudFront distribution and get a DISTRIBUTION ID.

  • Services/CloudFront
  • Create Distribution
  • Choose "Web Distribution"
  • Add the S3 bucket URL to the Origin Domain Name field (it should have popped up in your terminal when you ran sls client deploy)
  • Everything else can be default
  • Create Distribution!

The shorter CloudFront domain will take a few minutes to deploy, but once deployed you will be able to access your files from both URLs (S3 and CloudFront).

Importantly, this adds another step: cache invalidation.
Each time you run your deploy command you will also want to invalidate the CloudFront cache so that your newer files are served instead.

Just add the following to your serverless.yml under custom:

cloudfrontInvalidate:
  distributionId: "DISTRIBUTION ID"
  items:
    - "/index.html"
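
The example above only invalidates index.html. If you'd rather refresh everything on each deploy, CloudFront accepts a wildcard path (which counts as a single path for invalidation pricing):

cloudfrontInvalidate:
  distributionId: "DISTRIBUTION ID"
  items:
    - "/*"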

Then you can run sls cloudfrontInvalidate and it will clear your cache!
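
To keep the whole pipeline to a single command, you could chain the steps in package.json; these script names are my own suggestion rather than part of the original setup:

{
  "scripts": {
    "build": "gatsby build",
    "deploy": "yarn build && sls client deploy && sls cloudfrontInvalidate"
  }
}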


And that's it!

So this was a really basic approach to deploying your Gatsby site to AWS.

I found working with Gatsby.js so enjoyable. The documentation is friendly and there are plenty of articles and walkthroughs out there which help to clarify various use cases. My favourite part of the process was being able to connect to a database with just a few lines of config and keep the data fetching out of the page creation. I could create 700+ pages with one template and the data fetching done in 20 lines in a config file. Then, instead of generating each page on request, Gatsby builds all your files at build time, meaning that you are only ever serving static HTML files to the end user. SPEEDY!

Deploying to AWS resources is slightly more complex and I needed a hand with some of the config, but that worked out fairly simply in the end too!

There are, of course, many other ways to do this. However, hopefully this walkthrough provides a possible starting point for getting a personal project or a proof of concept up and running quickly!



Top comments (3)

Marcin Piczkowski

Hi, thanks for your post. I have an issue with CloudFront though.
It's not my project (github.com/JimLynchCodes/AWS-Deplo...) but I encountered the same problem: the router seems not to work when I deploy the site and paste the link to /page-2 directly in the browser. Do you know what it could be?
I saw this post also: hackernoon.com/hosting-static-reac...

I already redirect errors 404 and 403 to /index.html in CloudFront as many people advised.

Andrew Brown 🇨🇦

CloudFront is full of hard lessons because any mistake you make can result in waiting 20 minutes to adjust said mistake.

Hannah Cross

So true! My colleagues who had experience of this warned me and helped me walk through CloudFront to try and minimise the waiting time.