Anderson

Posted on • Originally published at mandax.com.br

Static Blog: Using Hugo, S3 and deploying with Gitlab pipeline

Cover picture by @dsmacinnes

If you want to publish a blog post just by pushing a new commit, this article was made for you. Here I'll explain how I deployed my blog using AWS, GitLab and Hugo. The same concept can be applied with CircleCI, Heroku, Azure, Zeit, GitHub, Bitbucket, Jenkins and more. You can also skip the Hugo part if you already have static content to deploy.

Requirements

  • Hugo
  • git

Required skills

  • git
  • markdown (to write the posts)

The problem

Since I started working in development, friends and coworkers have encouraged me to start writing, and one of the reasons I never did was the process of building a blog: choosing or making a layout, and everything else around that universe.

I know that today we have great communities like dev.to to write on, but to be honest, I wanted someplace to keep all my things together: not just code, but design, 3D, animations, and maybe my personal thoughts as well. I wanted the process of writing to be seamlessly incorporated into my activities as a developer, and a statically generated website that could be deployed anywhere, cheaply and with no magic involved.


The Solution

The first point of this solution is to use Git to deploy a new post without using any heavy text editor, just markdown. To do that, I needed a continuous delivery pipeline triggered by a new commit.

Then, Hugo!

Go Hugo

Install

As I'm using macOS, I installed Hugo using brew, but you can install it on any platform; just check the installation docs here.

brew install hugo

Creating a new site

To create a new website you need to run this command:

hugo new site [your-site-name]
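And since the whole point is writing posts in markdown, a new post is just one more command away. A quick sketch (posts/my-first-post.md is an example path; the content section your theme expects may differ):

    cd your-site-name
    # Scaffolds content/posts/my-first-post.md with front matter from the archetype
    hugo new posts/my-first-post.md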

Theme

Now we need to choose and install a theme for our website. The theme I chose is Manis, developed by Yurizal Susanto: a really minimalist, clean and beautiful theme, and of course I made my own modifications to it. You can check the available themes here.

To install a theme, you just need to add its repository as a submodule in your Hugo project:

git submodule add https://github.com/yursan9/manis-hugo-theme themes/manis

Hugo has a configuration file called config.toml. This file contains the configuration of our website, and the theme can have some specific configurations too, like colors, icons and pretty much anything the theme's developer put in there. This makes Hugo flexible, and themes can be really unique for a lot of different purposes.
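For reference, a minimal config.toml looks something like this (the values are placeholders; theme must match the folder name under themes/):

    # Minimal example configuration, values are placeholders
    baseURL = "https://example.com/"
    languageCode = "en-us"
    title = "My Blog"
    theme = "manis"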

To see what configurations our recently downloaded theme has, look inside the theme folder for a folder called exampleSite. As the name says, it's an example website built with the theme, and you can copy its config.toml into your project, overwriting your default config.toml.
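In practice, that copy is a single command (assuming the theme was installed to themes/manis as above):

cp themes/manis/exampleSite/config.toml config.toml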

Running

To run the project locally, just type:

hugo server
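By default Hugo skips content marked as draft: true in its front matter, so if your new post doesn't show up, the -D flag includes drafts in the local preview:

hugo server -D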

AWS - S3

Quick explanation

An S3 bucket works as storage: it's like a container to store files of any kind, and luckily AWS has a feature to serve a bucket as a website. So, by having an index.html inside it, you'll be able to see it online, and if you have a domain you can use Route 53 to point it to your bucket; check the documentation here.

Creating a bucket

To create a bucket, we just need to access the AWS console and find the S3 service section. Once inside the page, hit the blue Create Bucket button. A modal will appear so you can configure your bucket, but for this purpose we don't need any specific configuration yet; just name the bucket and we're good to go.

After creating our bucket, we need to make it publicly readable. To do so, click on the bucket's name, then on the Permissions tab, find the Bucket Policy link and add this JSON there, not forgetting to replace <your-bucket-name> with your bucket's name:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadForGetBucketObjects",
                "Effect": "Allow",
                "Principal": {
                    "AWS": "*"
                },
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::<your-bucket-name>/*"
            }
        ]
    }
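If you prefer the command line, the same policy can be applied with the AWS CLI (a sketch; it assumes the JSON above is saved as policy.json and the CLI is already configured):

aws s3api put-bucket-policy --bucket <your-bucket-name> --policy file://policy.json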

Now we need to tell AWS to host our bucket as a website. To do so, go to the Properties tab and enable Static website hosting, setting the index document to index.html. Your bucket's URL will be shown at the top of the block.
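This step also has a CLI equivalent (again a sketch, same assumptions as before):

aws s3 website s3://<your-bucket-name>/ --index-document index.html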

GitLab pipeline

Get your access key

Before the next step, we need to create an access key, which will be used by GitLab to synchronize our website files to our S3 bucket. The easiest way to do that would be to create a key based on our administrator user, but we won't do it, because… safety first, right?


The safest and correct way to do this is to create a policy which grants just enough access to sync files with our bucket.

Creating a policy

Inside the AWS console, go to Services > IAM > Customer Managed Policies > Create Policy, then click on the JSON tab and paste this text, replacing <your-bucket-name> with the name of your bucket, of course.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListObjectsInBucket",
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket"
                ],
                "Resource": [
                    "arn:aws:s3:::<your-bucket-name>"
                ]
            },
            {
                "Sid": "AllObjectActions",
                "Effect": "Allow",
                "Action": "s3:*Object",
                "Resource": [
                    "arn:aws:s3:::<your-bucket-name>/*"
                ]
            }
        ]
    }

This grants access to list the objects inside the bucket and allows all object actions (s3:GetObject, s3:PutObject, s3:DeleteObject and so on), which are necessary to synchronize files.
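As with the bucket policy, this can also be done from the CLI (a sketch; gitlab-deploy is just an example name, and the JSON above is assumed saved as gitlab-deploy-policy.json):

aws iam create-policy --policy-name gitlab-deploy --policy-document file://gitlab-deploy-policy.json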

Creating a user

Now that we have a policy, we need to assign it to a new user created specifically for the GitLab access; this way we can revoke the access without compromising our main user.

On the IAM page, add a new user with Programmatic access and choose Attach existing policies directly. A list with a search bar will be shown; type our policy's name in the search bar and add it to the user. After that, you can skip the next step, and an Access Key ID and a Secret access key will be generated. Save those keys in a temporary file.
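If you'd rather script it, the same three steps look roughly like this (gitlab-deployer and the policy ARN are hypothetical names to replace with your own; create-access-key prints the two keys we need):

    # Create a dedicated user, attach our sync policy and generate its access key
    aws iam create-user --user-name gitlab-deployer
    aws iam attach-user-policy --user-name gitlab-deployer --policy-arn arn:aws:iam::<your-account-id>:policy/gitlab-deploy
    aws iam create-access-key --user-name gitlab-deployer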

Continuous delivery

The idea here is to build the Hugo website and send it to the S3 bucket whenever a new commit is pushed to GitLab.

GitLab CI/CD

Setting the AWS access key

Inside your repository on GitLab, go to Settings > CI/CD > Variables and create two new variables:

  1. AWS_ACCESS_KEY_ID with the access key ID as value
  2. AWS_SECRET_ACCESS_KEY with the secret key as value

Then save them.

If you're asking why not put these keys inside the project or the deployment script: it's simply to not expose them in the code. Committing sensitive data like this is not recommended because, depending on the structure of the project and the team that accesses the repository, you can leak the keys. In the worst-case scenario, the project would be using the administrator key, granting full access to whoever found it.

Configuring CI/CD

GitLab is smart, and it will look inside our repository for instructions that can trigger the pipeline automatically; this file is named .gitlab-ci.yml. Let's create this file and give it two simple instructions: one to build the Hugo website and another to synchronize the built files to the S3 bucket.

.gitlab-ci.yml

    stages:
      - build
      - deploy

    cache:
      paths:
        - public

    build:
      image: orus/hugo-builder:latest
      stage: build
      only:
        - master
      before_script:
        - git submodule sync --recursive
        - git submodule update --init --recursive
      script:
        - hugo

    deploy:
      image: xueshanf/awscli:latest
      stage: deploy
      only:
        - master
      script:
        - aws s3 sync public s3://<your-bucket-name>

Quick explanation

The first job, build, uses the Docker image orus/hugo-builder:latest, which contains a version of Hugo that will build our project. This job is only triggered when a commit is pushed to the master branch, and before it runs, a script clones the submodules inside the container. Remember that the Hugo theme is a submodule, so if we just clone the repository, the submodule won't be present and the Hugo build will use the default theme or fail.

The second job, deploy, synchronizes the built files to our S3 bucket; don't forget to replace <your-bucket-name> with your bucket's name.
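One optional tweak: by default aws s3 sync only adds and updates files, so a deleted post would stay in the bucket. If you want removals mirrored too, the command supports a --delete flag:

aws s3 sync public s3://<your-bucket-name> --delete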


That’s it

Now, to deploy a new blog post, we just need to push it to the master branch. This concept can be applied in a bunch of different ways: we can extrapolate it and add more steps to our pipeline, including unit tests and a lot of other things, as sketched below.
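As a taste of that, here is a hypothetical extra stage that sanity-checks the build output before deploying (a sketch, not part of the pipeline above; it relies on the cached public folder):

    stages:
      - build
      - test
      - deploy

    # Hypothetical job: fail the pipeline if the build produced no homepage
    test:
      image: alpine:latest
      stage: test
      only:
        - master
      script:
        - test -f public/index.html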

I think this can be a good solution for a development company to deploy static pages efficiently, without any manual effort: automatically and safely.


References

Go Hugo

GitLab CI docs

Markdown Guide

AWS IAM Policies Guide

AWS S3 web hosting

Recommendations

GitLab pipeline docs

Docker

Top comments (3)

Ken Flake

This is a really good one! I am currently learning AWS concepts and essentials, and at the same time, I want to deploy some kind of a static blog site of my own. I want to learn CI/CD and I think this one will definitely guide me in the right direction. Thank you very much!!!!

Anderson

Thanks Ken!

I think GitLab CI/CD is a good place to start; it's free and easy to set up. I recommend you learn more about Docker as well. Continuous integration is not hard to understand at all, but you need to know what process you want to automate and what problems you want to solve, because you can go really crazy with integrations; the possibilities are endless.

Bruno Carneiro

wow! I love it <3