Kevin White

Deploy a Unity WebGL Package to AWS S3 Static Site with GitHub Actions

I've been learning about Unity recently and wanted to have an easily shareable artifact. It seemed like a common use case, but I ended up having to pull disparate pieces and learnings together to get it working. In this article I'll review how I set up GitHub Actions CI/CD for a Unity WebGL package deployed to an AWS S3 static site... in my case, https://unity.kwhitejr.com.

Just take me to GitHub, I'll figure it out.

The setup can easily be tweaked to accommodate other package types. This is intended to provide a baseline!

Prerequisites

  1. An AWS account.
  2. A GitHub account.
  3. A Unity account.
  4. Terraform (used for one-time setup of AWS resources).

Setup for these tools is outside the scope of this article. Some basic know-how with each tool will help avoid headaches.

Overview

[Diagram: GitHub to AWS deployment flow]
The goal is to deploy your Unity WebGL package to AWS S3 storage, accessible as a static site. This article assumes that you already have a domain; we'll deploy the package to a subdomain of it (such as https://unity.yoursite.com).

A rough outline of the steps:

  1. Build AWS resources with Terraform (one time setup).
  2. Source control your Unity project with GitHub (one time setup).
  3. Acquire a Unity personal-use license (one time setup).
  4. Use GitHub Actions to build and deploy your Unity project as a WebGL package to AWS S3 (iterable final state).

Setup AWS Resources with Terraform

In this section I'll review the setup of AWS resources to enable a subdomain: S3 for origin storage, Route53 for DNS routing, CloudFront for the CDN, and ACM for an HTTPS certificate.

This process is helpful for setting up any static site subdomain, regardless of whether you use it for Unity or not.

Terraform Modules

Terraform is a framework for deploying infrastructure as code. Because it is code, common use cases (such as static sites) can be formed into version-controlled modules. In order to set up the AWS resources for this project, I leaned on two pre-existing and well-vetted modules from CloudPosse: acm-request-certificate and cloudfront-s3-cdn.

Here are the modules in action, annotated with notes. Your mileage may vary, but I had to deploy the ACM Request Certificate module alone first, then re-run with the CloudFront S3 CDN module added. Additionally, I wrote a quick helper script to speed up iteration time; a sketch of it follows the Terraform below.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}

locals {
  domain_name     = "kwhitejr.com" # Use your own domain.
  sub             = "unity"        # Choose your subdomain
  sub_domain_name = "${local.sub}.${local.domain_name}"
}

# My domain, kwhitejr.com, is already a Route53 Zone on AWS.
# You may have extra work to get your domain setup in AWS.
data "aws_route53_zone" "main" {
  name         = local.domain_name
  private_zone = false
}
##### NOTE
# Annoyingly, these modules only created successfully 
# when doing the acm-request-certificate by itself first,
# then adding in the cloudfront-s3-cdn on a second run
#####

module "acm_request_certificate" {
  source = "cloudposse/acm-request-certificate/aws"

  # Cloud Posse recommends pinning every module to a specific version
  version                           = "0.17.0"
  domain_name                       = local.sub_domain_name
  process_domain_validation_options = true
  ttl                               = "300"
}

module "cdn" {
  source = "cloudposse/cloudfront-s3-cdn/aws"

  version = "0.84.0"
  name    = local.sub_domain_name

  # DNS Settings
  aliases                 = [local.sub_domain_name]
  dns_alias_enabled       = true
  parent_zone_id          = data.aws_route53_zone.main.zone_id
  allow_ssl_requests_only = false

  # Caching Settings
  default_ttl = 300
  compress    = true

  # Website settings
  website_enabled             = true
  s3_website_password_enabled = true
  index_document              = "index.html"
  error_document              = "index.html"

  acm_certificate_arn = module.acm_request_certificate.arn

  # NOTE: not sure why this dependency assertion doesn't seem to work.
  # Deploy of this module fails (for me) unless the ACM is deployed first and independently.
  depends_on = [module.acm_request_certificate]
}
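
As for the helper script: mine lives at scripts/make-bucket.sh (see the project structure later in this article). The exact contents aren't important; a minimal sketch, assuming the Terraform lives in terraform/make-bucket as shown later, might look like this:

#!/usr/bin/env bash
# Re-run the one-time bucket/CDN Terraform from anywhere in the repo.
set -euo pipefail

cd "$(dirname "$0")/../terraform/make-bucket"

terraform init   # no-op after the first run
terraform apply  # prompts for confirmation before changing anything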

What Did We Build and Why?

Route53 handles DNS routing; essentially, it tells the world that https://unity.yoursite.com points to your S3 bucket static site, which holds your website code (e.g. index.html). Sitting between them is a CloudFront CDN, which handles caching.

The S3 bucket is configured to be public and therefore makes the static assets available to the world... except not. Public S3 buckets are a big security no-no. The workaround is to make the bucket public but enforce password access. Only the CDN knows the password (this is automated by the CloudPosse module).
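
As I understand the module's approach, the "password" is a secret Referer header that CloudFront attaches to every origin request, and the bucket policy rejects requests that lack it. A simplified sketch of the relevant policy statement (the real policy is generated for you by the module):

{
  "Effect": "Allow",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::unitykwhitejrcom-origin/*",
  "Condition": {
    "StringEquals": { "aws:Referer": "<generated-secret>" }
  }
}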

To convince yourself, try accessing the bucket assets via the raw S3 URL. If everything has gone according to plan, then you shouldn't be able to!

Once you successfully create the AWS resources, you can verify the setup by uploading a valid index.html document into your new bucket. Next, navigate to the declared subdomain and see your new site!
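
For example, with the AWS CLI (the bucket name here is the one Terraform generated for me, also used in the pipeline below; yours will differ):

aws s3 cp ./index.html s3://unitykwhitejrcom-origin/index.html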

CAUTION. There are a couple of cache layers that may give you bad signals while iterating: the browser cache and the CDN cache. When testing your initial setup, I recommend using incognito tabs and invalidating your CloudFront cache after major changes. Hopefully you avoid banging your head like I did.
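
Invalidation is a one-liner with the AWS CLI (the distribution ID is a placeholder; find yours in the CloudFront console):

aws cloudfront create-invalidation --distribution-id YOUR_DISTRIBUTION_ID --paths "/*"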

Last but not least, you'll want to gitignore certain generated Terraform files.
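
A minimal version, based on the common community Terraform template:

# Local .terraform directories
**/.terraform/*

# State files (these can contain secrets)
*.tfstate
*.tfstate.*

# Crash log files
crash.log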

Source Control Unity Project in GitHub

The next piece of the puzzle is to source control your Unity project in GitHub. I recommend colocating your Unity project with the Terraform setup files.

Brackeys has a great video on setting up GitHub source control for your Unity project. I followed his advice on sequestering Unity files to a subdirectory. I found this made repo management much easier.

There isn't much for me to add to the process. If you follow his video, you should end up with a source-controlled Unity project colocated with your Terraform files.

// end state project structure
/MyUnityProject
  /etc
/terraform
  /make-bucket
    /main.tf
/scripts
  /make-bucket.sh
.gitignore

Acquire a Unity Personal-Use License

The GitHub Actions pipeline uses game-ci to build and package the Unity project. You will need a Unity personal-use license to utilize these capabilities. Acquiring the license is a one-time setup requirement.

game-ci provides detailed instructions on acquiring the necessary license. Essentially...

  1. Manually run the activation workflow that game-ci provides (a sketch is shown after this list).
  2. Download the manual activation file that now appears as an artifact and extract the Unity_v20XX.X.XXXX.alf file from the zip.
  3. Visit license.unity3d.com and upload the Unity_v20XX.X.XXXX.alf file.
  4. You should now receive your license file (Unity_v20XX.x.ulf) as a download. It's ok if the numbers don't match your Unity version exactly.
  5. Open GitHub > your repository > Settings > Secrets.
  6. Create the following secrets:
    • UNITY_LICENSE - (Copy the contents of your license file into here)
    • UNITY_EMAIL - (Add the email address that you use to login to Unity)
    • UNITY_PASSWORD - (Add the password that you use to login to Unity)
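
For reference, at the time of writing, the manual-activation workflow from the game-ci docs looked roughly like this (action versions may have moved on since):

name: Acquire activation file
# Run manually from the Actions tab.
on:
  workflow_dispatch: {}
jobs:
  activation:
    name: Request manual activation file
    runs-on: ubuntu-latest
    steps:
      # Produces the .alf activation request file.
      - name: Request manual activation file
        id: getManualLicenseFile
        uses: game-ci/unity-request-activation-file@v2
      # Uploads the .alf so you can download it from the run page.
      - name: Expose as artifact
        uses: actions/upload-artifact@v2
        with:
          name: ${{ steps.getManualLicenseFile.outputs.filePath }}
          path: ${{ steps.getManualLicenseFile.outputs.filePath }}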

These secrets will be consumed by the build and deploy pipeline.

Build and Deploy with GitHub Actions

All of the component pieces are now ready to be integrated into the deployment pipeline. Remember, the goal is to deploy the latest build to S3 upon merge to main.

Below is the GitHub Actions deploy pipeline, annotated with notes.

name: Build & Deploy Unity WebGL

# Fire pipeline on merge to main.
on:
  push:
    branches:
      - main
env:
  BUCKET: unitykwhitejrcom-origin # The bucket created by Terraform.
jobs:
  # The Build job primarily uses game-ci containers to test and build your package.
  build:
    name: Build For WebGL Platform
    runs-on: ubuntu-latest
    # Run this job from local Unity project directory.
    defaults:
      run:
        working-directory: EventDrivenCharacterController
    steps:
      # 1. Checkout code.
      - uses: actions/checkout@v2
        name: Checkout
      # 2. Use cache, if available.
      - uses: actions/cache@v2
        name: Cache
        with:
          path: Library
          key: Library-${{ hashFiles('Assets/**', 'Packages/**', 'ProjectSettings/**') }}
          restore-keys: |
            Library-

      # TODO: when you're ready to run tests as part of your deploy, uncomment.
      # 3. Test
      # - uses: game-ci/unity-test-runner@v2
      #   name: Test
      #   env:
      #     UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}
      #   with:
      #     githubToken: ${{ secrets.GITHUB_TOKEN }}

      # 4. Build WebGL Package
      # NOTE: you can expand target platforms to include other package types, if you like.
      - uses: game-ci/unity-builder@v2
        name: Build
        # Requires Unity-related secrets.
        env:
          UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}
          UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}
          UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}
        with:
          targetPlatform: WebGL
          projectPath: EventDrivenCharacterController
      # 5. Upload Package
      - uses: actions/upload-artifact@v2
        name: Upload
        with:
          name: Build
          path: build

  # The Deploy job primarily uses AWS containers to upload assets to S3.
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      # 1. Checkout code.
      - uses: actions/checkout@v3
        name: Checkout
      # 2. Download package from build job (the WebGL package).
      - uses: actions/download-artifact@v3
        name: Download artifacts
        with:
          name: Build
          path: build
      # 3. Configure AWS credentials. You will need to get these from IAM.
      - uses: aws-actions/configure-aws-credentials@v1
        name: Configure AWS credentials from Test account
        with:
          aws-region: us-east-1
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          # You can skip role-to-assume if your access keys contain power user credentials.
          # Best practice is least privilege,
          # where the deploy service user (the pipeline) is limited to just assuming a role
          # and the assumed role has the rights to actually create and update the assets.
          role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
          role-duration-seconds: 1200
          role-skip-session-tagging: true
      # 4. Sync assets to bucket.
      - name: "[Deploy Phase 1] Sync everything from public dir to S3 bucket"
        working-directory: build
        run: aws s3 sync ./WebGL/WebGL s3://$BUCKET --delete
      # 5. HERE BE DRAGONS - update metadata
      # "aws s3 sync" is a blunt tool.
      # Your assets will be synced, but missing necessary metadata,
      # e.g. content-type and content-encoding.
      # Although it is hacky, the next few commands iterate over certain file types
      # and re-copy them with the correct metadata.
      - name: "[Deploy Phase 2] Brotli-compressed files"
        working-directory: build
        run: |
          aws s3 cp ./WebGL/WebGL s3://$BUCKET \
            --exclude="*" --include="*.br" \
            --content-encoding br \
            --content-type="binary/octet-stream" \
            --metadata-directive REPLACE --recursive;

      - name: "[Deploy Phase 3] Brotli-compressed Javascript"
        working-directory: build
        run: |
          aws s3 cp ./WebGL/WebGL s3://$BUCKET \
            --exclude="*" --include="*.js.br" \
            --content-encoding br \
            --content-type="application/javascript" \
            --metadata-directive REPLACE --recursive;

      - name: "[Deploy Phase 4] Brotli-compressed WASM"
        working-directory: build
        run: |
          aws s3 cp ./WebGL/WebGL s3://$BUCKET \
            --exclude="*" --include="*.wasm.br" \
            --content-encoding br \
            --content-type="application/wasm" \
            --metadata-directive REPLACE --recursive;

As you can surmise, there are a couple of wrinkles with the pipeline.

  1. I'm not sure why the build contains doubled directories (/WebGL/WebGL/). I probably have some configuration slightly off, but I'm not sure what. I reached out to the Game CI team on Discord but didn't hear anything back ¯\_(ツ)_/¯
  2. The final few steps of the deploy job are suboptimal because you need to re-upload files to correct their metadata. I lifted this hack from here. It is a clever workaround, but I hope that aws s3 sync gets updated to allow similar capability. Alternatively, in theory you could use Terraform instead of the AWS CLI to perform the file uploads; the aws_s3_object resource includes the necessary metadata options (see the sketch after this list), but this route feels like more trouble than it's worth.
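
If you're curious about the Terraform route, a minimal sketch of a single-object upload with correct metadata might look like this (the key and local path are hypothetical):

resource "aws_s3_object" "wasm_br" {
  bucket           = "unitykwhitejrcom-origin"
  key              = "Build/WebGL.wasm.br"                      # hypothetical object key
  source           = "../build/WebGL/WebGL/Build/WebGL.wasm.br" # hypothetical local path
  content_encoding = "br"
  content_type     = "application/wasm"

  # Re-upload whenever the file's contents change.
  etag = filemd5("../build/WebGL/WebGL/Build/WebGL.wasm.br")
}

You'd need one resource (or a for_each over fileset()) per file, which is why it feels heavier than the CLI hack.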

Voilà. You should now be able to iterate on your Unity project and easily deploy updates to a static site to share with the world!
