Matt Morgan for AWS Community Builders

AWS CDK - One-Step S3 Websites with esbuild

I wrote previously about using aws-lambda-nodejs with esbuild to write Lambda functions in TypeScript and use inline building and bundling without the need for a separate build step.

When I first wrote that article, it was something of a curiosity to me. I was actually bundling my functions with webpack and thought that was going pretty well, but after a couple of iterations and a closer look, I was sold. The simplicity and versatility of this approach are just too good. Plus esbuild is so fast! I wanted to see if I could apply the same principles to building, bundling and deploying a UI application to an S3 Bucket website. I first looked for third-party support for such a construct, but it turns out that isn't needed: AWS CDK includes everything you need!



Want to cut the line? Okay, here's the code repo!


Let's begin with the inevitable setup junk and then make a very simple React app. Don't like React? No problem. You could use basically anything, but I'm going to use TypeScript and bundling, and React has good support for both.

Many readers will already be familiar with installing TypeScript and setting up linting and so forth. You can skip ahead to my repo to see what I did. If all this setup stuff gets you down, maybe check out projen. I'll call out a few of the dependencies I installed: esbuild, @aws-cdk/aws-s3, @aws-cdk/aws-s3-deployment, react and react-dom. Again, check the repo because there are a lot of dependencies needed.

I like to stick all those dependencies in a single package.json file at the root of my project. This is unorthodox, but it probably shouldn't be because it works quite well. More on that later.

Now I'll add a simple React app by putting an index.html file in ./website and a couple of tsx files in ./ui. These conventions aren't very important since we can import code from any part of the project, but in this app ./ui contains the source for the UI app and ./website holds the HTML index file as well as the build output.
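As a sketch of those conventions (contents assumed; check the repo for the real files), website/index.html only needs a root element and a script tag pointing at the bundle esbuild will emit into website/js:

```html
<!-- website/index.html — hypothetical minimal version -->
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>One-Step S3 Website</title>
  </head>
  <body>
    <div id="root"></div>
    <!-- the bundled output lands in website/js per the esbuild outdir option -->
    <script src="js/index.js"></script>
  </body>
</html>
```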

That's it for my React code, but since I didn't use a tool like create-react-app, I still need a way to run a local server and bundle my code.


There's a lot to like about esbuild. It's written in golang and is very fast. It's a newer project, but it's gaining features like crazy. It's not quite as mature as webpack, so it's still missing a few things like hot module reload, but workarounds exist. The docs are pretty good, but there's a lot of ground to cover for a tool like this. One thing that's easy to overlook is that esbuild ships with a local dev server. No need to bring your own!

Like other bundlers, esbuild has a command line interface as well as a way to run build scripts. I intend to use a lot of the options, so I'll go for a build script. It's possible to use JavaScript for the build script, but I'm going to write mine in TypeScript so I can be sure I use the esbuild interfaces and call signatures correctly. This is easy enough to do with ts-node, and it's exactly what I'm already doing with CDK.

There are two commands that I'll want to use: build and serve. The serve command also builds the app, so there's no need to run both together. I decided to use a single script for both since my build arguments won't change much. As this is going to be an npm script, I gave it an optional argument: pass build to run a one-off build, or omit it to serve.

import { build, BuildOptions, BuildResult, serve, ServeResult } from 'esbuild';

const mode = process.argv[2];
const env = mode === 'build' ? 'production' : 'development';

const buildOptions: BuildOptions = {
  bundle: true,
  define: { 'process.env.NODE_ENV': `"${env}"` }, // must be double-quoted
  entryPoints: ['ui/index.tsx'],
  loader: { '.js': 'tsx' },
  logLevel: 'warning',
  minify: true,
  outdir: 'website/js',
  sourcemap: true,
};

export const run = (mode: string): Promise<BuildResult | ServeResult> => {
  if (mode === 'build') {
    return build(buildOptions);
  }
  return serve({ servedir: 'website' }, buildOptions);
};

run(mode); // build or serve based on the CLI argument


The script will be called via ts-node esbuild.ts build for a build or ts-node esbuild.ts to serve. I'll wire ts-node esbuild.ts up as the start script, and the build variant goes under build:website. Again, the conventions are loose, so get your own if you don't like mine.

"scripts": {
    "build": "npm run clean && npm run build:website",
    "build:website": "ts-node esbuild.ts build",
    "clean": "rimraf cdk.out coverage website/js",
    "deploy": "npm run clean && cdk deploy",
    "lint": "eslint . --ext=.js,.ts",
    "pretest": "npm run lint",
    "start": "npm run clean && ts-node esbuild.ts",
    "test": "jest --coverage"
},

Based on the options I've chosen, I'm going to grab TypeScript files from ./ui and wind up with a bundled js file in ./website/js. This app isn't transpiling or minifying the html, but there are some plugins to do that with esbuild. I'll also use the define option to tell React I want to run in development mode when running the web server or production mode when bundling.
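That define option has a subtle requirement (hence the "must be double-quoted" comment in the build script): esbuild substitutes define values textually, so the replacement has to be a valid JavaScript expression. A tiny sketch of the difference:

```typescript
// esbuild's `define` performs a textual substitution, so the value must itself
// be a JavaScript expression: 'production' would inject a bare identifier,
// while '"production"' injects a string literal.
const env = 'production';

const asIdentifier = env;           // substitution text: production
const asStringLiteral = `"${env}"`; // substitution text: "production"

console.log(asStringLiteral); // "production"
```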

At this point I can npm start to check my progress.

localhost screenshot
Design skills!

Okay, that looks good (or like it's working anyway). I can also run npm run build and see my transpiled code under ./website/js. So the esbuild part of this is all set.

S3 Deployment

One way to do this would be to use an S3 deployer with a nice interface like CDK-SPA-Deploy and then add an npm script like npm run build && cdk deploy, but that wouldn't achieve my dream of an integrated UI build to match my Lambda function build. To get there, we'll need to dig into a couple of modules.

aws-s3-deployment is still marked experimental at the time of this writing, so bear that in mind. The concept here is similar to what we saw in aws-lambda-nodejs, but instead of automatic esbuild integration, we need to provide our own arbitrary build process. There are a couple of limitations, but it works quite well.

To begin with, we need to create an S3 bucket for website deployments.

    const websiteBucket = new Bucket(this, 'WebsiteBucket', {
      autoDeleteObjects: true,
      publicReadAccess: true,
      removalPolicy: RemovalPolicy.DESTROY,
      websiteIndexDocument: 'index.html',
    });

As a quick aside, I've enabled autoDeleteObjects, which is a recent CDK feature. CloudFormation can't empty an S3 bucket on its own, so any attempt to delete a stack containing a non-empty bucket (as one often does when experimenting) will fail, leaving you to remove the objects manually. When we provision a bucket with autoDeleteObjects enabled, a Lambda function is automatically added to our stack that triggers on stack delete and handles the cleanup. This is the same technique and framework I used to populate a database on deploy.

Creating the bucket is quite straightforward, so how do we bundle and ship the code? To do that we'll need @aws-cdk/aws-s3-deployment. This construct, which leans heavily on the core and assets constructs, will bundle code, stage it in a staging bucket, then move it to our website location.

The deployment construct expects an array of source bundles matching the ISource interface. We can produce such a bundle with Source.asset. Without a build process, we'd just give Source.asset a path to a directory or zip file, but if we look at the 2nd argument, AssetOptions, we can find an intriguing option to add bundling.

Asset bundling uses Docker to provide a runtime and command for the bundling. My experience with this on aws-lambda-nodejs is that it isn't really welcome in an environment that's already likely to be running Node.js. It makes sense for a polyglot framework like CDK to leverage Docker for various runtimes, but going without is faster (Docker adds a lot of overhead) and the Docker runtimes are a bit hard to work with. In any case, Docker is the default at the time of this writing, and we must implement tryBundle to attempt local bundling. It's a synchronous function that returns a boolean: if it returns true and files exist in the expected output directory, bundling is considered successful; if it returns false, or if no implementation is given, the Docker build is attempted.
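That fallback logic is worth internalizing, so here's a standalone sketch of the contract (the names are mine, not the actual CDK internals):

```typescript
// Sketch of the local-bundling contract: tryBundle returns a boolean, and a
// false return (or a missing implementation) falls back to the Docker build.
type LocalBundler = (outputDir: string) => boolean;

function chooseBundler(tryBundle: LocalBundler | undefined, outputDir: string): 'local' | 'docker' {
  if (tryBundle && tryBundle(outputDir)) {
    return 'local'; // local bundling succeeded; Docker never runs
  }
  return 'docker'; // no implementation or a false return: fall back to Docker
}

console.log(chooseBundler(() => true, 'cdk.out/asset'));  // local
console.log(chooseBundler(() => false, 'cdk.out/asset')); // docker
console.log(chooseBundler(undefined, 'cdk.out/asset'));   // docker
```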

With that said, here's how I implemented the bundling:

    const execOptions: ExecSyncOptions = { stdio: ['ignore', process.stderr, 'inherit'] };

    const bundle = Source.asset(join(__dirname, '../ui'), {
      bundling: {
        command: ['sh', '-c', 'echo "Docker build not supported. Please install esbuild."'],
        image: BundlingDockerImage.fromRegistry('alpine'),
        local: {
          tryBundle(outputDir: string) {
            try {
              execSync('esbuild --version', execOptions);
            } catch /* istanbul ignore next */ {
              return false;
            }
            execSync('npm run build', execOptions);
            copySync(join(__dirname, '../website'), outputDir, { ...execOptions, recursive: true });
            return true;
          },
        },
      },
    });

Regrettably, the tryBundle function signature is synchronous and cannot resolve promises, so I can't call my esbuild.ts programmatically; instead I drop to the shell for an npm script. execSync works well enough and dutifully throws errors, but it would be nice to have more options in this integration.
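The synchronous constraint is also why execSync fits: it throws on a missing command or a non-zero exit code, which is what makes the esbuild version check in tryBundle work as a boolean probe. A self-contained sketch:

```typescript
import { execSync } from 'child_process';

// execSync throws when the command can't be found or exits non-zero,
// so a try/catch turns "is this tool installed?" into a boolean.
function toolAvailable(command: string): boolean {
  try {
    execSync(command, { stdio: 'ignore' });
    return true;
  } catch {
    return false;
  }
}

console.log(toolAvailable('node --version'));             // true anywhere this script runs
console.log(toolAvailable('surely-not-a-real-tool-xyz')); // false
```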

I used copySync from the fs-extra package to copy my build output into the expected outputDir. This is a small hack as it would be better for esbuild to send its output directly to the outputDir. We could probably provide that as another argument, but esbuild is not currently copying the index.html, so some extra work would be needed in the build script to manage that. I don't mind the hack in this example code, but to grow this, I'd probably look for the build script to manage all the files.

As for the Docker build, I actually gave it a try, but my entire workspace is shared with the container, meaning its Linux OS now has my macOS node_modules, and I got permission errors when I tried to npm install. I didn't really want the Docker build anyway, so I'm just pulling an alpine image to print an error message. Another opportunity for improvement here.

These minor complaints aside, it works very well! I need only add my bucket deployment and I'm off!

    new BucketDeployment(this, 'DeployWebsite', {
      destinationBucket: websiteBucket,
      sources: [bundle],
    });

I'll also put in a CfnOutput so I have easy access to my website URL.

    new CfnOutput(this, 'webUrl', { value: websiteBucket.bucketWebsiteUrl });

There's nothing else to it. Now we npm run deploy and our site is live!

Deployed S3 Website
Looks even better on AWS


This isn't an article about monorepos or how to fullstack, but I do want to call out how my CDK and React dependencies are living in harmony in the same package.json file. Normally when I see fullstack apps, each chunk of it has its own package.json file. I get why that's the norm, but the downside is all the linting and testing and all the other dependencies that go into each of those manifests. Do you use husky or lint-staged? Okay, time to install those eight times. Perhaps not a big deal, but I really believe we can gain something from simplifying our projects. There are some real issues with the JS ecosystem, and all these node_modules directories all over the place aren't doing anyone any favors.

Okay, so that little rant was a prelude to testing. Try it. npm t runs eslint across the whole app (React code, CDK code and eventual backend/whatever code) and then runs all the unit tests as one suite.

Tests passed
I ❤️ this

The other thing I'll draw attention to is my ui-app-stack.spec.ts test which uses a Jest snapshot and matchers. I think snapshot testing is a great fit for CDK since the real purpose of CDK is to turn imperative code into CloudFormation. I can capture a snapshot of the cfn template and compare future runs to it. Of course we can write more tests on top of that, but to me the ROI of the snapshot is high here.

So what's with the matchers? Asset hashes shift every time there's a change in the code. For example, without the matchers, if I grab a snapshot and then change one character, I wind up with a failing diff that looks like this.

Snapshot diff of hashes

Matchers prevent that. They also mean my unit test doesn't notice these changes. To me, that's correct. I don't like the idea of developers getting into the mindset of blindly committing snapshots whenever they change. The diffs should be meaningful and represent intended results. For example:
Snapshot diff of removal policy

I'm heading for production and don't want my files to accidentally be deleted! This is a purposeful change which should be represented in the snapshot and cause a snapshot change. Anyway, I'm sure you see my point. I've had this discussion with others who feel the snapshot should record the hashes since that was the state of the app at the time, so YMMV. If you're in a situation with relatively low code churn, that could make a lot of sense!
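The effect of those matchers can be sketched without Jest: treat hash-like fields as wildcards before comparing, so only intentional changes produce a diff. (Illustrative only; the real test uses Jest property matchers such as expect.any(String).)

```typescript
// Sketch of what snapshot matchers accomplish: normalize fields that change on
// every build (asset hashes) so only meaningful template changes cause a diff.
type Template = Record<string, string>;

const normalize = (template: Template): Template => ({
  ...template,
  S3Key: '<any string>', // stand-in for expect.any(String) on the asset hash
});

const previousSnapshot = { S3Key: 'aaa111.zip', DeletionPolicy: 'Delete' };
const currentTemplate = { S3Key: 'bbb222.zip', DeletionPolicy: 'Delete' };

// Only the hash changed, so the normalized templates still match.
console.log(JSON.stringify(normalize(previousSnapshot)) === JSON.stringify(normalize(currentTemplate))); // true
```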

Next Steps

I'm encouraged by the amount of customization I can put into my build job in CDK and work toward a goal of building cohesive fullstack apps. There are definitely some critical things missing from this little example, like CloudFront distributions, hosted zones, etc. I'm curious if the aws-s3-deployment module is going to be powerful enough to do all of that for me or if we'll need to rely on community constructs like cdk-spa-deploy.

Another thing to puzzle out is patterns for grabbing Cfn output and bundling that into our UI applications. For example, outputting an API Gateway URL and then including that in a UI application. It's actually fairly complicated to do that because Cfn variables don't resolve until the deployment is completely finished. There are a few options out there, most of which involve either building something via Lambda or putting extra config in S3 after a build. It'll be interesting to see if this project or community can support a really ergonomic way to manage this challenge. I may bring in some of my own ideas in a future post.

