<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Paul Swail</title>
    <description>The latest articles on DEV Community by Paul Swail (@paulswail).</description>
    <link>https://dev.to/paulswail</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F91210%2Fb314c4a1-8563-43c1-b51f-c3a1d19ee6aa.jpg</url>
      <title>DEV Community: Paul Swail</title>
      <link>https://dev.to/paulswail</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/paulswail"/>
    <language>en</language>
    <item>
      <title>The Simple Guide to Testing within your Serverless CI/CD Pipelines</title>
      <dc:creator>Paul Swail</dc:creator>
      <pubDate>Mon, 29 Mar 2021 19:49:29 +0000</pubDate>
      <link>https://dev.to/paulswail/the-simple-guide-to-testing-within-your-serverless-ci-cd-pipelines-i</link>
      <guid>https://dev.to/paulswail/the-simple-guide-to-testing-within-your-serverless-ci-cd-pipelines-i</guid>
      <description>&lt;p&gt;Designing a testing strategy and a CI/CD pipeline for your serverless application go hand-in-hand. You can't do one without the other. And if you don't have a dedicated DevOps expert in your team, it can be hard to know what your pipeline should look like.&lt;/p&gt;

&lt;p&gt;You'll need to answer questions such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What &lt;strong&gt;environments&lt;/strong&gt; do I need?&lt;/li&gt;
&lt;li&gt;What &lt;strong&gt;types&lt;/strong&gt; of tests/checks do I need to run?&lt;/li&gt;
&lt;li&gt;For each type of test:

&lt;ul&gt;
&lt;li&gt;What purpose does it serve? What type of &lt;strong&gt;failures&lt;/strong&gt; is this meant to detect?&lt;/li&gt;
&lt;li&gt;What are its &lt;strong&gt;dependencies/prerequisites&lt;/strong&gt; for running? (e.g. code libraries, config settings, deployed resources, third-party APIs)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;When&lt;/strong&gt; should it run and against which environment?&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article, I'll help you answer these questions by walking through two relatively straightforward CI/CD workflows that I've used with my clients. For each workflow, we'll look at the different types of tests or checks that run at different stages.&lt;/p&gt;

&lt;h2&gt;Workflows overview&lt;/h2&gt;

&lt;p&gt;Here are the two workflows I use, along with associated triggers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Pull Request Workflow&lt;/strong&gt; — A new pull request (PR) is created, or an existing PR branch has new commits pushed to it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mainline Workflow&lt;/strong&gt; — A PR is merged to the main branch (or a commit is pushed directly to the main branch, if your settings allow this)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's look at each workflow in turn.&lt;/p&gt;

&lt;h2&gt;Pull Request Workflow (CI only)&lt;/h2&gt;

&lt;p&gt;This simple workflow is triggered whenever a developer creates or updates a PR in GitHub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IkP5vUG6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://serverlessfirst.com/img/blog-images/ci-pull-request-pipeline.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IkP5vUG6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://serverlessfirst.com/img/blog-images/ci-pull-request-pipeline.png" alt="GitHub Actions Pull Request workflow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The goal of this workflow is to check the quality of the code changes a developer wishes to merge before any human code review is performed, and to feed back any violations to the developer as quickly and precisely as possible.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Aside: The steps I run here are CI only with no CD. This decision is a trade-off against the complexity involved in setting up dynamic ephemeral AWS environments for each pull request, as opposed to the long-lived environments of &lt;code&gt;test&lt;/code&gt;, &lt;code&gt;staging&lt;/code&gt; and &lt;code&gt;prod&lt;/code&gt;. That said, you may well find that it's worth investing this effort for your team in order to bring the feedback on failing integration/E2E tests forward to pre-merge.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here's what gets run within this workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;npm ci&lt;/code&gt; — A variation of &lt;code&gt;npm install&lt;/code&gt; that performs a clean install of &lt;code&gt;node_modules&lt;/code&gt; from the versions specified in the &lt;code&gt;package-lock.json&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Static analysis&lt;/strong&gt;

&lt;ol&gt;
&lt;li&gt;ESLint — Ensures the code matches agreed-upon standards, keeping style consistent across the team (others have bikeshed so you don't have to!)&lt;/li&gt;
&lt;li&gt;Run TypeScript &lt;code&gt;tsc&lt;/code&gt; to check types&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;unit tests&lt;/strong&gt; using Jest. Unit tests here must run totally in memory (or filesystem) and not have any dependency on deployed resources or third-party APIs (which would make them integration/E2E tests, which we'll cover later).&lt;/li&gt;
&lt;/ol&gt;
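&lt;p&gt;&lt;em&gt;As a sketch of what an in-memory unit test target looks like (the function and its validation rules here are purely illustrative), business logic extracted from a Lambda handler into its own module can be exercised with no deployed resources at all:&lt;/em&gt;&lt;/p&gt;

```javascript
// Illustrative pure business-logic module extracted from a Lambda handler.
// It touches no deployed resources, so Jest can run it entirely in memory.
function validateClub(club) {
  const errors = [];
  if (!club.name || club.name.trim().length === 0) {
    errors.push('name is required');
  } else if (club.name.length > 100) {
    errors.push('name must be 100 characters or fewer');
  }
  return errors;
}

// A Jest unit test would simply assert on the return value:
//   expect(validateClub({ name: '' })).toContain('name is required');
module.exports = { validateClub };
```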

&lt;p&gt;I have recently started using &lt;a href="https://github.com/features/actions"&gt;GitHub Actions&lt;/a&gt; to run this workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It's really quick to configure and get working if you're already using GitHub to host your code.&lt;/li&gt;
&lt;li&gt;It delivers extremely fast feedback to developers directly within the GitHub UI where they created their PR (checks usually complete in under 30 seconds).&lt;/li&gt;
&lt;li&gt;Code reviewers can see that the automated checks have passed right within GitHub and don't need to check another CI system before starting their review.&lt;/li&gt;
&lt;li&gt;Given that I'm only running unit tests here, I don't need to worry about connecting my workflow to AWS accounts or other third party services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's the code for configuring the workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ./.github/workflows/ci.yml&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CI&lt;/span&gt;

&lt;span class="c1"&gt;# Triggers the workflow on new/updated pull requests&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;matrix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;node-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;14.x&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt; &lt;span class="c1"&gt;# set this to configured AWS Lambda Node.js runtime&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Use Node.js ${{ matrix.node-version }}&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-node@v1&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;node-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ matrix.node-version }}&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cache dependencies&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cache&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/cache@v2&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./node_modules&lt;/span&gt;
        &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;modules-${{ hashFiles('package-lock.json') }}&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install dependencies&lt;/span&gt;
      &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;steps.cache.outputs.cache-hit != 'true'&lt;/span&gt; &lt;span class="c1"&gt;# only install if package-lock has changed&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm ci --ignore-scripts&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm run lint&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm run compile&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(Check out &lt;a href="https://www.voorhoede.nl/en/blog/super-fast-npm-install-on-github-actions/"&gt;this article&lt;/a&gt; for details on the technique I used to cache the installed &lt;code&gt;node_modules&lt;/code&gt; between executions; the NPM install was the single slowest step in my workflow, often taking 20–30 seconds.)&lt;/p&gt;
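&lt;p&gt;&lt;em&gt;For completeness, the &lt;code&gt;npm run lint&lt;/code&gt;, &lt;code&gt;npm run compile&lt;/code&gt; and &lt;code&gt;npm test&lt;/code&gt; steps in the workflow above assume &lt;code&gt;package.json&lt;/code&gt; scripts along these lines (the exact commands are illustrative and will vary by project):&lt;/em&gt;&lt;/p&gt;

```json
{
  "scripts": {
    "lint": "eslint .",
    "compile": "tsc --noEmit",
    "test": "jest"
  }
}
```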

&lt;h2&gt;Mainline Workflow (CI + CD)&lt;/h2&gt;

&lt;p&gt;The mainline workflow is responsible for getting a developer's change (which has passed automated CI and human code review checks) through further stages and eventually into production.&lt;/p&gt;

&lt;p&gt;To implement this, I use AWS CodePipeline for orchestrating the flow and AWS CodeBuild for running the deployment and test tasks. Since these services are already tightly integrated into the AWS ecosystem, there is less effort and security risk in performing the deployment (e.g. I don't need to give an external service access to my production AWS account).&lt;/p&gt;

&lt;p&gt;Here's what my flow looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Rm3pt7SK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://serverlessfirst.com/img/blog-images/aws-serverless-cicd-mainline-pipeline.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Rm3pt7SK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://serverlessfirst.com/img/blog-images/aws-serverless-cicd-mainline-pipeline.png" alt="Mainline pipeline"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You'll notice that there are three stages: &lt;code&gt;test&lt;/code&gt;, &lt;code&gt;staging&lt;/code&gt; and &lt;code&gt;prod&lt;/code&gt;. I typically deploy each of these stages to its own AWS account. While this isn't a must-have for the pre-production environments, the account boundary provides the best isolation between environments, so you can be more confident that deployments and tests in one environment don't interfere with another. And you should always isolate &lt;code&gt;prod&lt;/code&gt; in its own AWS account.&lt;/p&gt;

&lt;p&gt;There are a few assumptions my pipeline is making here that you should be aware of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The dev team is following a release-from-mainline &lt;a href="https://trunkbaseddevelopment.com"&gt;trunk-based development&lt;/a&gt; Git branching model. If this isn't the case for your team, most of what I'm recommending here could be adapted for a &lt;a href="https://trunkbaseddevelopment.com/branch-for-release/"&gt;branch-for-release&lt;/a&gt; model.&lt;/li&gt;
&lt;li&gt;The app is being deployed monolithically from a monorepo, i.e. all resources within the system are deployed at the same time. I find monolithic deployments are the easiest to manage within the client teams and products I work with. If your system has multiple microservices AND you wish to deploy these independently based on what code changed, then you'll need multiple pipelines, which will be significantly more complex than what I'm proposing here (check out Four Theorem's &lt;a href="https://github.com/fourTheorem/slic-starter#cicd"&gt;SLIC Starter&lt;/a&gt; for a great example of a CI/CD pipeline for independently deployed serverless microservices).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the next sections, I'll walk through each stage in more detail.&lt;/p&gt;

&lt;h3&gt;Build stage&lt;/h3&gt;

&lt;p&gt;The build stage repeats the static analysis and unit test tasks performed in the GitHub Actions Pull Request Workflow. This is to double-check that the merge hasn't introduced any issues.&lt;/p&gt;

&lt;p&gt;You might be surprised to notice that there is no packaging step here. The reason is that, unfortunately, the Serverless Framework (which I use to define and deploy the infrastructure resources) doesn't support building an environment-independent deployment artifact which can then flow through each stage in the pipeline. This means that the "Deploy stacks" task that you see in the later stages creates an environment-specific package which it deploys to the target account. (The general consensus &lt;a href="https://twitter.com/paulswail/status/1374713404921905155"&gt;among experienced serverless devs&lt;/a&gt; is that this limitation isn't anything to worry about.)&lt;/p&gt;

&lt;h3&gt;Test stage&lt;/h3&gt;

&lt;p&gt;The purpose of the &lt;code&gt;test&lt;/code&gt; stage is to provide an isolated environment that only this CI/CD pipeline has access to for automated E2E testing. No human testers or other systems have access to this environment.&lt;/p&gt;

&lt;p&gt;The first step at this stage is a &lt;strong&gt;configuration test&lt;/strong&gt;. While the vast majority of the configuration will be contained in the version-controlled source code (because &lt;a href="https://serverlessfirst.com/iac-linchpin/"&gt;every resource is defined in Infrastructure-as-Code&lt;/a&gt;), the one thing that's not in Git is &lt;strong&gt;secrets&lt;/strong&gt;. Secrets such as passwords and API keys that are required by Lambda functions should be stored in SSM Parameter Store (or Secrets Manager) and fetched at runtime. However, the values for these secrets need to be deployed by a human engineer ahead of time, and this is something they could forget to do! This check uses a &lt;code&gt;secrets.sample.env&lt;/code&gt; file stored in the Git repo and verifies that each key defined in it has an associated parameter set within SSM Parameter Store for the target environment. If this check fails, deployment does not proceed.&lt;/p&gt;
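&lt;p&gt;&lt;em&gt;The comparison step of such a config test can be sketched as a pure function (the names below are assumptions; the actual SSM lookup, e.g. via the AWS SDK's GetParametersByPath call, is omitted):&lt;/em&gt;&lt;/p&gt;

```javascript
// Sketch of the key-comparison step in a config test.
// sampleEnvContents: the text of secrets.sample.env from the repo.
// deployedParamNames: parameter names already fetched from SSM Parameter
// Store for the target environment (the fetching itself is omitted here).
function missingSecrets(sampleEnvContents, deployedParamNames) {
  const requiredKeys = sampleEnvContents
    .split('\n')
    .map((line) => line.trim())
    .filter((line) => line && !line.startsWith('#'))
    .map((line) => line.split('=')[0]);
  const deployed = new Set(deployedParamNames);
  return requiredKeys.filter((key) => !deployed.has(key));
}

// If missingSecrets(...) returns a non-empty array, the pipeline fails
// the stage before any deployment is attempted.
module.exports = { missingSecrets };
```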

&lt;p&gt;After the config check comes the deployment itself, using the &lt;a href="https://www.serverless.com/framework/docs/providers/aws/cli-reference/deploy/"&gt;&lt;code&gt;sls deploy&lt;/code&gt;&lt;/a&gt; command. I usually have two stacks to deploy (an &lt;code&gt;infra&lt;/code&gt; stack containing stateful resources such as DynamoDB tables and S3 buckets, and an &lt;code&gt;api&lt;/code&gt; stack containing APIGW or AppSync endpoints along with Lambda functions), which are deployed in series.&lt;/p&gt;

&lt;p&gt;Once the deployment completes, the &lt;strong&gt;E2E tests&lt;/strong&gt; are run using Jest. Before running these tests, the &lt;a href="https://serverlessfirst.com/cloud-config-local-tests/"&gt;&lt;code&gt;serverless-export-env&lt;/code&gt;&lt;/a&gt; plugin is used to generate a &lt;code&gt;.env&lt;/code&gt; file with all the URLs, etc from the deployment so that the tests know which endpoints to hit.&lt;/p&gt;

&lt;h3&gt;Staging stage&lt;/h3&gt;

&lt;p&gt;The purpose of the &lt;code&gt;staging&lt;/code&gt; stage is to act as an environment where human testers can perform manual testing before releasing changes to production. If you're building an API and you have separate front-end (web or mobile) teams, they can point their pre-production apps at this environment as it will be relatively stable, since code that gets this far has passed full E2E testing in the &lt;code&gt;test&lt;/code&gt; stage.&lt;/p&gt;

&lt;p&gt;We still do need to run a few tests at this stage though. Before deployment, we need to again perform a config check to ensure that secrets have been deployed correctly.&lt;/p&gt;

&lt;p&gt;Post-deployment, we run a set of &lt;strong&gt;smoke tests&lt;/strong&gt;. The main purpose of these smoke tests is to ensure that the system is available and that environment-specific configuration is correct.&lt;/p&gt;

&lt;p&gt;For example, if a Lambda function talks to a third party API, we could write a smoke test that invokes the function and verifies that it communicates correctly with this service. This would verify that the API URL and API keys we're using are correct.&lt;/p&gt;

&lt;p&gt;An important aspect of smoke tests (which differentiates them from E2E tests) is that they should have minimal side effects. Ideally they would be fully read-only, but at the very least they should not introduce any test data into the system which could become visible to human users.&lt;/p&gt;
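&lt;p&gt;&lt;em&gt;A minimal read-only smoke check could look something like this (the &lt;code&gt;/health&lt;/code&gt; endpoint and response shape are assumptions for illustration, not from the original pipeline):&lt;/em&gt;&lt;/p&gt;

```javascript
// Sketch of the assertion step of a read-only smoke test. A real smoke
// suite would issue the GET with fetch/axios against the deployed base URL;
// here only the response-checking logic is shown.
function assertHealthy(statusCode, body) {
  if (statusCode !== 200) {
    throw new Error(`Expected 200 from health check, got ${statusCode}`);
  }
  if (!body || body.status !== 'ok') {
    throw new Error('Health check body did not report status "ok"');
  }
  return true;
}

// In a Jest smoke suite, roughly:
//   const res = await fetch(`${process.env.API_BASE_URL}/health`);
//   assertHealthy(res.status, await res.json());
module.exports = { assertHealthy };
```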

&lt;h3&gt;Manual approval&lt;/h3&gt;

&lt;p&gt;Once the &lt;code&gt;staging&lt;/code&gt; deployment and tests complete, a notification is sent to the team requesting manual approval. This is an opportunity to perform any manual testing before proceeding to production.&lt;/p&gt;

&lt;p&gt;Once satisfied with the system, you can use the CodePipeline console to give approval and the pipeline will move on to the &lt;code&gt;prod&lt;/code&gt; stage.&lt;/p&gt;

&lt;h3&gt;Production stage&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;prod&lt;/code&gt; stage is the live environment that your users will access. The steps at this stage are the same as those in the &lt;code&gt;staging&lt;/code&gt; stage.&lt;/p&gt;

&lt;h2&gt;Summary of test types&lt;/h2&gt;

&lt;p&gt;Now that we've reached the end of the pipeline, let's summarise the different types of tests we've used and where they were employed:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table class="border-collapse font-sans border border-gray-300 text-base"&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th class="border border-gray-300 p-2"&gt;Test&lt;/th&gt;
&lt;th class="border border-gray-300 p-2"&gt;Description&lt;/th&gt;
&lt;th class="border border-gray-300 p-2"&gt;When to run&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td class="border border-gray-300 p-2"&gt;Static analysis&lt;/td&gt;
&lt;td class="border border-gray-300 p-2"&gt;Lint and compile (&lt;code&gt;tsc&lt;/code&gt;)&lt;/td&gt;
&lt;td class="border border-gray-300 p-2"&gt;At start of PR and mainline pipelines&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="border border-gray-300 p-2"&gt;Unit&lt;/td&gt;
&lt;td class="border border-gray-300 p-2"&gt;Verifies complex business logic. Has no out-of-process runtime dependencies&lt;/td&gt;
&lt;td class="border border-gray-300 p-2"&gt;After static analysis at start of PR and mainline pipelines&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="border border-gray-300 p-2"&gt;Config&lt;/td&gt;
&lt;td class="border border-gray-300 p-2"&gt;Verifies presence of non-version controlled configuration (e.g. secrets)&lt;/td&gt;
&lt;td class="border border-gray-300 p-2"&gt;Pre-deployment, in &lt;code&gt;test&lt;/code&gt;, &lt;code&gt;staging&lt;/code&gt; and &lt;code&gt;prod&lt;/code&gt; stages&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="border border-gray-300 p-2"&gt;E2E&lt;/td&gt;
&lt;td class="border border-gray-300 p-2"&gt;Verifies behaviour of deployed system, e.g. by invoking API endpoints, deployed Lambda functions, etc&lt;/td&gt;
&lt;td class="border border-gray-300 p-2"&gt;Post-deployment, in &lt;code&gt;test&lt;/code&gt; stage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class="border border-gray-300 p-2"&gt;Smoke&lt;/td&gt;
&lt;td class="border border-gray-300 p-2"&gt;Verifies system availability and correctness of environment-specific config&lt;/td&gt;
&lt;td class="border border-gray-300 p-2"&gt;Post-deployment, in &lt;code&gt;staging&lt;/code&gt; and &lt;code&gt;prod&lt;/code&gt; stages&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;Building out your own pipeline&lt;/h2&gt;

&lt;p&gt;This may seem like a lot of checks to build into your own pipeline, and maybe you don't have the time to set all this up upfront. But don't let that put you off: having ANY tests running in a CI/CD pipeline is better than none. I've seen many teams that do no CI/CD at all and just deploy directly from developer workstations (and &lt;a href="https://twitter.com/chrismunns/status/1374742998282555394?s=20"&gt;I'm not the only one&lt;/a&gt;!).&lt;/p&gt;

&lt;p&gt;To get you started, the Pull Request Workflow is much easier to configure as it has fewer moving parts. The limitation is that you can only do CI and not CD.&lt;/p&gt;

&lt;p&gt;Once you move on to the mainline CI+CD pipeline, consider adding each type of test incrementally to your pipeline. Here's my recommended order for building out each one:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Static analysis&lt;/strong&gt; — Start with these as they're quick to run, catch a lot of silly mistakes early in the cycle, and are easy to implement as they just have a single config file at the repo-level.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;E2E tests&lt;/strong&gt; — These are the &lt;a href="https://serverlessfirst.com/integration-e-2-e-tests/"&gt;biggest confidence drivers for serverless apps&lt;/a&gt;, so once these run successfully you can have much greater assurance doing Continuous Delivery.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unit tests&lt;/strong&gt; — Unit tests are usually fast to run and a good fit for testing complex business logic inside a Lambda function. I've ranked them behind E2E tests as I generally find my use cases don't require many of them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smoke tests&lt;/strong&gt; — If you're doing manual testing, these are probably lower priority.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Config tests&lt;/strong&gt; — These checks would be lowest priority as your E2E or smoke tests should capture any missing/bad configuration. The major benefit config tests bring is not increased confidence but instead a faster failure since they can be run before the (usually slow) deployment step.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;If you're interested in getting hands-on practice in creating the types of tests discussed in this article, check out my 4-week &lt;a href="https://serverlessfirst.com/workshops/testing/"&gt;Serverless Testing Workshop&lt;/a&gt;. The workshop is a mixture of self-paced video lessons alongside weekly live group sessions where you will join me and other engineers to discuss and work through different testing scenarios.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>testing</category>
      <category>devops</category>
    </item>
    <item>
      <title>Integration and E2E tests are the primary confidence drivers for serverless apps</title>
      <dc:creator>Paul Swail</dc:creator>
      <pubDate>Tue, 27 Oct 2020 08:47:12 +0000</pubDate>
      <link>https://dev.to/paulswail/integration-and-e2e-tests-are-the-primary-confidence-drivers-for-serverless-apps-4loj</link>
      <guid>https://dev.to/paulswail/integration-and-e2e-tests-are-the-primary-confidence-drivers-for-serverless-apps-4loj</guid>
      <description>&lt;p&gt;&lt;em&gt;This article is part of a series on “Testing Serverless Applications” based on lessons I’m teaching in the &lt;a href="https://serverlessfirst.com/workshops/testing/"&gt;Serverless Testing Workshop&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://serverlessfirst.com/testing-tradeoff-triangle/"&gt;last article&lt;/a&gt;, we covered the three core goals driving &lt;em&gt;why&lt;/em&gt; we write automated tests and the trade-offs that we need to make in order to reach satisfactory levels of confidence, maintainability and feedback loop speed.&lt;/p&gt;

&lt;p&gt;You may have been left wondering &lt;em&gt;"how do I get to a sufficient level of confidence?"&lt;/em&gt;. A typical answer to this could be along the lines of: &lt;em&gt;"write a load of unit tests and aim for as close to 100% code coverage as possible"&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;And while unit tests and automated code coverage measurement can be part of building up confidence in your serverless system, in my experience they are a small part.&lt;/p&gt;

&lt;p&gt;In this article, I'll argue that you should favour integration and end-to-end (E2E) tests for maximising the confidence delivered by your automated test suite.&lt;/p&gt;

&lt;h2&gt;What can go wrong?&lt;/h2&gt;

&lt;p&gt;Before deciding what tests to write for a given use case, &lt;strong&gt;we need to first understand our failure modes&lt;/strong&gt;. A failure mode is a specific way in which a system-under-test can fail, and it can have any number of root causes. If we know what these modes are, then we can write tests to "cover" as many of the failure modes as is feasible.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Aside: There are many failure modes for which we shouldn't write automated tests that run in pre-production environments, as they're too costly/difficult to simulate in a test. For these we can instead employ &lt;a href="https://copyconstruct.medium.com/testing-in-production-the-safe-way-18ca102d0ef1"&gt;testing in production techniques&lt;/a&gt; to detect them.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let's illustrate this with an example...&lt;/p&gt;

&lt;p&gt;Consider the &lt;a href="https://www.jeremydaly.com/serverless-microservice-patterns-for-aws/#simplewebservice"&gt;Simple Web Service pattern&lt;/a&gt; from Jeremy Daly's excellent collection of Serverless microservice patterns (that I must've now referenced about 5,932 times in previous articles):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4tM-xYeg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://serverlessfirst.com/img/blog-images/simple-web-service.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4tM-xYeg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://serverlessfirst.com/img/blog-images/simple-web-service.png" alt="Simple serverless web service pattern"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This pattern is by far the most common in serverless applications that I've built and involves a synchronous request-response cycle from API Gateway through a single-purpose Lambda function to DynamoDB and back again.&lt;/p&gt;

&lt;p&gt;Let's make our example more concrete and say that we're adding a new REST API endpoint for a sports club management app that allows managers to create a new club: &lt;code&gt;POST /clubs&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So what could go wrong here? What tests should we write? Let's look at the deployment landscape for this use case to help answer these:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hEXZ8YsA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://serverlessfirst.com/img/blog-images/simple-webservice-pattern-source-areas.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hEXZ8YsA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://serverlessfirst.com/img/blog-images/simple-webservice-pattern-source-areas.png" alt="Source areas of failure modes for the simple serverless web service pattern"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The light red boxes depict source areas made up of code or configuration that we (as the developer) write and which are then deployed to a target environment.&lt;/p&gt;

&lt;p&gt;Each source area can be a cause of several failure modes. Let's look through each in turn:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;API GW Method config&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Path or method configured incorrectly (e.g. wrong case for path parameter)&lt;/li&gt;
&lt;li&gt;Missing or incorrect CORS&lt;/li&gt;
&lt;li&gt;Missing or incorrect Authorizer&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API GW to Lambda integration config&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;IAM permission for APIGW to call Lambda function&lt;/li&gt;
&lt;li&gt;Misconfigured reference to Lambda function&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lambda execution config&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Insufficient memory allocated&lt;/li&gt;
&lt;li&gt;Timeout too low&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Handler function code&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Business logic bug, e.g. client request validated incorrectly or user incorrectly authorized&lt;/li&gt;
&lt;li&gt;Incorrect mapping of in-memory object fields to DynamoDB item attributes (e.g. compound fields used for GSI index fields in &lt;a href="https://serverlessfirst.com/dynamodb-modelling-single-vs-multi-table/"&gt;single table designs&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lambda to DDB integration config&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Incorrect IAM permissions for Lambda function role to make request to DynamoDB&lt;/li&gt;
&lt;li&gt;Incorrect name/ARN of DynamoDB table&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DynamoDB table config&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Incorrect field name or type given for hash or sort keys of a table or GSI when table was provisioned&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
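&lt;p&gt;&lt;em&gt;To make the "handler function code" failure mode concrete, here is an illustrative sketch of the kind of object-to-item mapping where such bugs hide (the attribute names and key scheme are assumed single-table conventions, not taken from the app above):&lt;/em&gt;&lt;/p&gt;

```javascript
// Sketch of an object-to-DynamoDB-item mapping for the POST /clubs use case.
// Attribute names and key scheme are illustrative single-table conventions.
function clubToDynamoItem(club) {
  return {
    pk: `CLUB#${club.id}`,
    sk: `CLUB#${club.id}`,
    // Compound GSI key: a typo or wrong field order here deploys fine and
    // only surfaces when a query against the GSI silently returns nothing.
    gsi1pk: `MANAGER#${club.managerId}`,
    gsi1sk: `CLUB#${club.name}`,
    name: club.name,
    managerId: club.managerId,
  };
}

module.exports = { clubToDynamoItem };
```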

&lt;h2&gt;Covering these failure modes with tests&lt;/h2&gt;

&lt;p&gt;If you look through each failure mode listed above, very few of them can be covered by standard in-memory unit tests. They're predominantly integration-configuration concerns.&lt;/p&gt;

&lt;p&gt;The diagram below shows the coverage scope and System-under-test (SUT) entrypoint for each of the three test granularities (unit, integration and E2E):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A4z3LRu2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://serverlessfirst.com/img/blog-images/simple-webservice-pattern-test-scopes.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A4z3LRu2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://serverlessfirst.com/img/blog-images/simple-webservice-pattern-test-scopes.png" alt="Scopes of different test levels for the simple serverless web service pattern"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An important point for this particular use case is that each test level covers everything that the level below covers, i.e. integration tests will cover everything that unit tests do and E2E tests will cover everything that integration tests do.&lt;/p&gt;

&lt;p&gt;So then why not &lt;em&gt;only&lt;/em&gt; write E2E tests? Let's look at the key properties of each type of test for our example use case to see what they have to offer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unit tests:

&lt;ul&gt;
&lt;li&gt;Test and Lambda handler code execute on local machine&lt;/li&gt;
&lt;li&gt;SUT entrypoint could either be the Lambda handler function or modules used within the handler&lt;/li&gt;
&lt;li&gt;No deployment required — uses &lt;a href="https://www.martinfowler.com/bliki/TestDouble.html"&gt;test doubles&lt;/a&gt; to stub out calls to DynamoDB&lt;/li&gt;
&lt;li&gt;Can use automated code coverage tools&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Integration tests:

&lt;ul&gt;
&lt;li&gt;Test and Lambda handler code execute on local machine&lt;/li&gt;
&lt;li&gt;SUT entrypoint is typically the Lambda handler function&lt;/li&gt;
&lt;li&gt;Requires partially deployed environment (DynamoDB table)&lt;/li&gt;
&lt;li&gt;Can use automated code coverage tools&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;E2E tests:

&lt;ul&gt;
&lt;li&gt;Only test code runs on local machine&lt;/li&gt;
&lt;li&gt;SUT entrypoint is the APIGW endpoint in the cloud, which receives an HTTP request from our test&lt;/li&gt;
&lt;li&gt;Requires fully deployed environment (APIGW resources, Lambda function, IAM roles and DynamoDB table)&lt;/li&gt;
&lt;li&gt;Cannot use automated code coverage tools&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
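&lt;p&gt;&lt;em&gt;A sketch of the integration-test entrypoint described above (hand-rolled for illustration, not code from a real project): the handler is a stand-in, and the DynamoDB write is injected as a &lt;code&gt;saveItem&lt;/code&gt; function so the sketch runs locally. In a real integration test the handler would write to the deployed table instead.&lt;/em&gt;&lt;/p&gt;

```typescript
// Integration-style test whose SUT entrypoint is the Lambda handler
// function itself; the test crafts the proxy event directly, so no API
// Gateway deployment is needed. All names here are illustrative.
type SaveItem = (item: Record<string, unknown>) => Promise<void>;

const makeHandler = (saveItem: SaveItem) =>
  async (event: { body: string | null }) => {
    if (!event.body) {
      return { statusCode: 400, body: JSON.stringify({ error: 'Missing body' }) };
    }
    await saveItem(JSON.parse(event.body));
    return { statusCode: 201, body: JSON.stringify({ message: 'Created' }) };
  };

async function runTest() {
  const saved: Record<string, unknown>[] = [];
  const handler = makeHandler(async (item) => { saved.push(item); });

  // Happy path: valid body is persisted.
  const ok = await handler({ body: JSON.stringify({ orderId: 'o1' }) });
  if (ok.statusCode !== 201 || saved.length !== 1) throw new Error('create case failed');

  // Business-logic failure mode: missing body is rejected.
  const bad = await handler({ body: null });
  if (bad.statusCode !== 400) throw new Error('validation case failed');
}
runTest();
```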

&lt;p&gt;The key benefit that integration tests give over E2E tests is that they allow for much faster iterative development. Their deployment overhead is low (one-off provisioning of a DynamoDB table) whereas E2E tests require deploying the most frequently changing areas (the handler code) every time.&lt;/p&gt;

&lt;p&gt;And it's also possible to craft different event inputs for an integration test in order to test the business logic failure modes that you might otherwise consider writing a unit test for. For single-purpose Lambda functions with minimal business logic such as this, I often don't bother writing any unit tests and just cover off the business logic inside my integration tests.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Aside: Given the similarity between the entrypoint input events to the E2E and integration tests (an HTTP request and an APIGatewayProxyEvent), it's possible to get 2-for-1 when writing your test cases whereby a single test case can be run in one of two modes by switching an environment variable. I cover this technique in my &lt;a href="https://serverlessfirst.com/workshops/testing/"&gt;workshop&lt;/a&gt; and I'll try to write more on it soon.&lt;/em&gt;&lt;/p&gt;
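&lt;p&gt;&lt;em&gt;The mode-switching technique might look something like the sketch below. This is my own illustration of the idea: &lt;code&gt;TEST_MODE&lt;/code&gt;, &lt;code&gt;API_BASE_URL&lt;/code&gt; and the stub handler are all placeholder names.&lt;/em&gt;&lt;/p&gt;

```typescript
// One test helper that exercises the same test case in either mode,
// selected by an environment variable (defaults to integration mode).
const env = ((globalThis as any).process?.env ?? {}) as Record<string, string | undefined>;

// Stub standing in for the real Lambda handler under test.
const handler = async (_event: { body: string }) => ({ statusCode: 201 });

async function invokeCreateOrder(payload: object): Promise<{ statusCode: number }> {
  if (env.TEST_MODE === 'e2e') {
    // E2E mode: the SUT entrypoint is the deployed API Gateway endpoint.
    const res = await (globalThis as any).fetch(`${env.API_BASE_URL}/orders`, {
      method: 'POST',
      body: JSON.stringify(payload),
    });
    return { statusCode: res.status };
  }
  // Integration mode: the SUT entrypoint is the handler function itself,
  // invoked in-process with a hand-crafted proxy event.
  return handler({ body: JSON.stringify(payload) });
}

// A single assertion then covers both modes:
invokeCreateOrder({ orderId: 'o1' }).then(({ statusCode }) => {
  if (statusCode !== 201) throw new Error(`expected 201, got ${statusCode}`);
});
```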

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We've looked at the Simple Web Service pattern here, but I've found value in leading with integration and E2E tests for almost any serverless use case that involves the triggering of a single-purpose Lambda function that calls out to a single downstream AWS service, e.g. S3 -&amp;gt; Lambda -&amp;gt; DynamoDB, or SNS -&amp;gt; Lambda -&amp;gt; SES.&lt;/p&gt;

&lt;p&gt;You can definitely still write unit tests &lt;em&gt;in addition to&lt;/em&gt; integration and E2E tests (and there are many valid reasons why you might still want to do so). But my point here is that you &lt;em&gt;may not have to&lt;/em&gt; for many serverless use cases where business logic is often simple or non-existent.&lt;/p&gt;

&lt;p&gt;Unit tests are no longer the significant drivers of confidence from your test suite that they once were in server-based monolithic apps. &lt;strong&gt;In serverless apps, integration and E2E tests are king.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you're interested in learning more about testing serverless apps, check out my 4-week &lt;a href="https://serverlessfirst.com/workshops/testing/"&gt;Serverless Testing Workshop&lt;/a&gt;, starting on November 2nd, 2020. The workshop will be a mixture of self-paced video lessons alongside weekly live group sessions where you will join me and other engineers to discuss and work through different testing scenarios. If you &lt;a href="https://serverlessfirst.com/workshops/testing/"&gt;sign up&lt;/a&gt; by October 28th you'll get a 25% earlybird discount.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>testing</category>
    </item>
    <item>
      <title>The testing trade-off triangle</title>
      <dc:creator>Paul Swail</dc:creator>
      <pubDate>Wed, 14 Oct 2020 19:39:35 +0000</pubDate>
      <link>https://dev.to/paulswail/the-testing-trade-off-triangle-4g83</link>
      <guid>https://dev.to/paulswail/the-testing-trade-off-triangle-4g83</guid>
      <description>&lt;p&gt;Writing automated tests for serverless applications, indeed for any type of software application, can be somewhat of an art. Answers to the following commonly asked questions will be highly contextual and unique to an individual project or even a component/use case within a project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How many tests should I write?&lt;/li&gt;
&lt;li&gt;What type of tests (unit, integration, E2E) should I write?&lt;/li&gt;
&lt;li&gt;How can I make these tests run faster?&lt;/li&gt;
&lt;li&gt;Can/should I run these tests on my local machine?&lt;/li&gt;
&lt;li&gt;What test coverage do we need? And how do I measure it?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I have recently observed a few different schools of thought amongst respected practitioners on the best way to approach testing serverless applications. These schools of thought are roughly split across two dimensions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Local vs cloud-based testing&lt;/li&gt;
&lt;li&gt;The weights given to each test granularity (unit, integration and end-to-end) within your test portfolio (e.g. the &lt;a href="https://martinfowler.com/bliki/TestPyramid.html" rel="noopener noreferrer"&gt;traditional test pyramid&lt;/a&gt; vs the &lt;a href="https://engineering.atspotify.com/2018/01/11/testing-of-microservices/" rel="noopener noreferrer"&gt;microservices test honeycomb&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I have some thoughts on these based on my own experiences but I'll save those for a future article.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get back to the basics of testing
&lt;/h2&gt;

&lt;p&gt;What I do find useful to help answer the above questions for a specific context is to go up a level and first remind myself of &lt;em&gt;why&lt;/em&gt; we write automated tests in the first place.&lt;/p&gt;

&lt;p&gt;The first and foremost objective is &lt;strong&gt;confidence&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The tests should give sufficient confidence to the relevant stakeholders (engineers, product managers, business folks, end users) that the system is behaving as expected. This allows for decisions (automated or otherwise) to be made about the releasability of the system (or subcomponents of it). To do this well, the engineer authoring the tests needs a good knowledge of both the functional requirements and the deployment landscape (e.g. the AWS cloud services) and its failure modes.&lt;/p&gt;

&lt;p&gt;Following on from this core objective of confidence, I see two further objectives for an automated test suite:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Feedback loop&lt;/strong&gt;: the faster we learn about a failure and are able to identify its root cause, the faster a fix can be put in place. This feedback loop is relevant both in the context of a developer's iterative feature dev process inside their own individual environment and also within shared environments (test, staging, prod) as part of a CI/CD pipeline. Both the speed and quality of the feedback related to a test failure are important here.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintainability&lt;/strong&gt;: How long does it take to write the tests in the first place? Are our tests making the system easier or more difficult to maintain in the long term? Do they require a lot of hand-holding/patching due to transient issues? Do the tests help new developers understand the system better or do they just add to their overwhelm?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The tension between each objective
&lt;/h2&gt;

&lt;p&gt;So we've reflected on the three high-level objectives for why we write tests in the first place. These objectives aren't boolean but sit on a sliding scale: a test needs to score "enough" against each objective to be of net positive value and therefore be worth keeping (or writing in the first place).&lt;/p&gt;

&lt;p&gt;Secondly, and crucially, each objective is often in conflict with at least one of the other two.&lt;/p&gt;

&lt;p&gt;This is what I call the &lt;strong&gt;testing trade-off triangle&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fserverlessfirst.com%2Fimg%2Fblog-images%2Ftesting-tradeoff-triangle.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fserverlessfirst.com%2Fimg%2Fblog-images%2Ftesting-tradeoff-triangle.png" alt="Testing trade-off triangle"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The general principle is that when deciding upon your testing approach for a new system (or a new feature/change to an existing system), &lt;strong&gt;you need to make a compromise between confidence level, feedback loop and maintainability&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I believe this trade-off holds true for all categories of software development projects, not just serverless systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example 1: Optimise for maximum confidence
&lt;/h2&gt;

&lt;p&gt;Say that for every system use case, you write an exhaustive suite of tests covering every eventuality, both functional and environmental.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fserverlessfirst.com%2Fimg%2Fblog-images%2Ftesting-tradeoff-example-confidence.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fserverlessfirst.com%2Fimg%2Fblog-images%2Ftesting-tradeoff-example-confidence.png" alt="Testing trade-offs: optimising for confidence"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By doing this you're stealing from the maintainability pot, as the time taken to write all these tests means you won't meet the deadlines that most businesses inevitably have.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example 2: Optimise solely for fast feedback
&lt;/h2&gt;

&lt;p&gt;Take another example where you optimise for the feedback loop objective by writing all your tests as lightning-fast, in-memory unit tests, swapping out integrations with cloud services for a form of &lt;a href="https://www.martinfowler.com/bliki/TestDouble.html" rel="noopener noreferrer"&gt;test double&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fserverlessfirst.com%2Fimg%2Fblog-images%2Ftesting-tradeoff-example-feedbackloop.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fserverlessfirst.com%2Fimg%2Fblog-images%2Ftesting-tradeoff-example-feedbackloop.png" alt="Testing trade-offs: optimising for feedback loop"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The loser here is the confidence objective. While we gain confidence that each individual component works as intended (and can iterate really quickly locally without making cloud round-trips), we don't get any confidence that the components will work together when deployed to a full environment (&lt;a href="https://duckduckgo.com/?q=integration+test+unit+test+meme&amp;amp;t=osx&amp;amp;ia=images&amp;amp;atb=v179-1&amp;amp;iax=images" rel="noopener noreferrer"&gt;you've seen the memes!&lt;/a&gt;). And in serverless applications—where these components and their associated integration points are more numerous than ever—this is even more problematic.&lt;/p&gt;

&lt;p&gt;(You could also make a case that the maintainability aspect would suffer in this example, since identifying, injecting and maintaining a suitable test double takes ongoing effort to ensure it always matches the behaviour of the real system.)&lt;/p&gt;
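&lt;p&gt;&lt;em&gt;To make that maintainability cost concrete, here's a sketch of the kind of in-memory test double you might write in place of DynamoDB. The interface and names are invented for illustration, not the real AWS SDK:&lt;/em&gt;&lt;/p&gt;

```typescript
// Minimal in-memory test double standing in for a DynamoDB table.
// The maintenance cost: every real-service behaviour the code under test
// relies on (e.g. conditional-write failures) must be re-implemented here
// and kept in sync with the actual service.
interface KeyValueStore {
  put(key: string, value: unknown, options?: { overwrite: boolean }): void;
  get(key: string): unknown;
}

class InMemoryStore implements KeyValueStore {
  private items = new Map<string, unknown>();

  put(key: string, value: unknown, options = { overwrite: true }): void {
    // Mimics DynamoDB's ConditionalCheckFailedException for an
    // attribute_not_exists(...) condition expression.
    if (!options.overwrite && this.items.has(key)) {
      throw new Error('ConditionalCheckFailed');
    }
    this.items.set(key, value);
  }

  get(key: string): unknown {
    return this.items.get(key);
  }
}
```

&lt;p&gt;&lt;em&gt;The double is only trustworthy while its simulated behaviours continue to match those of the real table.&lt;/em&gt;&lt;/p&gt;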

&lt;h2&gt;
  
  
  Work out how to define "just enough" for your context
&lt;/h2&gt;

&lt;p&gt;The next step in deciding how to approach testing for your specific context is to go through your use cases and for each one attempt to define what constitutes "just enough" for the three high-level testing objectives. This means asking yourself questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What behaviours or integration points do we need to test?&lt;/li&gt;
&lt;li&gt;What behaviours or integration points can we get away without testing?&lt;/li&gt;
&lt;li&gt;Where is a fast feedback loop critical? And exactly how fast does this need to be?&lt;/li&gt;
&lt;li&gt;Where can we tolerate a slower feedback loop?&lt;/li&gt;
&lt;li&gt;Can we cover this test case with an integration (or even E2E) test or should we also spend time writing unit tests for it?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Many of these answers will be specific to your organisational and functional requirements. But the good news with serverless applications on AWS is that there are general patterns for testing common use cases, based on the AWS services involved, that can inform your decisions.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you're interested in learning more about these patterns, I will be covering several of them in the 4-week &lt;a href="https://serverlessfirst.com/workshops/testing/" rel="noopener noreferrer"&gt;Serverless Testing Workshop&lt;/a&gt;, starting on November 2nd, 2020. The workshop will be a mixture of self-paced video lessons alongside weekly live group sessions where you will join me and other engineers to discuss and work through different testing scenarios. If you &lt;a href="https://serverlessfirst.com/workshops/testing/" rel="noopener noreferrer"&gt;sign up&lt;/a&gt; before October 26th you'll get a 25% earlybird discount.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Further reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://serverlessfirst.com/pains-testing-serverless/" rel="noopener noreferrer"&gt;The Pains of Testing Serverless Applications&lt;/a&gt; by Paul Swail&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://blog.symphonia.io/posts/2020-08-19_serverless_testing" rel="noopener noreferrer"&gt;Serverless, Testing and two Thinking Hats&lt;/a&gt; by Mike Roberts&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://martinfowler.com/bliki/TestPyramid.html" rel="noopener noreferrer"&gt;Test pyramid&lt;/a&gt; by Martin Fowler&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://engineering.atspotify.com/2018/01/11/testing-of-microservices/" rel="noopener noreferrer"&gt;Test honeycomb for testing microservices&lt;/a&gt; by Andre Schaffer &amp;amp; Rickard Dybeck (Spotify Engineering)&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>testing</category>
    </item>
    <item>
      <title>Add type definitions to your Lambda functions</title>
      <dc:creator>Paul Swail</dc:creator>
      <pubDate>Fri, 20 Mar 2020 08:36:48 +0000</pubDate>
      <link>https://dev.to/paulswail/add-type-definitions-to-your-lambda-functions-7ca</link>
      <guid>https://dev.to/paulswail/add-type-definitions-to-your-lambda-functions-7ca</guid>
      <description>&lt;p&gt;&lt;em&gt;Every Friday, I will share a small tip with you on something Lambda/FaaS-related. Because Fridays are fun, and so are functions.&lt;/em&gt; 🥳&lt;/p&gt;

&lt;p&gt;Early last year, I tried out TypeScript in place of JavaScript after several years of holding off. Now I couldn’t go back to plain JS! I’ll save going into all the reasons why I think you should strongly consider TypeScript for Lambda-based apps for a future article, but for today’s tip I want to show you the &lt;a href="https://www.npmjs.com/package/@types/aws-lambda"&gt;&lt;code&gt;@types/aws-lambda&lt;/code&gt;&lt;/a&gt; type definitions library.&lt;/p&gt;

&lt;h2&gt;
  
  
  What problem does this solve?
&lt;/h2&gt;

&lt;p&gt;Lambda functions can have loads of different trigger sources. Each source has its own unique event parameter and response payload schema. Having to look up the docs for each one can be a PITA.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jdH5qBhP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://winterwindsoftware.com/img/blog-images/aws-console-lambda-event-sources.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jdH5qBhP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://winterwindsoftware.com/img/blog-images/aws-console-lambda-event-sources.gif" alt="Long list of Lambda event triggers from AWS Console"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;@types/aws-lambda&lt;/code&gt; library gives you handler, event, context and response definitions for most of the major services that can trigger a Lambda function invocation.&lt;/p&gt;

&lt;p&gt;By using type definitions, you get autocomplete and type checking built into your IDE. As well as making your initial authoring faster, this also helps you uncover stupid mistakes as you type them instead of having to wait until your code is run.&lt;/p&gt;

&lt;h2&gt;
  
  
  Some examples
&lt;/h2&gt;

&lt;p&gt;Let’s look at a few different event trigger handlers to see how we can add type definitions to them.&lt;/p&gt;

&lt;p&gt;Before we do, install the NPM package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; @types/aws-lambda &lt;span class="nt"&gt;--save-dev&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s how you add type defs to your API Gateway proxy handler function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;APIGatewayProxyHandler&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;aws-lambda&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;APIGatewayProxyHandler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Received event&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Success&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No longer will you forget to &lt;code&gt;JSON.stringify&lt;/code&gt; the &lt;code&gt;body&lt;/code&gt; field in your API Gateway proxy response as you’ll now get a compiler error if you don’t assign a string to it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qnB3SZ6u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://winterwindsoftware.com/img/blog-images/apigw-handler-type-error.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qnB3SZ6u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://winterwindsoftware.com/img/blog-images/apigw-handler-type-error.gif" alt="API Gateway handler type error"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wiring up type definitions to an SNS handler is similar:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;SNSHandler&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;aws-lambda&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;SNSHandler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Records&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;Sns&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;received message&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No longer will you have to remember or google for the deep selector path to get at the body of your SNS or SQS message as you can easily navigate the &lt;code&gt;event&lt;/code&gt; object inside your IDE:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YSATpdwr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://winterwindsoftware.com/img/blog-images/sns-handler-type-autocomplete.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YSATpdwr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://winterwindsoftware.com/img/blog-images/sns-handler-type-autocomplete.gif" alt="SNS handler type autocomplete"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s important to note that this is a purely compile-time library that emits no run-time JavaScript after transpilation. For example, unlike in statically typed languages such as C# and Java, you won’t get a casting error if you happen to specify the wrong type on your event parameter.&lt;/p&gt;
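&lt;p&gt;&lt;em&gt;A small self-contained example of this compile-time-only behaviour (using a local stand-in type rather than the real definitions from &lt;code&gt;@types/aws-lambda&lt;/code&gt;):&lt;/em&gt;&lt;/p&gt;

```typescript
// Local stand-in for an S3 event type; the real one lives in @types/aws-lambda.
type S3Event = { Records: Array<{ s3: { object: { key: string } } }> };

// The annotation below is erased during transpilation, so invoking the
// function with an event of the "wrong" shape produces no runtime type error.
const handler = async (event: S3Event) => event.Records[0]?.s3?.object?.key;

// An SNS-shaped event just quietly yields undefined at runtime:
handler({ Records: [{ Sns: { Message: 'hi' } }] } as any).then((key) => {
  console.log(key); // undefined
});
```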

&lt;p&gt;💌 &lt;em&gt;If you enjoyed this article, you can &lt;a href="https://winterwindsoftware.com/newsletter/"&gt;sign up to my newsletter&lt;/a&gt;. I send emails every weekday where I share my guides and deep dives on building serverless solutions on AWS with hundreds of developers and architects.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;strong&gt;&lt;a href="https://winterwindsoftware.com/fff-aws-lambda-type-definitions/"&gt;winterwindsoftware.com&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>lambda</category>
      <category>typescript</category>
    </item>
    <item>
      <title>Optimise your Lambda functions using Webpack</title>
      <dc:creator>Paul Swail</dc:creator>
      <pubDate>Fri, 13 Mar 2020 13:54:51 +0000</pubDate>
      <link>https://dev.to/paulswail/optimise-your-lambda-functions-using-webpack-2min</link>
      <guid>https://dev.to/paulswail/optimise-your-lambda-functions-using-webpack-2min</guid>
      <description>&lt;p&gt;&lt;em&gt;Every Friday, I will share a small tip with you on something Lambda/FaaS-related. Because Fridays are fun, and so are functions.&lt;/em&gt; 🥳&lt;/p&gt;

&lt;p&gt;Today we'll cover why and how to package your Node.js Lambda functions for deployment using Webpack and the Serverless Framework. This is an approach I take for all my Lambda function development.&lt;/p&gt;

&lt;h2&gt;
  
  
  What problem does this solve?
&lt;/h2&gt;

&lt;p&gt;The primary goal of using Webpack is to reduce the amount of code contained in the zip artifact that is uploaded when your Lambda function is being deployed. This has the benefit of reducing cold start times whenever your function code is loaded into memory.&lt;/p&gt;

&lt;p&gt;It also has some secondary benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lower security risk as only the required parts of third-party modules are deployed rather than the entire contents of the &lt;code&gt;node_modules&lt;/code&gt; folder.&lt;/li&gt;
&lt;li&gt;Transpilers such as TypeScript and Babel can be easily hooked into the build process via Webpack loaders.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;If you've used Webpack in the past for front-end development, you might already know that (amongst other things) it's used to bundle multiple client-side JavaScript modules into a single file. You may not know that it can be used to do the same thing with server-side Node.js modules (it's just JavaScript after all).&lt;/p&gt;

&lt;p&gt;It works by first configuring an entrypoint, which in our case will be the Lambda handler function. Starting from this function, Webpack performs a static analysis, checking for &lt;code&gt;require&lt;/code&gt; and &lt;code&gt;import&lt;/code&gt; statements and following each path to other files as needed. Each discovered file is wrapped as a module within a single output file. Webpack uses a technique called &lt;a href="https://bitsofco.de/what-is-tree-shaking/"&gt;tree shaking&lt;/a&gt; to eliminate dead code and include only the specific functions from a module that are referenced by your application code.&lt;/p&gt;

&lt;p&gt;You might also know that Webpack can be pretty complex to configure! Don't worry though, our configuration will be simple and we'll be using the &lt;a href="https://github.com/serverless-heaven/serverless-webpack"&gt;&lt;code&gt;serverless-webpack&lt;/code&gt; plugin&lt;/a&gt; to help us.&lt;/p&gt;

&lt;p&gt;This plugin allows us to create optimised individual bundles for each Lambda function in our service.&lt;/p&gt;
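&lt;p&gt;&lt;em&gt;To see what that static analysis enables, consider this sketch (the file names and functions are invented). If the handler imports only &lt;code&gt;formatDate&lt;/code&gt;, tree shaking can drop &lt;code&gt;bigUnusedHelper&lt;/code&gt; from the bundle entirely, assuming ES modules and no side effects:&lt;/em&gt;&lt;/p&gt;

```typescript
// --- utils.ts ---
// Only formatDate is imported by the handler below, so Webpack's static
// analysis can exclude bigUnusedHelper from the deployed bundle.
export const formatDate = (d: Date): string => d.toISOString().slice(0, 10);
export const bigUnusedHelper = (): string => 'never referenced, never bundled';

// --- handler.ts ---
// import { formatDate } from './utils';
export const handler = async () => ({
  statusCode: 200,
  body: JSON.stringify({ today: formatDate(new Date('2020-03-13T00:00:00Z')) }),
});
```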

&lt;h2&gt;
  
  
  Setting it up
&lt;/h2&gt;

&lt;p&gt;You can follow the detailed instructions on the &lt;a href="https://github.com/serverless-heaven/serverless-webpack/"&gt;&lt;code&gt;serverless-webpack&lt;/code&gt; plugin README&lt;/a&gt;, but here's a quick run-through of my standard setup. I'm assuming you already have the &lt;a href="https://serverless.com/framework"&gt;Serverless Framework&lt;/a&gt; installed and an existing &lt;code&gt;serverless.yml&lt;/code&gt; file in place.&lt;/p&gt;

&lt;p&gt;Install the plugin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;serverless-webpack &lt;span class="nt"&gt;--save-dev&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Add the following sections to your &lt;code&gt;serverless.yml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# serverless.yml&lt;/span&gt;

&lt;span class="na"&gt;custom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;webpack&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;includeModules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;

&lt;span class="na"&gt;package&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;individually&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;

&lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;serverless-webpack&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This specifies that separate zip files for each individual function should be created rather than one for the whole service. It also tells the plugin not to package the &lt;code&gt;node_modules&lt;/code&gt; folder in the zip but instead to trust Webpack to discover all the required modules itself and bundle them into a single &lt;code&gt;.js&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;Now create a file called &lt;code&gt;webpack.config.js&lt;/code&gt; in the same folder as your &lt;code&gt;serverless.yml&lt;/code&gt; file. Here's what mine typically looks like for plain JavaScript projects (Typescript requires a bit more config):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// webpack.config.js&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;path&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;slsw&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;serverless-webpack&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;slsw&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;entries&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;node&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;slsw&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;webpack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isLocal&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;development&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;production&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;minimal&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;devtool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;nosources-source-map&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;performance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;hints&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;extensions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;.js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;.jsx&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;.json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;output&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;libraryTarget&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;commonjs2&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;__dirname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;.webpack&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="na"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[name].js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;sourceMapFilename&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[file].map&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That's all the config done.&lt;/p&gt;

&lt;p&gt;To view the results, you can package your service without deploying by running &lt;code&gt;serverless package&lt;/code&gt; in your terminal. Then open the &lt;code&gt;./.serverless&lt;/code&gt; folder and look at the zip files that have been created.&lt;/p&gt;

&lt;h2&gt;
  
  
  Handling edge cases
&lt;/h2&gt;

&lt;p&gt;You may need to stray from the above configuration if your Lambda function references a module that is required at runtime but Webpack cannot discover it during its analysis. The most common cause of this is when the module contains dynamic requires, whereby the path string passed into the &lt;code&gt;require&lt;/code&gt; statement is composed at runtime. If this is the case, you can configure &lt;code&gt;serverless-webpack&lt;/code&gt; to use &lt;a href="https://github.com/serverless-heaven/serverless-webpack#forced-inclusion"&gt;forced inclusion&lt;/a&gt; to always include specific modules in its bundle.&lt;/p&gt;
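&lt;p&gt;As a sketch, the forced-inclusion config lives under the &lt;code&gt;custom.webpack&lt;/code&gt; section of your &lt;code&gt;serverless.yml&lt;/code&gt; (the module name below is purely illustrative):&lt;/p&gt;

```yaml
# serverless.yml (fragment)
# Forces modules that Webpack can't statically discover into the
# package. The module name here is an example only.
custom:
  webpack:
    includeModules:
      forceInclude:
        - some-dynamically-required-module
```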

&lt;p&gt;💌 &lt;em&gt;If you enjoyed this article, you can &lt;a href="https://winterwindsoftware.com/newsletter/"&gt;sign up to my newsletter&lt;/a&gt;. I send emails every weekday where I share my guides and deep dives on building serverless solutions on AWS with hundreds of developers and architects.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;strong&gt;&lt;a href="https://winterwindsoftware.com/fff-webpacking-lambdas/"&gt;winterwindsoftware.com&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>lambda</category>
      <category>node</category>
    </item>
    <item>
      <title>Async Initialisation of a Lambda Handler</title>
      <dc:creator>Paul Swail</dc:creator>
      <pubDate>Fri, 06 Mar 2020 20:32:58 +0000</pubDate>
      <link>https://dev.to/paulswail/async-initialisation-of-a-lambda-handler-2bc</link>
      <guid>https://dev.to/paulswail/async-initialisation-of-a-lambda-handler-2bc</guid>
      <description>&lt;p&gt;&lt;em&gt;Each Friday, I will share a small tip with you on something Lambda/FaaS-related. Because Fridays are fun, and so are functions.&lt;/em&gt; 🥳&lt;/p&gt;

&lt;p&gt;Today we'll cover how to perform some asynchronous initialisation outside of your Lambda handler in Node.js.&lt;/p&gt;

&lt;p&gt;For example, you may need to fetch configuration data from SSM Parameter Store or S3 that the main body of your function depends upon.&lt;/p&gt;

&lt;p&gt;There are a few points to consider here before we start coding:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Our initialisation code should only be executed once — on the first "cold start" execution.&lt;/li&gt;
&lt;li&gt;Our initialisation data may not be loaded by the time the execution of the handler function body starts.&lt;/li&gt;
&lt;li&gt;JavaScript does not allow &lt;code&gt;await&lt;/code&gt; calls to be defined at the root level of a module. They must happen inside a function marked as &lt;code&gt;async&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;If our Lambda function has Provisioned Concurrency enabled, we want this initialisation to be performed during the background warming phase and not when the function is serving an actual request.&lt;/li&gt;
&lt;li&gt;If our initialisation code fails, it should be re-attempted on a subsequent invocation, as the first failure could be due to a transient issue.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's jump to the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;init&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Perform any async calls here to fetch config data.&lt;/span&gt;
  &lt;span class="c1"&gt;// We'll just dummy up a fake promise as a simulation.&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;reject&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fetching config data...&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;myVar1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;abc&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;myVar2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;xyz&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;initPromise&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;init&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="nx"&gt;exports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Ensure init has completed before proceeding&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;functionConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;initPromise&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="c1"&gt;// Start your main handler logic...&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;functionConfig is set:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;functionConfig&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;init&lt;/code&gt; function is responsible for asynchronously fetching an object containing all the configuration data the function requires. Note that it is triggered as soon as the module is loaded, not inside the &lt;code&gt;handler&lt;/code&gt; function. This ensures that the config is fetched as early as possible. It should also ensure that this initialisation happens during the warming phase of a function with Provisioned Concurrency enabled.&lt;/p&gt;

&lt;p&gt;The second key point is that the promise returned by the &lt;code&gt;init&lt;/code&gt; function is stored at module scope and then &lt;code&gt;await&lt;/code&gt;ed inside the &lt;code&gt;handler&lt;/code&gt;. This guarantees the main handler logic doesn't start until initialisation has completed. Subsequent invocations will proceed immediately, as they will be &lt;code&gt;await&lt;/code&gt;ing an already-resolved promise.&lt;/p&gt;

&lt;p&gt;So far we've covered requirements 1–4 from our list above. But what about #5?&lt;/p&gt;

&lt;p&gt;What if an error occurs when loading the config data due to some transient issue and the &lt;code&gt;init&lt;/code&gt; function rejects? That would mean that all subsequent executions will keep failing and you'd have a dead Lambda function container hanging around until it's eventually garbage collected.&lt;/p&gt;

&lt;p&gt;Actually no! The Lambda runtime manages this case for you. If any errors occur in the initialisation code outside your handler, the function container is terminated and a new one is started up in a fresh state. If the transient issue has passed, your &lt;code&gt;init&lt;/code&gt; function will resolve successfully. 😃&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thanks to Tow Kowalski, Jeremy Daly and particularly Michael Hart whose suggestions in this &lt;a href="https://twitter.com/hichaelmart/status/1226968123863117824"&gt;Twitter thread&lt;/a&gt; prompted this tip.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;💌 &lt;em&gt;If you enjoyed this article, you can &lt;a href="https://winterwindsoftware.com/newsletter/"&gt;sign up to my newsletter&lt;/a&gt;. I send emails every weekday where I share my guides and deep dives on building serverless solutions on AWS with hundreds of developers and architects.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;strong&gt;&lt;a href="https://winterwindsoftware.com/fff-function-initialisation/"&gt;winterwindsoftware.com&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>lambda</category>
      <category>node</category>
    </item>
    <item>
      <title>Give developers their own AWS account</title>
      <dc:creator>Paul Swail</dc:creator>
      <pubDate>Fri, 07 Feb 2020 09:31:19 +0000</pubDate>
      <link>https://dev.to/paulswail/give-developers-their-own-aws-account-21f</link>
      <guid>https://dev.to/paulswail/give-developers-their-own-aws-account-21f</guid>
      <description>&lt;p&gt;If you’re in charge of a team of developers building a serverless application and your number one goal is to have them &lt;strong&gt;deliver quality software to users as fast as possible&lt;/strong&gt;, then you should do whatever’s in your power to get them their own individual AWS account.&lt;/p&gt;

&lt;p&gt;I’ve discussed approaches for &lt;a href="https://winterwindsoftware.com/managing-separate-projects-in-aws/"&gt;managing &lt;em&gt;shared&lt;/em&gt; accounts or projects&lt;/a&gt; in the past, but in this post I want to talk about sandboxed AWS accounts that are paid for by the company but are for use only by an individual developer. Here’s why I think they are a good idea…&lt;/p&gt;

&lt;h2&gt;
  
  
  Fully local development workflows are suboptimal or even impossible in serverless stacks
&lt;/h2&gt;

&lt;p&gt;In traditional server-based development projects, developers would typically run the full stack on their local development machine. Once a feature is ready and merged into the main branch, it would be deployed (either via a CI/CD process or manually by an engineer) to a shared environment for further testing.&lt;/p&gt;

&lt;p&gt;In serverless stacks, however, while local emulators do exist for some cloud-native services (e.g. &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html"&gt;DynamoDB Local&lt;/a&gt;), I almost always want to use the real cloud services, for a few reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It’s faster and less error-prone to set up consistently across the team (baseline cloud environments are more homogeneous than individual developer machines)&lt;/li&gt;
&lt;li&gt;It reduces integration bugs that “worked on my machine” (e.g. often around IAM permissions or infra config)&lt;/li&gt;
&lt;li&gt;I only need to configure a resource once (e.g. using CloudFormation YAML) rather than first manually creating and configuring a local emulator at dev time and having to separately configure the cloud equivalent once my feature is ready to integrate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Given that Lambda functions are probably the resource developers will iterate on most frequently, we don’t want one developer deploying changes over the top of another’s. You could introduce a naming convention to prevent such collisions, but that adds unnecessary complexity to your configuration management.&lt;/p&gt;

&lt;h2&gt;
  
  
  You give developers more exposure to infrastructure management
&lt;/h2&gt;

&lt;p&gt;One of the &lt;a href="https://winterwindsoftware.com/concerns-that-serverless-takes-away/"&gt;big benefits of serverless architectures&lt;/a&gt; is that in many (most?) cases you should no longer need an engineer 100% dedicated to managing and operating the infrastructure of your system. But a corollary of this is that your developers will need to improve their DevOps skills, in particular around Infrastructure-as-Code and configuration management. By entrusting each developer with their own cloud account, you automatically expose them to these concepts at an early stage from within the safety of their own sandbox.&lt;/p&gt;

&lt;p&gt;The good news is that this learning curve should be quite shallow as serverless cloud services are generally much simpler to configure than server-based systems that involve EC2 instances and VPCs.&lt;/p&gt;

&lt;h2&gt;
  
  
  You give less experienced developers more confidence to experiment
&lt;/h2&gt;

&lt;p&gt;Often a large part of the development process involves experimentation and trial and error. If a developer is new to software development in general or just new to serverless, then this is even more the case.&lt;/p&gt;

&lt;p&gt;I was working on a project for a client recently where the CTO wanted to jump in and fix a bug in an API to help out his team, who were busy working on other projects. He is a highly experienced developer and architect but was new to serverless. At the time, he didn’t have his own personal AWS development account set up. He was easily able to identify and make the necessary changes to the codebase to fix the bug, but couldn’t deploy and test his changes. There was a shared DEV account available, but it was being used as the backend for a mobile app due to be demoed to a client the following day, so he was nervous about breaking it. Basically, he needed a sandbox to safely experiment in before being confident enough to deploy to the shared account.&lt;/p&gt;

&lt;p&gt;A much more egregious example I witnessed was a large enterprise who used a single AWS account for a whole department of developers who were responsible for delivery of a range of products. This one account contained a mishmash of resources for different projects/products, personal resources for individual developers and resources for all shared environments (including production 😱). Being productive as a developer in those conditions was a real challenge, never mind all the security concerns!&lt;/p&gt;

&lt;h2&gt;
  
  
  Common objections
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Isn’t this a big admin overhead to have to set this up?
&lt;/h3&gt;

&lt;p&gt;There is a small overhead but this only needs to be done once whenever a new developer joins your team. Each developer can re-use the same personal AWS account for different projects. You can use &lt;a href="https://aws.amazon.com/organizations/"&gt;AWS Organizations&lt;/a&gt; to provision the account from a master account without needing to separately enter credit card details, etc. If you are doing this very frequently, you can create a script using the AWS CLI to automate the entire account provisioning process.&lt;/p&gt;

&lt;h3&gt;
  
  
  How will we control costs?
&lt;/h3&gt;

&lt;p&gt;I have 2 recommendations on this point:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Give developers read-only access to the Billing Dashboard for their own account (this isn’t enabled by default). Not only does this treat them like responsible adults who can manage their own money (!) but it also encourages them to be curious about the costs of the cloud services they are consuming.&lt;/li&gt;
&lt;li&gt;Developers aren’t always responsible adults! Be sure to set up a &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html"&gt;billing alert&lt;/a&gt; with a sensible threshold so a senior person can be notified if a rogue developer starts mining bitcoin (or more likely, accidentally triggers an infinitely recursive loop of Lambda invocations).&lt;/li&gt;
&lt;/ol&gt;
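&lt;p&gt;As an illustration, a billing alert like this can be declared in CloudFormation. The $50 threshold and the SNS topic reference below are assumptions; note that billing metrics are only published in the us-east-1 region:&lt;/p&gt;

```yaml
# CloudFormation fragment (deploy in us-east-1, where AWS publishes
# billing metrics). Threshold and topic name are illustrative.
DevAccountBillingAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Estimated monthly charges exceeded $50
    Namespace: AWS/Billing
    MetricName: EstimatedCharges
    Dimensions:
      - Name: Currency
        Value: USD
    Statistic: Maximum
    Period: 21600
    EvaluationPeriods: 1
    Threshold: 50
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - !Ref BillingAlertsTopic # SNS topic defined elsewhere
```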

&lt;p&gt;Another mitigation to the cost objection (that’s specific to fully serverless systems) is that serverless cloud resources have pay-per-use pricing. So you won’t be getting billed for an EC2 instance that a developer who infrequently helps out your team forgot to turn off. And you’ll often find that each developer’s usage of a service won’t exceed the free tier quota, so will be costing you next to nothing.&lt;/p&gt;

&lt;h3&gt;
  
  
  What about security?
&lt;/h3&gt;

&lt;p&gt;Developer accounts will be completely isolated from other AWS accounts within your organisation so cannot interfere with important resources. If you’re concerned about the actions developers take within their personal account, you can use AWS Organizations to set a &lt;a href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp.html"&gt;Service Control Policy&lt;/a&gt; on all child accounts to limit the specific AWS services they have access to.&lt;/p&gt;
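&lt;p&gt;For illustration, an allow-list SCP restricting developer accounts to a handful of serverless services might look something like this (the service list is an assumption to tailor to your own stack, and remember an allow-list SCP only takes restrictive effect once the default &lt;code&gt;FullAWSAccess&lt;/code&gt; policy is detached):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowServerlessServicesOnly",
      "Effect": "Allow",
      "Action": [
        "lambda:*",
        "apigateway:*",
        "dynamodb:*",
        "s3:*",
        "sqs:*",
        "sns:*",
        "logs:*",
        "cloudformation:*",
        "iam:*"
      ],
      "Resource": "*"
    }
  ]
}
```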

&lt;h3&gt;
  
  
  Our IT department simply won’t allow this
&lt;/h3&gt;

&lt;p&gt;Big enterprise red tape can be a real PITA for dev teams looking to ship quality software fast to their users. But it is what it is and if this is you, you have my sympathies. I don’t have a great recommendation for you here other than to use whatever influence you do have to lobby whoever is making this decision and educate them on the overall benefits of building serverless apps for your organisation as a whole.&lt;/p&gt;

&lt;p&gt;In your place of work, how are AWS accounts allocated? Do you have your own personal one? Let me know in the comments...&lt;/p&gt;

&lt;p&gt;💌 &lt;em&gt;If you enjoyed this article, you can &lt;a href="https://winterwindsoftware.com/newsletter/"&gt;sign up to my newsletter&lt;/a&gt;. I send emails a few times a week where I share my guides and deep dives on building serverless solutions on AWS with hundreds of developers and architects.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;strong&gt;&lt;a href="https://winterwindsoftware.com/give-developers-own-aws-account/"&gt;winterwindsoftware.com&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to access VPC and internet resources from Lambda without paying for a NAT Gateway</title>
      <dc:creator>Paul Swail</dc:creator>
      <pubDate>Fri, 29 Nov 2019 12:23:56 +0000</pubDate>
      <link>https://dev.to/paulswail/how-to-access-vpc-and-internet-resources-from-lambda-without-paying-for-a-nat-gateway-196o</link>
      <guid>https://dev.to/paulswail/how-to-access-vpc-and-internet-resources-from-lambda-without-paying-for-a-nat-gateway-196o</guid>
      <description>&lt;p&gt;AWS recommend you don't connect Lambda functions to a VPC unless absolutely necessary. This is solid advice because doing so brings several limitations that a standard function doesn’t suffer from.&lt;/p&gt;

&lt;p&gt;One of these limitations is that your Lambda function can no longer access the internet. What many people don’t realise is that communicating with other AWS resources inside your account from your function also requires internet access (think S3, SNS, SQS, etc). This makes many common VPC use cases tricky to implement with Lambda.&lt;/p&gt;

&lt;p&gt;Let’s take the example of a scheduled Lambda function that runs a daily report by performing a SQL query against an RDS database. Based on this query result, it then adds messages to an SQS queue for processing. VPC access is required to access the RDS database and internet access is required to post to SQS. How can you do both?&lt;/p&gt;

&lt;p&gt;The typically recommended solution is to &lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/" rel="noopener noreferrer"&gt;set up a NAT Gateway&lt;/a&gt;, which routes outbound traffic from your VPC-enabled Lambda in its private subnets through a public subnet that has an internet gateway attached.&lt;/p&gt;

&lt;p&gt;Eugh! 😩 The last thing I want to be doing is network configuration. Never mind the extra billing cost that I’ll incur since NAT Gateways are billed both &lt;a href="https://aws.amazon.com/vpc/pricing/" rel="noopener noreferrer"&gt;by the hour and per GB of data processed&lt;/a&gt;. Pretty far from the serverless way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using a VPC proxy Lambda function
&lt;/h2&gt;

&lt;p&gt;An alternative solution I’ve used when faced with this problem is to create what I call a “VPC proxy Lambda function”. Instead of having one Lambda function that does all your work, you have two. The first Lambda function, let’s call it &lt;code&gt;runDailyReport&lt;/code&gt; (per our earlier example), is the entrypoint. It's triggered by an event (e.g. a CloudWatch schedule rule) and it's NOT configured to run inside the VPC. Its job is to orchestrate all I/O calls that need to be performed.&lt;/p&gt;

&lt;p&gt;The second Lambda function is our VPC proxy, let’s call it &lt;code&gt;dbGetReportResults&lt;/code&gt;. It’s configured to run inside the VPC and its sole responsibility is to connect to the RDS cluster, perform a query and return the result. It has no triggers configured.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwinterwindsoftware.com%2Fimg%2Fblog-images%2Fvpc-proxy-lambda-function.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwinterwindsoftware.com%2Fimg%2Fblog-images%2Fvpc-proxy-lambda-function.png" alt="VPC Proxy Lambda Function Pattern"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The key thing here is that the &lt;code&gt;runDailyReport&lt;/code&gt; function uses the AWS Lambda API to synchronously &lt;a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html#invoke-property" rel="noopener noreferrer"&gt;invoke&lt;/a&gt; the &lt;code&gt;dbGetReportResults&lt;/code&gt; function, with &lt;code&gt;InvocationType: "RequestResponse"&lt;/code&gt;. This means that it will wait for the response to be returned before proceeding. Once it gets the result back, it can then parse it and post jobs to SQS.&lt;/p&gt;

&lt;p&gt;Despite executing inside the VPC, the &lt;code&gt;dbGetReportResults&lt;/code&gt; function is still accessible to functions outside the VPC because the calling function (&lt;code&gt;runDailyReport&lt;/code&gt;) interacts with the AWS Lambda service API, which, like all the other AWS service APIs, is internet facing. It does not need to connect directly to the underlying container inside the VPC where the target function is executed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations
&lt;/h2&gt;

&lt;p&gt;There are a few limitations to be aware of with this approach.&lt;/p&gt;

&lt;p&gt;Firstly, since you’re now using 2 Lambdas instead of 1, you will be paying for the execution time of both. While your VPC proxy function is executing and waiting on the database query to return, your entrypoint function will also still be executing (although it will be idle while waiting for the proxy function to return). This is why you might hear people saying that one Lambda function synchronously invoking another is an anti-pattern. I’d usually agree with that, but this use case is a valid exception IMO.&lt;/p&gt;

&lt;p&gt;Another minor limitation is that there will be a small additional latency as you need to account for the delay in invoking 2 functions in series (cold or warm start) instead of 1. This should not be an issue if your use case is not user facing.&lt;/p&gt;

&lt;p&gt;💌 &lt;strong&gt;&lt;em&gt;If you enjoyed this article, you can sign up &lt;a href="https://winterwindsoftware.com/newsletter/" rel="noopener noreferrer"&gt;to my weekly newsletter on building serverless apps in AWS&lt;/a&gt;.&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Originally published at &lt;strong&gt;&lt;a href="https://winterwindsoftware.com/lambda-vpc-internet-access-no-nat-gateway/" rel="noopener noreferrer"&gt;winterwindsoftware.com&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>lambda</category>
      <category>vpc</category>
    </item>
    <item>
      <title>Comparing multi and single table approaches to designing a DynamoDB data model</title>
      <dc:creator>Paul Swail</dc:creator>
      <pubDate>Fri, 22 Nov 2019 09:03:25 +0000</pubDate>
      <link>https://dev.to/paulswail/comparing-multi-and-single-table-approaches-to-designing-a-dynamodb-data-model-16dj</link>
      <guid>https://dev.to/paulswail/comparing-multi-and-single-table-approaches-to-designing-a-dynamodb-data-model-16dj</guid>
      <description>&lt;p&gt;DynamoDB is the predominant general purpose database in the AWS serverless ecosystem. Its low operational overhead, simple provisioning and configuration, streaming capability, pay-per-usage pricing and promise of near-infinite scaling make it a popular choice amongst developers building apps using Lambda and API Gateway as opposed to taking the more traditional RDBMS route.&lt;/p&gt;

&lt;p&gt;When it comes to designing your data model in DynamoDB, there are two distinct design approaches you can take: &lt;strong&gt;multi-table&lt;/strong&gt; or &lt;strong&gt;single-table&lt;/strong&gt;. In this article, I will explore how both design approaches can impact the Total Cost of Ownership of your application over the lifecycle of its delivery and hopefully help you decide which approach is right for your needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the key differences between each approach?
&lt;/h2&gt;

&lt;p&gt;Let’s start with an overview of what each involves:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Multi-table&lt;/strong&gt; — One table per entity type. Each item (row) maps to a single instance of that entity and attributes (columns) are consistent across every item. This is the way most people are used to thinking about data models and, in my anecdotal experience, the most common approach used.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Single-table&lt;/strong&gt; — One table serves the entire application or service and holds multiple types of entities within it. Each item has different attributes set on it depending on its entity type. I find this approach to be less common (at least in terms of articles and code examples on the internet) and it is definitely a harder concept for most newcomers to DynamoDB to grasp. But crucially, this approach is the one that the AWS DynamoDB team espouses (somewhat without qualification) in their &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-general-nosql-design.html" rel="noopener noreferrer"&gt;official docs&lt;/a&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;You should maintain as few tables as possible in a DynamoDB application. Most well designed applications require only &lt;em&gt;one&lt;/em&gt; table.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The primary benefits of single-table design are faster read and write performance at scale and lower cloud bill costs. At the core of its design pattern is the concept of “index overloading”: a single index (whether a Global Secondary or a Local Secondary Index) on your one table can be used to support several different query patterns.&lt;br&gt;
This enables SQL-like JOIN queries to be performed, whereby multiple related entities are fetched in a single round trip to the database. This pattern is not possible in a one-entity-per-table model.&lt;br&gt;
Secondly, since indexes are multi-purpose, fewer indexes are needed in total. This means there are fewer indexes to update whenever a write is performed, resulting in both faster writes and lower billing costs.&lt;/p&gt;
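&lt;p&gt;As a minimal sketch of what index overloading enables (the table name, key formats and entity types here are hypothetical), a customer profile item and its order items can share the same partition key value, so a single Query call returns the customer together with all of its orders:&lt;/p&gt;

```javascript
// Hypothetical single-table key scheme: a customer profile and its orders
// share the same partition key, so one Query fetches the whole aggregate.
function customerWithOrdersQuery(tableName, customerId) {
  return {
    TableName: tableName,
    KeyConditionExpression: 'pk = :pk',
    ExpressionAttributeValues: { ':pk': `CUSTOMER#${customerId}` },
  };
}

const params = customerWithOrdersQuery('my-app-table', '123');
// Pass params to DocumentClient.query(); the result set contains the
// profile item (sk = 'PROFILE') and every order item (sk = 'ORDER#...').
```

&lt;p&gt;A multi-table model would need one round trip per entity type to fetch the same data.&lt;/p&gt;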

&lt;p&gt;I appreciate this has been a very brief introduction to single-table design, so if you’re totally new to it and are still wondering  “how can you squeeze different entities into the same database table?”, please check out the links in the resources section below.&lt;/p&gt;

&lt;h2&gt;
  
  
  My experience with both approaches
&lt;/h2&gt;

&lt;p&gt;Up until mid-2019, I had only ever used a multi-table approach to data modelling in DynamoDB and more generally in NoSQL databases as a whole (I previously used MongoDB regularly). Since then, I’ve worked on several greenfield projects that use a single-table data model to underpin transaction-oriented apps.&lt;/p&gt;

&lt;p&gt;In the remaining sections, I’ll walk through each phase involved in a typical project delivery as it relates to your application’s database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Big design up front
&lt;/h2&gt;

&lt;p&gt;Before any database tables are provisioned or a single line of code is written, the first step is to design your data model. The official DynamoDB docs state the following general guideline for any type of NoSQL design:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;… you shouldn’t start designing your schema until you know the questions it will need to answer. Understanding the business problems and the application use cases up front is essential.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This second sentence struck me when I first read it. I work almost exclusively on agile-delivered projects where changes related to client feedback are the norm. Does this then rule out DynamoDB (and NoSQL in general) for me altogether on these projects?&lt;br&gt;
The short answer to this is “no” and there are strategies for managing changes (which I’ll get to later), but there’s no getting away from the fact that there is more &lt;a href="https://en.wikipedia.org/wiki/Big_Design_Up_Front" rel="noopener noreferrer"&gt;Big Design Up Front&lt;/a&gt; with DynamoDB versus using a SQL database. But the “serverlessness” benefits of DynamoDB over an RDBMS that I described in my opening paragraph above outweigh the impact of this upfront design effort IMHO.&lt;/p&gt;

&lt;p&gt;In terms of tools, I use a spreadsheet to define my design and have seen many DynamoDB experts doing the same. AWS have recently released a new tool, &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/workbench.html" rel="noopener noreferrer"&gt;DynamoDB NoSQL Workbench&lt;/a&gt;, which as of this writing is in early preview but will hopefully bring a bit more structure to the data modelling design process.&lt;/p&gt;

&lt;h2&gt;
  
  
  The design process
&lt;/h2&gt;

&lt;p&gt;So what is the process for creating your data model? Jeremy Daly has a great &lt;a href="https://www.jeremydaly.com/how-to-switch-from-rdbms-to-dynamodb-in-20-easy-steps/" rel="noopener noreferrer"&gt;list of 20 steps&lt;/a&gt; for designing a DynamoDB model using a single-table approach that I recommend you check out as it’s a quick read.&lt;br&gt;
Steps 11–14 in particular should give you a flavour of the level of rigour required:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwinterwindsoftware.com%2Fimg%2Fblog-images%2Fddb-design-steps.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwinterwindsoftware.com%2Fimg%2Fblog-images%2Fddb-design-steps.png" alt="DynamoDB single-table design steps"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Deciding on the composition of your index fields is core to the whole design process and will involve many iterations. You need to consider the entirety of your access patterns across all entities in order to come up with your final design.&lt;/p&gt;

&lt;p&gt;The main schema difference you will see between single and multi-table models is that single-table will have generically named attributes that are used to form the table’s partition and sort key. This is required because different entity types will likely have differently named primary key fields. A common convention is to use attributes named &lt;code&gt;pk&lt;/code&gt; and &lt;code&gt;sk&lt;/code&gt; that map to the table’s partition and sort keys respectively. Similar generically named attributes may be used for composite keys that make up GSIs or LSIs.&lt;/p&gt;
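&lt;p&gt;To make this concrete, here’s a sketch (with hypothetical entity types and key formats) of how two differently shaped entities might map onto the same generic &lt;code&gt;pk&lt;/code&gt; and &lt;code&gt;sk&lt;/code&gt; attributes:&lt;/p&gt;

```javascript
// Two entity types in one table. The generic pk/sk attributes carry
// prefixed values, while each item keeps its own domain attributes.
const customerItem = {
  pk: 'CUSTOMER#123',
  sk: 'PROFILE',
  email: 'jane@example.com',
  name: 'Jane',
};

const orderItem = {
  pk: 'CUSTOMER#123',          // same partition as the owning customer
  sk: 'ORDER#2019-08-01#9876', // static prefix plus fields to sort on
  orderId: '9876',
  total: 42.5,
};
```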

&lt;h2&gt;
  
  
  Provisioning and configuration management
&lt;/h2&gt;

&lt;p&gt;Now that we have our data model designed, it’s time to provision our tables. This is probably the easiest step of the whole development process.&lt;br&gt;
DynamoDB has good CloudFormation support which makes Infrastructure-as-Code a breeze. With a few lines of YAML and a CLI deploy command you can quickly provision your DynamoDB tables and indexes along with associated IAM access control privileges in less than a minute. I use the &lt;a href="https://serverless.com/framework" rel="noopener noreferrer"&gt;Serverless Framework&lt;/a&gt; which allows raw CloudFormation to be embedded in the &lt;code&gt;resources&lt;/code&gt; section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwinterwindsoftware.com%2Fimg%2Fblog-images%2Fdynamodb-cloudformation-yml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwinterwindsoftware.com%2Fimg%2Fblog-images%2Fdynamodb-cloudformation-yml.png" alt="DynamoDB CloudFormation config"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This area is one where the single-table approach wins out in terms of less configuration to manage and faster provisioning — I only need to define one table and pass its name into my Lambda functions as an environment variable. In the multi-table approach, I have config and environment variables for each individual table. A minor benefit in the grand scheme of things, but still nice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing data access in the codebase
&lt;/h2&gt;

&lt;p&gt;Your database is now deployed and it’s time to start talking to it from your application.&lt;br&gt;
Chances are you will have domain entity objects that you pass around in your code (e.g. in API request/response payloads or SNS / SQS messages). If you need to persist these entities to your database, you can use one of the higher-level AWS DynamoDB SDKs (such as the &lt;a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/DynamoDB/DocumentClient.html" rel="noopener noreferrer"&gt;DocumentClient for Node.js&lt;/a&gt;) to do so. But there are a few key differences between multi and single table designs here…&lt;/p&gt;

&lt;h3&gt;
  
  
  Writing objects to the database
&lt;/h3&gt;

&lt;p&gt;In a multi-table design, you can often just write your in-memory domain object directly to the database as-is without any mapping. Fields on your object will become attributes in your DynamoDB item. Occasionally you may need to create a concatenated composite field that’s used in an index in order to support a particular filtering or sorting requirement.&lt;br&gt;
In a single-table design however, there will always be some mapping you need to do at write-time. Specifically, you will need to add 2 new fields &lt;code&gt;pk&lt;/code&gt; and &lt;code&gt;sk&lt;/code&gt; to your domain object before persisting it to DynamoDB. If you’re using other generic composite index fields, then you’ll also need to do the same for each of them. The values of these fields need to match the formats defined in your data model spreadsheet. I find that I usually need to concatenate a static prefix (that uniquely identifies the entity type and prevents collisions) to one or more fields from my domain object that I need to filter or sort on.&lt;/p&gt;
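&lt;p&gt;A minimal sketch of this write-time mapping (the prefixes, field names and GSI attributes are hypothetical) might look like this:&lt;/p&gt;

```javascript
// Map an in-memory order object to a single-table item by adding the
// generic key attributes before persisting it with DocumentClient.put().
function toOrderItem(order) {
  return {
    ...order,
    pk: `CUSTOMER#${order.customerId}`,
    sk: `ORDER#${order.createdAt}#${order.orderId}`,
    // Composite fields feeding a GSI that supports an "orders by status"
    // query pattern.
    gsi1pk: `ORDERSTATUS#${order.status}`,
    gsi1sk: order.createdAt,
  };
}
```

&lt;p&gt;The static &lt;code&gt;CUSTOMER#&lt;/code&gt; and &lt;code&gt;ORDER#&lt;/code&gt; prefixes are what prevent key collisions between entity types.&lt;/p&gt;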

&lt;p&gt;Partial item updates are more complex still. Given that in a single-table design there is data duplication &lt;strong&gt;within each item&lt;/strong&gt;, if you are using the DynamoDB UpdateItem API to update a single field, you need to check whether that field is also used within a composite indexed field, and if so, also update the value of the composite field. I’ve forgotten about this several times and it can be quite difficult to remedy.&lt;/p&gt;
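&lt;p&gt;As a sketch of the remedy (field names again hypothetical): when updating a field that also feeds a composite indexed field, build the UpdateItem parameters so that both are written together:&lt;/p&gt;

```javascript
// Updating an order's status must also refresh the composite gsi1pk field
// that duplicates it, otherwise the index silently goes stale.
function buildStatusUpdate(tableName, key, newStatus) {
  return {
    TableName: tableName,
    Key: key,
    UpdateExpression: 'SET #status = :status, gsi1pk = :gsi1pk',
    ExpressionAttributeNames: { '#status': 'status' },
    ExpressionAttributeValues: {
      ':status': newStatus,
      ':gsi1pk': `ORDERSTATUS#${newStatus}`,
    },
  };
}
```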

&lt;h3&gt;
  
  
  Reading objects from the database
&lt;/h3&gt;

&lt;p&gt;When you fetch items back from DynamoDB (via GetItem or Query API calls), you will almost always want to strip off the composite indexed fields before, say, returning the entity to the client who is calling your API. Unfortunately, the DynamoDB API calls do not allow you to blacklist attributes that you don’t want to return. So instead, you either have to whitelist all the other fields that you do wish to return (by using a &lt;code&gt;ProjectionExpression&lt;/code&gt;) or do the blacklisting in your application code after the query returns. I usually take the latter option as it’s less code to maintain, despite being slightly less performant (as more data is returned than I need).&lt;/p&gt;
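&lt;p&gt;The post-query blacklisting can be as simple as the following sketch (assuming the generic index attributes are named &lt;code&gt;pk&lt;/code&gt;, &lt;code&gt;sk&lt;/code&gt;, &lt;code&gt;gsi1pk&lt;/code&gt; and &lt;code&gt;gsi1sk&lt;/code&gt;):&lt;/p&gt;

```javascript
// Strip the generic index attributes from a fetched item before returning
// the domain entity to API clients.
function fromDynamoItem(item) {
  const { pk, sk, gsi1pk, gsi1sk, ...entity } = item;
  return entity;
}
```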

&lt;h3&gt;
  
  
  Strategies for controlling code complexity in a single-table design
&lt;/h3&gt;

&lt;p&gt;For both reads and writes, you will find yourself doing a lot of string concatenations using the same prefixes and separator characters. This can be quite error prone. For this reason, I recommend you keep all data-access code for each entity type within a single module/file so you can quickly reference how an entity was created when you are writing a function to query or update it.&lt;/p&gt;

&lt;p&gt;In serverless apps, I usually structure my code such that a Lambda handler hands off to a model / service module which is then responsible for doing data access as well as talking to any other downstream services (SNS, etc.). If your data access code becomes sufficiently complex (which it easily can once composite fields are introduced), there is a case for using the &lt;a href="https://martinfowler.com/eaaCatalog/repository.html" rel="noopener noreferrer"&gt;repository pattern&lt;/a&gt;, whereby you create modules whose sole responsibility is to perform DynamoDB operations for a particular entity type.&lt;/p&gt;

&lt;p&gt;Another recommendation for increasing the maintainability of your data access code is to keep your data model design spreadsheet up-to-date and have it reviewed alongside the code as part of your pull request process. I’ve found it helpful to have an “Implementation Status” flag column or colour code in my design spreadsheet as part of each query pattern showing whether it’s been implemented yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Schema migrations
&lt;/h2&gt;

&lt;p&gt;This is the scenario the official AWS docs warned you about. Those business use cases that you fully understood at the project outset have changed!  You need to make changes to your existing access patterns — maybe change a sort order or filter on a different field.&lt;br&gt;
Broadly, the solution to this will involve either or both of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating a new GSI/LSI index pointing at the new fields (optionally also dropping an existing index that’s no longer needed)&lt;/li&gt;
&lt;li&gt;Writing a migration script that scans a table and performs per-item updates, such as amending the value of a composite indexed field.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the new indexes/composite indexed fields are in place, then the application code updates can be deployed. You may want to then run a cleanup script to remove the old composite fields / indexes.&lt;/p&gt;

&lt;p&gt;Such a migration script can be difficult to manage, especially in a single-table design, which is highly dependent on composite indexed fields.&lt;br&gt;
Full table scan operations can take a long time to complete, so while the script is running, your database will be in an inconsistent state with some items patched and some not. You may need to write your script such that it can operate on smaller batches/partitions and ensure that it’s idempotent (e.g. by storing state somewhere to show what migrations have been applied or what items were already patched). Also, I could not find any well-known tools that currently help with this in the way that the likes of &lt;a href="https://edgeguides.rubyonrails.org/active_record_migrations.html" rel="noopener noreferrer"&gt;Rails ActiveRecord Migrations&lt;/a&gt; work with SQL databases.&lt;/p&gt;
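&lt;p&gt;To illustrate the idempotency point, here’s a sketch of the per-item patch step such a migration script might apply (hypothetical fields; the scan/update plumbing is omitted):&lt;/p&gt;

```javascript
// Recompute an item's composite indexed field; return null when the item
// is already patched so that re-runs of the script skip it (idempotent).
function patchOrderItem(item) {
  const expected = `ORDERSTATUS#${item.status}`;
  if (item.gsi1pk === expected) {
    return null; // already migrated — nothing to write
  }
  return { ...item, gsi1pk: expected };
}
```

&lt;p&gt;The surrounding script would Scan in pages, call this on each item, and only issue writes for non-null results.&lt;/p&gt;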

&lt;p&gt;I haven’t yet hit this issue in a post go-live production environment so I haven’t explored in-depth what solutions are currently available to this. If you have a good strategy for managing schema migrations, then please let me know in the comments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating with other data stores
&lt;/h2&gt;

&lt;p&gt;Another concern that affects single-table designs is that some managed services with built-in integrations for exporting data out of DynamoDB (for analytics) expect each table to map to a single domain entity. Exporting a single table that contains entities of varying shapes just won’t work without some custom-built intermediate step to perform a transform. An example of this is the &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/RedshiftforDynamoDB.html" rel="noopener noreferrer"&gt;DynamoDB to Redshift integration&lt;/a&gt;. This might be something you need to consider when choosing your design approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  When should you use either approach?
&lt;/h2&gt;

&lt;p&gt;We’ve covered the impact of multi-table vs single-table design approach on each stage of the delivery lifecycle from design right through to post-go-live change management.&lt;br&gt;
The big question that now remains is when should you choose one approach over the other?&lt;/p&gt;

&lt;p&gt;The overly simplistic AWS official line of “Most well designed applications require only one table” doesn’t do the nuance of this decision justice IMHO.&lt;br&gt;
It implies that if you don’t use the single-table approach, your application is not well designed. But their definition of “well designed” only considers performance, scaling and billing costs, and neglects the other considerations that go into the Total Cost of Ownership of an application.&lt;/p&gt;

&lt;p&gt;So for me, it comes down to answering this question — what do you want to optimise for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;time to market and flexibility of requirements; or:&lt;/li&gt;
&lt;li&gt;performance, scalability and efficient billing cost?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One of the core tenets of the serverless movement is that it allows developers to focus more on the business problem at hand and much less on the technical and operational concerns that they’ve had to spend time on in the past working in server-based architectures. Another core tenet is near-unlimited scalability. Often these two work in tandem, but in this debate they pull against each other.&lt;/p&gt;

&lt;p&gt;The multi-table approach is an easier on-ramp for developers coming from an RDBMS background (which is the majority of developers).&lt;br&gt;
Adding the single-table design approach on top of that cranks up the steepness of the learning curve. Add in the rigidity enforced by designing overloaded indexes and the overhead of any migrations that need to be performed, and I think it’s fair to say that most teams would ship an app faster using the multi-table approach.&lt;/p&gt;

&lt;p&gt;With a multi-table model, I would argue that your team will be less dependent on the presence of a resident “DynamoDB modelling expert” in order to implement or approve any changes to the application’s data access.&lt;br&gt;
I’m sure most of you have experienced a part of an application architecture or codebase that you’re afraid to touch because you don’t really understand it and it seems a bit like magic.&lt;/p&gt;

&lt;p&gt;All that said, once you do get the hang of the single-table approach and learn new strategies for creating composite indexes to support new query patterns, it’s undoubtedly very powerful.&lt;br&gt;
Your code only needs to make one fast database round-trip to fetch a batch of related entities. And you get that warm fuzzy feeling of confidence that your app performance and billing costs are as optimised as they can be. (But remember the cost of your engineer’s time usually trumps the cost of your cloud service bill).&lt;/p&gt;

&lt;h2&gt;
  
  
  Learn more
&lt;/h2&gt;

&lt;p&gt;If you’d like to learn more about data modelling in DynamoDB, here’s a list of resources that have helped me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=HaEPXoXVf2k&amp;amp;feature=youtu.be" rel="noopener noreferrer"&gt;AWS re:Invent 2018: Amazon DynamoDB Deep Dive: Advanced Design Patterns for DynamoDB (DAT401) - YouTube&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.trek10.com/blog/dynamodb-single-table-relational-modeling/" rel="noopener noreferrer"&gt;From relational DB to single DynamoDB table: a step-by-step exploration — Forrest Brazeal&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-adjacency-graphs.html#bp-adjacency-lists" rel="noopener noreferrer"&gt;Best Practices for Managing Many-to-Many Relationships - Amazon DynamoDB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=KlhS7hSnFYs" rel="noopener noreferrer"&gt;Build with DynamoDB - S1 E3 – NoSQL Data Modeling with Amazon DynamoDB - YouTube&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.reddit.com/r/aws/comments/aimmg7/how_many_people_are_doing_true_single_table/" rel="noopener noreferrer"&gt;Reddit discussion: how many people are doing true “single table” dynamo DB (vs multiple tables)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.jeremydaly.com/how-to-switch-from-rdbms-to-dynamodb-in-20-easy-steps/" rel="noopener noreferrer"&gt;How to switch from RDBMS to DynamoDB in 20 easy steps — Jeremy Daly&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://acloud.guru/series/serverlessconf-nyc-2019/view/dynamodb-best-practices?sc_channel=sm&amp;amp;sc_campaign=Serverless,DB_Blog&amp;amp;sc_publisher=TWITTER&amp;amp;sc_country=DynamoDB&amp;amp;sc_geo=GLOBAL&amp;amp;sc_outcome=awareness&amp;amp;trk=ddbalexdebriepresenting_111319_TWITTER&amp;amp;sc_category=Amazon%20DynamoDB&amp;amp;linkId=76993308" rel="noopener noreferrer"&gt;Using (and Ignoring) DynamoDB Best Practices with Serverless — Alex DeBrie&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.dynamodbbook.com" rel="noopener noreferrer"&gt;DynamoDB Book— Model DynamoDB the Right Way — AlexDeBrie&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Thanks to &lt;a href="https://twitter.com/darrengibney" rel="noopener noreferrer"&gt;Darren Gibney&lt;/a&gt; for providing review on this post.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;💌 &lt;strong&gt;&lt;em&gt;If you enjoyed this article, you can sign up &lt;a href="https://winterwindsoftware.com/newsletter/" rel="noopener noreferrer"&gt;to my weekly newsletter on building serverless apps in AWS&lt;/a&gt;.&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Originally published at &lt;strong&gt;&lt;a href="https://winterwindsoftware.com/dynamodb-modelling-single-vs-multi-table/" rel="noopener noreferrer"&gt;winterwindsoftware.com&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>dynamodb</category>
    </item>
    <item>
      <title>How to build a serverless photo upload service with API Gateway</title>
      <dc:creator>Paul Swail</dc:creator>
      <pubDate>Fri, 25 Oct 2019 15:17:42 +0000</pubDate>
      <link>https://dev.to/paulswail/how-to-build-a-serverless-photo-upload-service-with-api-gateway-40el</link>
      <guid>https://dev.to/paulswail/how-to-build-a-serverless-photo-upload-service-with-api-gateway-40el</guid>
      <description>&lt;p&gt;So you’re building a REST API and you need to add support for uploading files from a web or mobile app. You also need to add a reference to these uploaded files against entities in your database, along with metadata supplied by the client.&lt;/p&gt;

&lt;p&gt;In this article, I'll show you how to do this using AWS API Gateway, Lambda and S3. We'll use the example of an event management web app where attendees can login and upload photos associated with a specific event along with a title and description. We will use S3 to store the photos and an API Gateway API to handle the upload request. The requirements are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User can login to the app and view a list of photos for a specific event, along with each photo's metadata (date, title, description, etc).&lt;/li&gt;
&lt;li&gt;User can only upload photos for the event if they are registered as having attended that event.&lt;/li&gt;
&lt;li&gt;Use Infrastructure-as-Code for all cloud resources to make it easy to roll this out to multiple environments. (No using the AWS Console for mutable operations here 🚫🤠)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Considering implementation options
&lt;/h2&gt;

&lt;p&gt;Having built similar functionality in the past using non-serverless technologies (e.g. in Express.js), my initial approach was to investigate how to use a Lambda-backed API Gateway endpoint that would handle everything: authentication, authorization, file upload and finally writing the S3 location and metadata to the database.&lt;br&gt;
While this approach is valid and achievable, it does have a few limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need to write code inside your Lambda to manage the multipart file upload and the edge cases around this, whereas the existing S3 SDKs are already optimized for this.&lt;/li&gt;
&lt;li&gt;Lambda pricing is duration-based so for larger files your function will take longer to complete, costing you more.&lt;/li&gt;
&lt;li&gt;API Gateway has a &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html" rel="noopener noreferrer"&gt;payload size hard limit of 10MB&lt;/a&gt;. Contrast that to the &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/UploadingObjects.html" rel="noopener noreferrer"&gt;S3 file size limit of 5GB&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Using S3 presigned URLs for upload
&lt;/h2&gt;

&lt;p&gt;After further research, I found a better solution involving &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html" rel="noopener noreferrer"&gt;uploading objects to S3 using presigned URLs&lt;/a&gt; as a means of both providing a pre-upload authorization check and also pre-tagging the uploaded photo with structured metadata.&lt;/p&gt;

&lt;p&gt;The diagram below shows the request flow from a web app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwinterwindsoftware.com%2Fimg%2Fblog-images%2Fapigateway-photo-uploader.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwinterwindsoftware.com%2Fimg%2Fblog-images%2Fapigateway-photo-uploader.png" alt="API Gateway Photo Uploader API"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The main thing to notice is that from the web client’s point of view, it’s a 2-step process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Initiate the upload request, sending metadata related to the photo (e.g. eventId, title, description, etc). The API then does an auth check, executes business logic (e.g. restricting access only to users who have attended the event) and finally generates and responds with a secure presigned URL.&lt;/li&gt;
&lt;li&gt;Upload the file itself using the presigned URL.&lt;/li&gt;
&lt;/ol&gt;
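&lt;p&gt;From the web client’s side, the two steps above sketch out as follows (the endpoint path and the &lt;code&gt;s3PutObjectUrl&lt;/code&gt; response field are from the examples in this article; the helper names are hypothetical):&lt;/p&gt;

```javascript
// Step 1 request descriptor, kept as a pure function so it's easy to test.
function buildInitiateUploadRequest(apiBaseUrl, idToken, eventId, metadata) {
  return {
    url: `${apiBaseUrl}/events/${eventId}/photos/initiate-upload`,
    options: {
      method: 'POST',
      headers: { Authorization: idToken, 'Content-Type': 'application/json' },
      body: JSON.stringify(metadata),
    },
  };
}

// Full client flow: get the presigned URL, then PUT the file straight to S3.
async function uploadPhoto(apiBaseUrl, idToken, eventId, file, metadata) {
  const { url, options } = buildInitiateUploadRequest(
    apiBaseUrl, idToken, eventId, { ...metadata, contentType: file.type });
  const response = await fetch(url, options);
  const { s3PutObjectUrl } = await response.json();

  await fetch(s3PutObjectUrl, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
  });
}
```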

&lt;p&gt;I’m using Cognito as my user store here but you could easily swap this out for a custom &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html" rel="noopener noreferrer"&gt;Lambda Authorizer&lt;/a&gt; if your API uses a different auth mechanism.&lt;/p&gt;

&lt;p&gt;Let's dive in...&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1: Create the S3 bucket
&lt;/h2&gt;

&lt;p&gt;I use the &lt;a href="https://serverless.com/framework" rel="noopener noreferrer"&gt;Serverless Framework&lt;/a&gt; to manage configuration and deployment of all my cloud resources. For this app, I use 2 separate "services" (or stacks), that can be independently deployed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;infra&lt;/code&gt; service: this contains the S3 bucket, CloudFront distribution, DynamoDB table and Cognito User Pool resources.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;photos-api&lt;/code&gt; service: this contains the API Gateway and Lambda functions.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can view the full configuration of each stack in the &lt;a href="https://github.com/WinterWindSoftware/sls-photos-upload-service" rel="noopener noreferrer"&gt;Github repo&lt;/a&gt;, but we’ll cover the key points below.&lt;/p&gt;

&lt;p&gt;The S3 bucket is defined as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;PhotosBucket&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::S3::Bucket&lt;/span&gt;
        &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;BucketName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s"&gt;‘${self:custom.photosBucketName}’&lt;/span&gt;
            &lt;span class="na"&gt;AccessControl&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Private&lt;/span&gt;
            &lt;span class="na"&gt;CorsConfiguration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;CorsRules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt;   &lt;span class="na"&gt;AllowedHeaders&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;‘*’&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
                    &lt;span class="na"&gt;AllowedMethods&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;‘PUT’&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
                    &lt;span class="na"&gt;AllowedOrigins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;‘*’&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The CORS configuration is important here as without it your web client won’t be able to perform the PUT request after acquiring the signed URL.&lt;br&gt;
I’m also using CloudFront as the CDN in order to minimize latency for users downloading the photos. You can view the config for the CloudFront distribution &lt;a href="https://github.com/WinterWindSoftware/sls-photos-upload-service/blob/master/services/infra/resources/s3-cloudfront-resources.yml#L34" rel="noopener noreferrer"&gt;here&lt;/a&gt;. However, this is an optional component and if you’d rather clients read photos directly from S3 then you can change the &lt;code&gt;AccessControl&lt;/code&gt; property above to be &lt;code&gt;PublicRead&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 2: Create "Initiate Upload" API Gateway endpoint
&lt;/h2&gt;

&lt;p&gt;Our next step is to add a new API path that the client can call to request the signed URL. Requests to this will look like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POST /events/{eventId}/photos/initiate-upload
{
    "title": "Keynote Speech",
    "description": "Steve walking out on stage",
    "contentType": "image/png"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Responses will contain an object with a single &lt;code&gt;s3PutObjectUrl&lt;/code&gt; field that the client can use to upload to S3. This URL looks like so:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;https://s3.eu-west-1.amazonaws.com/eventsapp-photos-dev.sampleapps.winterwindsoftware.com/uploads/event_1234/1d80868b-b05b-4ac7-ae52-bdb2dfb9b637.png?AWSAccessKeyId=XXXXXXXXXXXXXXX&amp;amp;Cache-Control=max-age%3D31557600&amp;amp;Content-Type=image%2Fpng&amp;amp;Expires=1571396945&amp;amp;Signature=F5eRZQOgJyxSdsAS9ukeMoFGPEA%3D&amp;amp;x-amz-meta-contenttype=image%2Fpng&amp;amp;x-amz-meta-description=Steve%20walking%20out%20on%20stage&amp;amp;x-amz-meta-eventid=1234&amp;amp;x-amz-meta-photoid=1d80868b-b05b-4ac7-ae52-bdb2dfb9b637&amp;amp;x-amz-meta-title=Keynote%20Speech&amp;amp;x-amz-security-token=XXXXXXXXXX&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Notice in particular these fields embedded in the query string:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;x-amz-meta-XXX&lt;/code&gt; — These fields contain the metadata values that our &lt;code&gt;initiateUpload&lt;/code&gt; Lambda function will set.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;x-amz-security-token&lt;/code&gt; — These fields contain the temporary security token used for authenticating with S3.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Signature&lt;/code&gt; — This ensures that the PUT request cannot be altered by the client (e.g. by changing metadata values).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following extract from &lt;code&gt;serverless.yml&lt;/code&gt; shows the function configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# serverless.yml&lt;/span&gt;
&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eventsapp-photos-api&lt;/span&gt;
&lt;span class="s"&gt;…&lt;/span&gt;
&lt;span class="na"&gt;custom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;appName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eventsapp&lt;/span&gt;
    &lt;span class="na"&gt;infraStack&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${self:custom.appName}-infra-${self:provider.stage}&lt;/span&gt;
    &lt;span class="na"&gt;awsAccountId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${cf:${self:custom.infraStack}.AWSAccountId}&lt;/span&gt;
    &lt;span class="na"&gt;apiAuthorizer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;arn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:cognito-idp:${self:provider.region}:${self:custom.awsAccountId}:userpool/${cf:${self:custom.infraStack}.UserPoolId}&lt;/span&gt;
    &lt;span class="na"&gt;corsConfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

&lt;span class="na"&gt;functions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;…&lt;/span&gt;
    &lt;span class="s"&gt;httpInitiateUpload&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;src/http/initiate-upload.handler&lt;/span&gt;
        &lt;span class="na"&gt;iamRoleStatements&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt;   &lt;span class="na"&gt;Effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Allow&lt;/span&gt;
            &lt;span class="na"&gt;Action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;s3:PutObject&lt;/span&gt;
            &lt;span class="na"&gt;Resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:s3:::${cf:${self:custom.infraStack}.PhotosBucket}*&lt;/span&gt;
        &lt;span class="na"&gt;events&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;events/{eventId}/photos/initiate-upload&lt;/span&gt;
            &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;post&lt;/span&gt;
            &lt;span class="na"&gt;authorizer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${self:custom.apiAuthorizer}&lt;/span&gt;
            &lt;span class="na"&gt;cors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${self:custom.corsConfig}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few things to note here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;httpInitiateUpload&lt;/code&gt; Lambda function will handle POST requests to the specified path.&lt;/li&gt;
&lt;li&gt;The Cognito user pool (output from the &lt;code&gt;infra&lt;/code&gt; stack) is referenced in the function’s &lt;code&gt;authorizer&lt;/code&gt; property. This ensures API Gateway rejects any request without a valid token in its &lt;code&gt;Authorization&lt;/code&gt; HTTP header.&lt;/li&gt;
&lt;li&gt;CORS is enabled for all API endpoints.&lt;/li&gt;
&lt;li&gt;Finally, the &lt;code&gt;iamRoleStatements&lt;/code&gt; property creates an IAM role that this function will run as. This role allows &lt;code&gt;PutObject&lt;/code&gt; actions against the S3 photos bucket. It is especially important that this permission set follows the &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege" rel="noopener noreferrer"&gt;least privilege principle&lt;/a&gt;, because the signed URL returned to the client contains a temporary access token that grants the token holder all the permissions of the IAM role that generated it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let's look at the handler code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;S3&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;aws-sdk/clients/s3&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;uuid&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;uuid/v4&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;InitiateEventPhotoUploadResponse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;PhotoMetadata&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@common/schemas/photos-api&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;isValidImageContentType&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;getSupportedContentTypes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;getFileSuffixForContentType&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@svc-utils/image-mime-types&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;s3&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;s3Config&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@svc-config&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;wrap&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@common/middleware/apigw&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;StatusCodeError&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@common/utils/errors&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;s3&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;S3&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;wrap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Read metadata from path/body and validate&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;eventId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pathParameters&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eventId&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;{}&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="na"&gt;photoMetadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PhotoMetadata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;contentType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;contentType&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;description&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nf"&gt;isValidImageContentType&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;photoMetadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;contentType&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;StatusCodeError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`Invalid contentType for image. Valid values are: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nf"&gt;getSupportedContentTypes&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;,&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="c1"&gt;// TODO: Add any further business logic validation here (e.g. that current user has write access to eventId)&lt;/span&gt;

  &lt;span class="c1"&gt;// Create the PutObjectRequest that will be embedded in the signed URL&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;photoId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;uuid&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="na"&gt;req&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;S3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Types&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PutObjectRequest&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;Bucket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;s3Config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;photosBucket&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;Key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`uploads/event_&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;eventId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;photoId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nf"&gt;getFileSuffixForContentType&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;photoMetadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;contentType&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt; &lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;ContentType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;photoMetadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;contentType&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;CacheControl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;max-age=31557600&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// instructs CloudFront to cache for 1 year&lt;/span&gt;
    &lt;span class="c1"&gt;// Set Metadata fields to be retrieved post-upload and stored in DynamoDB&lt;/span&gt;
    &lt;span class="na"&gt;Metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="p"&gt;...(&lt;/span&gt;&lt;span class="nx"&gt;photoMetadata&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="nx"&gt;photoId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;eventId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="c1"&gt;// Get the signed URL from S3 and return to client&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;s3PutObjectUrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getSignedUrlPromise&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;putObject&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="na"&gt;result&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;InitiateEventPhotoUploadResponse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;photoId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;s3PutObjectUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;201&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
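
&lt;p&gt;The handler imports its validation helpers from &lt;code&gt;@svc-utils/image-mime-types&lt;/code&gt;. The real module lives in the linked repo; here's a minimal sketch of what those helpers might look like (the set of supported types shown is an assumption for illustration):&lt;/p&gt;

```typescript
// Sketch of the '@svc-utils/image-mime-types' helpers used by the handler.
// The supported content types below are illustrative assumptions.
const SUPPORTED_IMAGE_TYPES: Record<string, string> = {
  'image/jpeg': 'jpg',
  'image/png': 'png',
  'image/gif': 'gif',
};

// List of content types the API accepts (used in the 400 error message)
const getSupportedContentTypes = (): string[] => Object.keys(SUPPORTED_IMAGE_TYPES);

// Guard used before signing: reject anything that isn't a known image type
const isValidImageContentType = (contentType?: string): boolean =>
  !!contentType && contentType in SUPPORTED_IMAGE_TYPES;

// Maps a content type to the file suffix used in the S3 object key
const getFileSuffixForContentType = (contentType: string): string | undefined =>
  SUPPORTED_IMAGE_TYPES[contentType];
```

&lt;p&gt;Validating the content type up front matters here because it's baked into both the S3 object key's suffix and the signed &lt;code&gt;ContentType&lt;/code&gt; field.&lt;/p&gt;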



&lt;p&gt;The call to &lt;code&gt;s3.getSignedUrlPromise&lt;/code&gt; is the main line of interest here: it serializes a &lt;code&gt;PutObject&lt;/code&gt; request into a signed URL.&lt;/p&gt;

&lt;p&gt;I'm using a &lt;a href="https://github.com/WinterWindSoftware/sls-photos-upload-service/blob/master/services/common/middleware/apigw.ts" rel="noopener noreferrer"&gt;&lt;code&gt;wrap&lt;/code&gt;&lt;/a&gt; middleware function to handle cross-cutting API concerns such as adding CORS headers and logging uncaught errors.&lt;/p&gt;
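
&lt;p&gt;If you're curious what such a wrapper involves, here's a minimal sketch of the idea — catch errors, map a &lt;code&gt;StatusCodeError&lt;/code&gt; to its status code, and attach CORS headers to every response. This is an illustration, not the actual implementation from the repo:&lt;/p&gt;

```typescript
// Minimal sketch of a `wrap` middleware for API Gateway handlers.
// Illustrative only — the real implementation is linked above.
class StatusCodeError extends Error {
  constructor(public statusCode: number, message: string) {
    super(message);
  }
}

type ApiGatewayResult = { statusCode: number; headers?: Record<string, string>; body: string };
type Handler = (event: any) => Promise<ApiGatewayResult>;

const CORS_HEADERS = { 'Access-Control-Allow-Origin': '*' };

const wrap = (handler: Handler): Handler => async (event) => {
  try {
    // Happy path: merge CORS headers into whatever the handler returned
    const result = await handler(event);
    return { ...result, headers: { ...CORS_HEADERS, ...result.headers } };
  } catch (error) {
    // Known errors map to their status code; anything else becomes a logged 500
    const statusCode = error instanceof StatusCodeError ? error.statusCode : 500;
    console.error('Unhandled error in handler', error);
    return {
      statusCode,
      headers: CORS_HEADERS,
      body: JSON.stringify({ message: (error as Error).message }),
    };
  }
};
```

&lt;p&gt;Keeping this logic in one place means individual handlers (like &lt;code&gt;initiateUpload&lt;/code&gt; above) can simply &lt;code&gt;throw&lt;/code&gt; and return plain results without repeating header or error-handling boilerplate.&lt;/p&gt;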

&lt;h2&gt;
  
  
  Step 3: Uploading file from the web app
&lt;/h2&gt;

&lt;p&gt;Now to implement the client logic. I've created a very basic (read: ugly) &lt;code&gt;create-react-app&lt;/code&gt; example (code &lt;a href="https://github.com/WinterWindSoftware/sls-photos-upload-service/tree/master/clients/events-web-app" rel="noopener noreferrer"&gt;here&lt;/a&gt;). I used &lt;a href="https://aws-amplify.github.io/docs/js/authentication" rel="noopener noreferrer"&gt;Amplify's Auth library&lt;/a&gt; to manage the Cognito authentication and then created a &lt;code&gt;PhotoUploader&lt;/code&gt; React component which makes use of the &lt;a href="https://github.com/react-dropzone/react-dropzone" rel="noopener noreferrer"&gt;React Dropzone&lt;/a&gt; library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// components/Photos/PhotoUploader.tsx&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useCallback&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useDropzone&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react-dropzone&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;uploadPhoto&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../utils/photos-api-client&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;PhotoUploader&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;FC&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;eventId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;eventId&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;onDrop&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useCallback&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;files&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;File&lt;/span&gt;&lt;span class="p"&gt;[])&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;starting upload&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;files&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;files&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;uploadResult&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;uploadPhoto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;eventId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// should enhance this to read title and description from text input fields.&lt;/span&gt;
        &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my title&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my description&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;contentType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;upload complete!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;uploadResult&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;uploadResult&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Error uploading&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;eventId&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;getRootProps&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;getInputProps&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;isDragActive&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useDropzone&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;onDrop&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nf"&gt;getRootProps&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;input&lt;/span&gt; &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nf"&gt;getInputProps&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="si"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;isDragActive&lt;/span&gt;
          &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Drop the files here ...&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Drag and drop some files here, or click to select files&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;PhotoUploader&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// utils/photos-api-client.ts&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;API&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Auth&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;aws-amplify&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;AxiosResponse&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;axios&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../config&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;PhotoMetadata&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;InitiateEventPhotoUploadResponse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;EventPhoto&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../../../services/common/schemas/photos-api&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;API&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;amplify&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;API&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;API_NAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;PhotosAPI&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getHeaders&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Set auth token headers to be passed in all API requests&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;currentSession&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Authorization&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getIdToken&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;getJwtToken&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getPhotos&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;eventId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;EventPhoto&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;API&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;API_NAME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`/events/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;eventId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/photos`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getHeaders&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;uploadPhoto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;eventId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;photoFile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PhotoMetadata&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;AxiosResponse&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="na"&gt;initiateResult&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;InitiateEventPhotoUploadResponse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;API&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;API_NAME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`/events/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;eventId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/photos/initiate-upload`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getHeaders&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;put&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;initiateResult&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;s3PutObjectUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;photoFile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;contentType&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;uploadPhoto&lt;/code&gt; function in the &lt;code&gt;photos-api-client.ts&lt;/code&gt; file is the key here. It performs the 2-step process we mentioned earlier by first calling our &lt;code&gt;initiate-upload&lt;/code&gt; API Gateway endpoint and then making a PUT request to the &lt;code&gt;s3PutObjectUrl&lt;/code&gt; it returned. Make sure that you set the &lt;code&gt;Content-Type&lt;/code&gt; header in your S3 PUT request, otherwise it will be rejected for not matching the presigned URL's signature.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Pushing photo data into the database
&lt;/h2&gt;

&lt;p&gt;Now that the photo has been uploaded, the web app will need a way of listing all photos uploaded for an event (using the &lt;code&gt;getPhotos&lt;/code&gt; function above).&lt;/p&gt;

&lt;p&gt;To close this loop and make this query possible, we need to record the photo data in our database. We do this by creating a second Lambda function &lt;code&gt;processUploadedPhoto&lt;/code&gt; that is triggered whenever a new object is added to our S3 bucket.&lt;/p&gt;

&lt;p&gt;Let's look at its config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="c1"&gt;# serverless.yml&lt;/span&gt;
&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eventsapp-photos-api&lt;/span&gt;
&lt;span class="s"&gt;…&lt;/span&gt;

&lt;span class="na"&gt;functions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;…&lt;/span&gt;
    &lt;span class="s"&gt;s3ProcessUploadedPhoto&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;src/s3/process-uploaded-photo.handler&lt;/span&gt;
        &lt;span class="na"&gt;iamRoleStatements&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt;   &lt;span class="na"&gt;Effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Allow&lt;/span&gt;
                &lt;span class="na"&gt;Action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;dynamodb:Query&lt;/span&gt;
                    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;dynamodb:Scan&lt;/span&gt;
                    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;dynamodb:GetItem&lt;/span&gt;
                    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;dynamodb:PutItem&lt;/span&gt;
                    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;dynamodb:UpdateItem&lt;/span&gt;
                &lt;span class="na"&gt;Resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:dynamodb:${self:provider.region}:${self:custom.awsAccountId}:table/${cf:${self:custom.infraStack}.DynamoDBTablePrefix}*&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt;   &lt;span class="na"&gt;Effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Allow&lt;/span&gt;
                &lt;span class="na"&gt;Action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;s3:GetObject&lt;/span&gt;
                    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;s3:HeadObject&lt;/span&gt;
                &lt;span class="na"&gt;Resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:s3:::${cf:${self:custom.infraStack}.PhotosBucket}*&lt;/span&gt;
        &lt;span class="na"&gt;events&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;s3&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;bucket&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${cf:${self:custom.infraStack}.PhotosBucket}&lt;/span&gt;
                &lt;span class="na"&gt;event&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;s3:ObjectCreated:*&lt;/span&gt;
                &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;prefix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;uploads/&lt;/span&gt;
                &lt;span class="na"&gt;existing&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's triggered by the &lt;code&gt;s3:ObjectCreated&lt;/code&gt; event and will only fire for files added beneath the &lt;code&gt;uploads/&lt;/code&gt; top-level folder.&lt;br&gt;
In the &lt;code&gt;iamRoleStatements&lt;/code&gt; section, we allow the function to write to our DynamoDB table and read from the S3 bucket.&lt;/p&gt;

&lt;p&gt;Now let's look at the function code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;S3Event&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;aws-lambda&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;S3&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;aws-sdk/clients/s3&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;log&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@common/utils/log&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;EventPhotoCreate&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@common/schemas/photos-api&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;cloudfront&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@svc-config&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;savePhoto&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@svc-models/event-photos&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;s3&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;S3&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;S3Event&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;s3Record&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Records&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="c1"&gt;// First fetch metadata from S3&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;s3Object&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;headObject&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;Bucket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;s3Record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;Key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;s3Record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt; &lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;promise&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;s3Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Metadata&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Shouldn't get here&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;errorMessage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Cannot process photo as no metadata is set for it&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;errorMessage&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;s3Object&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;errorMessage&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="c1"&gt;// S3 metadata field names are converted to lowercase, so need to map them out carefully&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="na"&gt;photoDetails&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;EventPhotoCreate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;eventId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;s3Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eventid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;s3Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;description&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;s3Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;s3Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;photoid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;contentType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;s3Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;contenttype&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="c1"&gt;// Map the S3 bucket key to a CloudFront URL to be stored in the DB&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`https://&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;cloudfront&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;photosDistributionDomainName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;s3Record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="c1"&gt;// Now write to DDB&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;savePhoto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;photoDetails&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The event object passed to the Lambda handler function only contains the bucket name and key of the object that triggered it. So in order to fetch the metadata, we need to use the &lt;code&gt;headObject&lt;/code&gt; S3 API call.&lt;br&gt;
Once we've extracted the required metadata fields, we then construct a CloudFront URL for the photo (using the CloudFront distribution's domain name passed in via an environment variable) and save to DynamoDB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Future enhancements
&lt;/h2&gt;

&lt;p&gt;A potential enhancement to the upload flow is to add an image optimization step before saving the photo data to the database. This would involve having a Lambda function listen for &lt;code&gt;s3:ObjectCreated&lt;/code&gt; events beneath the &lt;code&gt;uploads/&lt;/code&gt; key prefix, read the image file, resize and optimize it accordingly, and then save the new copy to the same bucket under a new &lt;code&gt;optimized/&lt;/code&gt; key prefix. The config of our Lambda function that saves to the database would then be updated to trigger off this new prefix instead.&lt;/p&gt;
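&lt;p&gt;A rough sketch of the serverless.yml change this would involve (the &lt;code&gt;optimizeUploadedPhoto&lt;/code&gt; function name and handler path here are hypothetical):&lt;/p&gt;

```yaml
# serverless.yml sketch (the new function name and handler path are hypothetical)
functions:
    optimizeUploadedPhoto:
        handler: src/s3/optimize-uploaded-photo.handler
        events:
            - s3:
                bucket: ${cf:${self:custom.infraStack}.PhotosBucket}
                event: s3:ObjectCreated:*
                rules:
                    - prefix: uploads/
                existing: true
    s3ProcessUploadedPhoto:
        # ...as before, but now triggered off the optimized/ prefix:
        events:
            - s3:
                bucket: ${cf:${self:custom.infraStack}.PhotosBucket}
                event: s3:ObjectCreated:*
                rules:
                    - prefix: optimized/
                existing: true
```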

&lt;p&gt;💌 &lt;strong&gt;&lt;em&gt;If you enjoyed this article, you can sign up &lt;a href="https://winterwindsoftware.com/newsletter/" rel="noopener noreferrer"&gt;to my weekly newsletter on building serverless apps in AWS&lt;/a&gt;.&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Originally published at &lt;strong&gt;&lt;a href="https://winterwindsoftware.com/serverless-photo-upload-api/" rel="noopener noreferrer"&gt;winterwindsoftware.com&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>lambda</category>
      <category>apigateway</category>
      <category>node</category>
    </item>
    <item>
      <title>Migrating authentication from Express.js to API Gateway using a Lambda Authorizer</title>
      <dc:creator>Paul Swail</dc:creator>
      <pubDate>Wed, 17 Apr 2019 20:19:20 +0000</pubDate>
      <link>https://dev.to/paulswail/migrating-authentication-from-express-js-to-api-gateway-using-a-lambda-authorizer-450m</link>
      <guid>https://dev.to/paulswail/migrating-authentication-from-express-js-to-api-gateway-using-a-lambda-authorizer-450m</guid>
      <description>

&lt;p&gt;&lt;em&gt;This is part 6 in the series &lt;a href="https://winterwindsoftware.com/serverless-migration-journal/"&gt;Migrating a Monolithic SaaS App to Serverless — A Decision Journal&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Before I can migrate any of the routes from my Express.js API to API Gateway + Lambda, I first need to implement an authentication and authorization mechanism such that the API Gateway endpoints respect the same auth logic as their legacy API counterparts.&lt;/p&gt;

&lt;p&gt;My constraints for this are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep the same back-end MongoDB user and session store that the legacy app is using as I want to avoid/minimise code changes to the legacy app. This rules out using dedicated auth services such as AWS Cognito or Auth0 which would be my first stops for auth in a greenfield app.&lt;/li&gt;
&lt;li&gt;Clients authenticate to the existing API by first obtaining a session token via a call to a login endpoint and then by providing this token in subsequent requests either in the Cookie or Authorization HTTP headers. This behaviour needs to be reproduced in my API Gateway implementation.&lt;/li&gt;
&lt;li&gt;The login endpoint itself (i.e. how the token is obtained in the first place) is out of scope for this task, and the legacy login endpoint will continue to be used for now.&lt;/li&gt;
&lt;li&gt;This will be an interim solution as my longer-term goal for this migration process is to replace MongoDB as my back-end data store.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Using a Lambda Authorizer to authenticate API requests
&lt;/h2&gt;

&lt;p&gt;API Gateway allows you to define a &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html"&gt;Lambda Authorizer&lt;/a&gt; to execute custom authentication and authorization logic before allowing a client access to the actual API route they have requested. A Lambda Authorizer function is somewhat similar to a middleware in Express.js in that it is called before the main route handler function. It can reject a request outright or, if it allows the request to proceed, it can enhance the request event with extra data that the main route handler can then reference (e.g. user and role information).&lt;/p&gt;

&lt;h2&gt;
  
  
  Authentication vs Authorization
&lt;/h2&gt;

&lt;p&gt;Before we dive into the implementation detail, I want to make clear the distinction between these related “auth” concepts as they are often conflated and the AWS naming of “Lambda Authorizer” does not help here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Authentication&lt;/em&gt; is the process of verifying who you are. When you log on to a computer or app with a username and password, you are authenticating.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Authorization&lt;/em&gt; is the process of verifying that you have access to something. Gaining access to a resource because the permissions configured on it allow you access is authorization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://serverfault.com/questions/57077/what-is-the-difference-between-authentication-and-authorization#57082"&gt;(What is the difference between authentication and authorization? - Server Fault)&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you are implementing a Lambda Authorizer, your function will always need to perform authentication (i.e. ensure you are who you say you are) but it does not necessarily need to perform authorization (i.e. check that you have permissions to access the resource you are requesting).&lt;/p&gt;

&lt;p&gt;In my case, I decided (for now) that my Lambda Authorizer would only perform authentication and that the authorization logic will reside in the route handler functions as the necessary permissions vary across different routes. As I start migrating more routes over to Lambda, I may then decide to move common authorization logic to the shared Lambda Authorizer.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;For an in-depth look at different strategies for using Lambda Authorizers, check out &lt;a href="https://www.alexdebrie.com/posts/lambda-custom-authorizers/"&gt;The Complete Guide to Custom Authorizers with AWS Lambda and API Gateway&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Reverse engineering the Express authentication logic
&lt;/h2&gt;

&lt;p&gt;My legacy API uses the &lt;a href="http://www.passportjs.org/"&gt;Passport.js&lt;/a&gt; and &lt;a href="https://github.com/expressjs/session"&gt;express-session&lt;/a&gt; middlewares.&lt;br&gt;
I could potentially just import these modules into my Lambda Authorizer function. However, I decided against this for a few reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;These modules were built specifically for use with Express so I would end up having to hack a way of invoking them in a non-standard way from a Lambda.&lt;/li&gt;
&lt;li&gt;I don’t want to add a raft of new dependencies to my Lambda and incur the extra cold-start overhead and larger attack surface that this would bring.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I decided to inspect the code for these modules on GitHub and extract the necessary logic into my Lambda function. I’ll not share the full implementation code here, but it follows these steps to process a request:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fetch token from HTTP request header (either the &lt;code&gt;Cookie&lt;/code&gt; or the &lt;code&gt;Authorization&lt;/code&gt; header).&lt;/li&gt;
&lt;li&gt;Use the session secret to validate the token’s signature and extract the SessionID from it.&lt;/li&gt;
&lt;li&gt;Using SessionID, fetch session object from MongoDB and get user data stored inside it.&lt;/li&gt;
&lt;li&gt;Add user data to the request context.&lt;/li&gt;
&lt;/ol&gt;
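&lt;p&gt;As an illustrative sketch of steps 1–2 (not my actual implementation; the helper name is hypothetical), verifying the signed token format that express-session’s cookie-signature dependency produces (&lt;code&gt;s:&amp;lt;sessionId&amp;gt;.&amp;lt;signature&amp;gt;&lt;/code&gt;) looks roughly like this:&lt;/p&gt;

```typescript
import { createHmac, timingSafeEqual } from 'crypto';

// Sketch: extract the SessionID from an express-session cookie value of the
// form "s:<sessionId>.<signature>", where the signature is an HMAC-SHA256 of
// the sessionId keyed with the session secret (base64, padding stripped).
export function extractSessionId(cookieValue: string, secret: string): string | null {
  if (!cookieValue.startsWith('s:')) return null;
  const signed = cookieValue.slice(2);
  const lastDot = signed.lastIndexOf('.');
  if (lastDot < 0) return null;
  const sessionId = signed.slice(0, lastDot);
  const signature = signed.slice(lastDot + 1);
  // Recompute the expected signature the same way cookie-signature does
  const expected = createHmac('sha256', secret)
    .update(sessionId)
    .digest('base64')
    .replace(/=+$/, '');
  const a = Buffer.from(signature);
  const b = Buffer.from(expected);
  // Constant-time comparison; mismatched lengths mean a tampered token
  return a.length === b.length && timingSafeEqual(a, b) ? sessionId : null;
}
```

With the SessionID in hand, steps 3–4 are a straightforward MongoDB lookup of the session document followed by attaching its user data to the authorizer response context.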

&lt;h2&gt;
  
  
  Allowing and denying requests
&lt;/h2&gt;

&lt;p&gt;If a request is successfully authenticated, the Lambda Authorizer function tells API Gateway it can proceed with invoking the handler for the requested route by returning a response containing an IAM policy document that grants the caller permission to invoke the handler.&lt;/p&gt;

&lt;p&gt;Here’s an example of a response the Lambda Authorizer function returns for an allowed request:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"principalId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"my_user_id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"policyDocument"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"execute-api:Invoke"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"context"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"userId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"my_user_id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"customerAccountId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"my_customer_account_id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"fullName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"John Smith"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"roles"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"[]"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Notice here the &lt;code&gt;context&lt;/code&gt; object where I provide further information that is stored against the user record in MongoDB. API Gateway makes this context data available to the handler function (which we’ll cover below).&lt;/p&gt;
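&lt;p&gt;For illustration, a downstream route handler can read those context values from &lt;code&gt;event.requestContext.authorizer&lt;/code&gt;. A minimal sketch, with local typings standing in for the real &lt;code&gt;aws-lambda&lt;/code&gt; ones:&lt;/p&gt;

```typescript
// Minimal local typings for this sketch; the real ones come from 'aws-lambda'.
interface AuthorizedEvent {
  requestContext: { authorizer?: Record<string, string> };
}

interface HandlerResult {
  statusCode: number;
  body: string;
}

// Route handler sketch: API Gateway copies the authorizer's `context` map
// onto event.requestContext.authorizer (all values arrive as strings).
export const handler = async (event: AuthorizedEvent): Promise<HandlerResult> => {
  const { userId, customerAccountId } = event.requestContext.authorizer ?? {};
  if (!userId) {
    return { statusCode: 401, body: JSON.stringify({ message: 'Unauthorized' }) };
  }
  return { statusCode: 200, body: JSON.stringify({ userId, customerAccountId }) };
};
```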

&lt;p&gt;That’s the happy path covered, but there are several reasons why a request could be rejected, e.g.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No token provided&lt;/li&gt;
&lt;li&gt;Invalid token provided&lt;/li&gt;
&lt;li&gt;Session expired&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In each of these cases, I want to send back an HTTP 401 Unauthorized status code to the client, but it wasn’t immediately obvious from reading the AWS docs how to do this.&lt;/p&gt;

&lt;p&gt;In normal API Gateway Lambda handlers, there is a &lt;code&gt;statusCode&lt;/code&gt; field in the response that you can set, but Lambda Authorizer responses don’t work that way. The &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html"&gt;examples&lt;/a&gt; show throwing an error (or, if you’re using legacy Node, passing an Error in the callback). However, when I tested this, API Gateway returned a 403 error. I couldn’t work out what was going on until I realised that the string in the error message needs to match one of API Gateway’s built-in message -&amp;gt; status code mappings. I had been using my own custom error strings, and since API Gateway didn’t know what to do with those, it just defaulted to returning a 403.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;CustomAuthorizerEvent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;AuthResponse&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="s1"&gt;'aws-lambda'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="cm"&gt;/** Built-in error messages that API Gateway auto-maps to HTTP status codes */&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;enum&lt;/span&gt; &lt;span class="nx"&gt;APIGatewayErrorMessage&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="cm"&gt;/** 401 */&lt;/span&gt;
    &lt;span class="nx"&gt;Unauthorized&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Unauthorized'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="cm"&gt;/** 403 */&lt;/span&gt;
    &lt;span class="nx"&gt;AccessDenied&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Access Denied'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="cm"&gt;/** Lambda Authorizer handler */&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;CustomAuthorizerEvent&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;AuthResponse&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// No token provided&lt;/span&gt;
        &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nb"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;APIGatewayErrorMessage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Unauthorized&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="c1"&gt;// first check Authorization bearer header&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Authorization&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;val&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Authorization&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;' '&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;toLowerCase&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="s1"&gt;'bearer'&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;val&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;authenticateToken&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;val&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="c1"&gt;// Badly formed header&lt;/span&gt;
        &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nb"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;APIGatewayErrorMessage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Unauthorized&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="c1"&gt;// ... rest of auth logic&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  Wiring up auth logic to a private endpoint
&lt;/h2&gt;

&lt;p&gt;So far I’ve covered the implementation of the Lambda Authorizer but not shown how to connect it to the endpoints you want to protect. As I don’t yet have a real endpoint in my service, I created a test &lt;code&gt;private-endpoint&lt;/code&gt;. This endpoint simply returns to authenticated clients the user context data passed to it by the Lambda Authorizer.&lt;/p&gt;

&lt;p&gt;Here are the relevant parts of my &lt;code&gt;serverless.yml&lt;/code&gt; file:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;custom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;vpcSettings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;securityGroupIds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;MLabSecurityGroup&lt;/span&gt;
      &lt;span class="na"&gt;subnetIds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;${cf:vpc.SubnetAPrivate}&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;${cf:vpc.SubnetBPrivate}&lt;/span&gt;
    &lt;span class="na"&gt;lambda_authorizer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;authorizer&lt;/span&gt;
        &lt;span class="na"&gt;resultTtlInSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0&lt;/span&gt;
        &lt;span class="na"&gt;identitySource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;request&lt;/span&gt;

&lt;span class="na"&gt;functions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Lambda Authorizer function&lt;/span&gt;
    &lt;span class="na"&gt;authorizer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;src/functions/authorizer.handler&lt;/span&gt;
        &lt;span class="na"&gt;vpc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${self:custom.vpcSettings}&lt;/span&gt;
        &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;SESSION_SECRET&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${ssm:/autochart/${self:provider.stage}/session-secret~true}&lt;/span&gt;
    &lt;span class="na"&gt;private-endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;src/functions/private-endpoint.handler&lt;/span&gt;
        &lt;span class="na"&gt;vpc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${self:custom.vpcSettings}&lt;/span&gt;
        &lt;span class="na"&gt;events&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${self:custom.apiRoot}/private&lt;/span&gt;
            &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;get&lt;/span&gt;
            &lt;span class="na"&gt;authorizer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${self:custom.lambda_authorizer}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Firstly, you’ll notice that my functions need to be inside a VPC in order to access my MongoDB database. I also pass a &lt;code&gt;SESSION_SECRET&lt;/code&gt; environment variable (fetched from SSM Parameter Store) to my &lt;code&gt;authorizer&lt;/code&gt; function. This is the same session secret that the legacy API uses to sign session keys.&lt;br&gt;
The &lt;code&gt;http.authorizer&lt;/code&gt; attribute of the &lt;code&gt;private-endpoint&lt;/code&gt; function is where the connection is made between the endpoint handler and the authorizer function.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;private-endpoint&lt;/code&gt; handler function can then access the custom user data via the &lt;code&gt;event.requestContext.authorizer&lt;/code&gt; field:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// src/functions/private-endpoint.ts&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;APIGatewayProxyEvent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;APIGatewayProxyResult&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="s1"&gt;'aws-lambda'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;wrap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;APIGatewayProxyEvent&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;APIGatewayProxyResult&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;authContext&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;requestContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;authorizer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  To cache or not to cache
&lt;/h2&gt;

&lt;p&gt;API Gateway allows you to cache the responses of Lambda Authorizers for a period of time. This can be useful as it avoids the extra latency incurred on each request by calling an extra function and the roundtrip to MongoDB to fetch the session data.&lt;br&gt;
While this seems like it would be prudent, I decided against implementing this at this stage for a few reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The existing legacy API currently has no auth caching, so the roundtrip to MongoDB will not add additional latency.&lt;/li&gt;
&lt;li&gt;Caching could introduce strange behaviour and require complex invalidation logic across both new and legacy APIs (e.g. if user logs out).&lt;/li&gt;
&lt;li&gt;I couldn’t work out whether my use case of allowing the auth token to be in EITHER the cookie OR the authorization header is supported. API Gateway allows you to specify zero or more “&lt;a href="https://docs.aws.amazon.com/apigatewayv2/latest/api-reference/apis-apiid-authorizers-authorizerid.html#apis-apiid-authorizers-authorizerid-prop-updateauthorizerinput-identitysource"&gt;Identity Sources&lt;/a&gt;” which stipulate the HTTP request parameters that are required by the auth logic. If specified, these parameters are used to form the cache key. However, from my testing it seemed that if you provide more than one source, API Gateway ANDs the parameters together, which has the effect of requiring the client to supply all the headers. This wouldn't work for my use case.&lt;/li&gt;
&lt;/ul&gt;
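&lt;p&gt;To make the either/or requirement concrete, here’s a small standalone sketch of token extraction that accepts EITHER a bearer header OR a session cookie. The &lt;code&gt;extractToken&lt;/code&gt; helper and the &lt;code&gt;session&lt;/code&gt; cookie name are hypothetical illustrations, not code from my service:&lt;/p&gt;

```typescript
// Sketch only: extract an auth token from EITHER the Authorization
// bearer header OR a "session" cookie (the cookie name is a made-up example).
function extractToken(headers: { [name: string]: string }): string | null {
  const auth = headers.Authorization;
  if (auth) {
    const parts = auth.split(' ');
    if (parts.length === 2) {
      if (parts[0].toLowerCase() === 'bearer') {
        if (parts[1]) {
          return parts[1];
        }
      }
    }
    // Badly formed header: reject rather than falling back to cookies
    return null;
  }
  const cookieHeader = headers.Cookie;
  if (cookieHeader) {
    const cookies = cookieHeader.split(';').map(function (c) { return c.trim(); });
    const session = cookies.find(function (c) { return c.indexOf('session=') === 0; });
    if (session) {
      return session.slice('session='.length);
    }
  }
  return null;
}
```

&lt;p&gt;Because the fallback lives inside the function, neither request parameter has to be mandatory on its own, which is exactly what the identity-source caching configuration couldn’t express.&lt;/p&gt;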

&lt;p&gt;I will review this decision to skip auth caching after I observe the real-world latency of my migrated endpoints.&lt;/p&gt;

&lt;h2&gt;
  Next Steps
&lt;/h2&gt;

&lt;p&gt;Now that I have my auth logic in place, I can begin migrating the "Event Metrics" service from the legacy API. I'll be covering this in my next post.&lt;/p&gt;

&lt;p&gt;✉️ &lt;strong&gt;&lt;em&gt;If you enjoyed this article and would like to get future updates from me on migrating to serverless, you can subscribe to &lt;a href="https://winterwindsoftware.com/newsletter/"&gt;my weekly newsletter on building serverless apps in AWS&lt;/a&gt;.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;You also might enjoy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://winterwindsoftware.com/concerns-that-serverless-takes-away/"&gt;Concerns that serverless takes away&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://winterwindsoftware.com/serverless-definitions/"&gt;The differing definitions of “serverless”&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://winterwindsoftware.com/serverless-glossary/"&gt;A Serverless Glossary&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;strong&gt;&lt;a href="https://winterwindsoftware.com/lambda-authorizer/"&gt;winterwindsoftware.com&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;


</description>
      <category>serverless</category>
      <category>node</category>
      <category>lambda</category>
      <category>apigateway</category>
    </item>
    <item>
      <title>Building CICD pipelines for serverless microservices using the AWS CDK</title>
      <dc:creator>Paul Swail</dc:creator>
      <pubDate>Tue, 09 Apr 2019 12:15:00 +0000</pubDate>
      <link>https://dev.to/paulswail/building-cicd-pipelines-for-serverless-microservices-using-the-aws-cdk-1o7d</link>
      <guid>https://dev.to/paulswail/building-cicd-pipelines-for-serverless-microservices-using-the-aws-cdk-1o7d</guid>
      <description>&lt;p&gt;&lt;em&gt;This is part 5 in the series &lt;a href="https://winterwindsoftware.com/serverless-migration-journal/" rel="noopener noreferrer"&gt;Migrating a Monolithic SaaS App to Serverless — A Decision Journal&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In order to make sure that each new route I migrate from the legacy Express.js API to Lambda is thoroughly tested before being released to production, I have started to put a CICD process in place.&lt;/p&gt;

&lt;p&gt;My main goals for the CICD process are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each Serverless service will have its own pipeline as each will be deployed independently.&lt;/li&gt;
&lt;li&gt;Merges to master should trigger the main pipeline, which will run tests and deploy through dev, staging, and production.&lt;/li&gt;
&lt;li&gt;The CICD pipelines will be hosted in the DEV AWS account but must be able to deploy to other AWS accounts.&lt;/li&gt;
&lt;li&gt;All CICD config will be done via infrastructure-as-code so I can easily set up pipelines for new services as I create them. Console access will be strictly read-only.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I have decided to use AWS &lt;a href="https://aws.amazon.com/codepipeline/" rel="noopener noreferrer"&gt;CodePipeline&lt;/a&gt; and &lt;a href="https://aws.amazon.com/codebuild/" rel="noopener noreferrer"&gt;CodeBuild&lt;/a&gt; to create the pipelines, as they’re serverless and have IAM support built in that I can use to deploy securely to the production account. CodePipeline acts as the orchestrator, whereas CodeBuild acts as the task runner. Each stage in a CodePipeline pipeline consists of one or more actions which call out to a CodeBuild project containing the actual commands to create a release package, run tests, do deployment, etc.&lt;/p&gt;

&lt;h2&gt;
  Pipeline overview
&lt;/h2&gt;

&lt;p&gt;For the first version of my CICD process, I will be deploying a single service &lt;a href="https://winterwindsoftware.com/serverless-migration-journal-part3/" rel="noopener noreferrer"&gt;&lt;code&gt;ac-rest-api&lt;/code&gt;&lt;/a&gt; (which currently only contains a single test Lambda function + API GW route). This pipeline will work as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;GitHub source action triggers whenever code is pushed to the master branch of the repo hosting my &lt;code&gt;ac-rest-api&lt;/code&gt; service.&lt;/li&gt;
&lt;li&gt;Run lint and unit/integration tests, and build the deployment package (using the &lt;code&gt;sls package&lt;/code&gt; command).&lt;/li&gt;
&lt;li&gt;Deploy package (using &lt;code&gt;sls deploy&lt;/code&gt;) to the DEV stage and run acceptance tests against it.&lt;/li&gt;
&lt;li&gt;Deploy package to the STAGING stage and run acceptance tests against it.&lt;/li&gt;
&lt;li&gt;Deploy package to the PROD stage and run acceptance tests against it.&lt;/li&gt;
&lt;/ol&gt;
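&lt;p&gt;Steps 2–5 each map onto a per-stage CodeBuild build file. As a rough sketch, a dev-stage file along the lines of &lt;code&gt;buildspec.dev.yml&lt;/code&gt; might look like this; the exact commands and script names here are assumptions, not taken from my actual pipeline:&lt;/p&gt;

```yaml
# Hypothetical sketch of a dev-stage buildspec (commands are illustrative)
version: 0.2
phases:
  install:
    commands:
      - npm ci
  build:
    commands:
      - npm run lint                   # lint checks
      - npm test                       # unit/integration tests
      - npx sls package                # build the deployment package
      - npx sls deploy --stage dev     # deploy to the DEV stage
      - npm run test:acceptance        # acceptance tests against DEV
```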

&lt;p&gt;Here's what my completed pipeline looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwinterwindsoftware.com%2Fimg%2Fblog-images%2Fcodepipeline-stages.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwinterwindsoftware.com%2Fimg%2Fblog-images%2Fcodepipeline-stages.png" alt="CodePipeline stages"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  Managing pipelines as infrastructure-as-code
&lt;/h2&gt;

&lt;p&gt;As with most AWS services, it’s easy to find tutorials on how to create a CodePipeline pipeline using the AWS Console, but difficult to find detailed infrastructure-as-code examples. CodeBuild fares better in this regard, as the docs generally recommend using a &lt;code&gt;buildspec.yml&lt;/code&gt; file in your repo to specify the build commands.&lt;/p&gt;

&lt;p&gt;Since I will be creating several pipelines for &lt;a href="https://winterwindsoftware.com/serverless-migration-journal-part4/" rel="noopener noreferrer"&gt;each microservice I’ve identified&lt;/a&gt; as I go through the migration process, I want something I can easily reuse.&lt;/p&gt;

&lt;p&gt;I considered these options for defining my pipelines:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use raw CloudFormation YAML files.&lt;/li&gt;
&lt;li&gt;Use CloudFormation inside the resources section of serverless.yml.&lt;/li&gt;
&lt;li&gt;Use the new &lt;a href="https://github.com/awslabs/aws-cdk" rel="noopener noreferrer"&gt;AWS Cloud Development Kit (CDK)&lt;/a&gt;, which allows you to define high-level infrastructure constructs using popular languages (Javascript/Typescript, Java, C#) which generate CloudFormation stacks under the hood.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I’m relatively proficient with CloudFormation but I find its dev experience very frustrating and in the past have burned days getting complex CloudFormation stacks up and running. My main gripes with it are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The slow feedback loop between authoring and seeing a deployment complete (fail or succeed)&lt;/li&gt;
&lt;li&gt;The lack of templating/modules, which makes it difficult to re-use configuration without resorting to copying and pasting a load of boilerplate (I’m aware of tools like Troposphere that help in this regard, but I’m a Javascript developer and don’t want to have to learn Python)&lt;/li&gt;
&lt;li&gt;I always need to have the docs open in a browser tab to see the available properties on a resource&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Given this, I decided to proceed with using the CDK which (in theory) should help mitigate all 3 of these complaints.&lt;/p&gt;

&lt;h2&gt;
  Creating my CDK app
&lt;/h2&gt;

&lt;p&gt;The CDK provides a CLI that you use to deploy an “app”. An app in this context is effectively a resource tree that you compose using an object-oriented programming model, where each node in the tree is called a “construct”. The CDK provides ready-made constructs for all the main AWS resources and also allows (and &lt;a href="https://docs.aws.amazon.com/CDK/latest/userguide/writing_constructs.html" rel="noopener noreferrer"&gt;encourages&lt;/a&gt;) you to write your own. A CDK app contains at least one CloudFormation stack construct, which it uses to instigate the deployment under the hood. So you still get the automatic rollback benefit that CloudFormation gives you.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;For more detailed background on what the CDK is and how to install it and bootstrap your own app, I’d recommend starting &lt;a href="https://docs.aws.amazon.com/CDK/latest/userguide/what-is.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I also decided to try out Typescript (instead of plain Javascript) as the CDK was written in Typescript and the strong typing seems to be a good fit for infrastructure resources with complex sets of attributes.&lt;/p&gt;

&lt;p&gt;I started by creating a custom top-level construct called &lt;code&gt;ServiceCicdPipelines&lt;/code&gt; which acts as a container for all the pipelines for each service I'll be creating and any supplementary resources. I've included all the source code in &lt;a href="https://gist.github.com/paulswail/2cdda90261e6d17506114ea00d780eb6" rel="noopener noreferrer"&gt;this gist&lt;/a&gt;, but the main elements of it are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;cicd-pipelines-stack&lt;/code&gt; : single CloudFormation stack for deploying all the CICD related resources.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;pipelines[]&lt;/code&gt;: List of &lt;code&gt;ServicePipeline&lt;/code&gt; objects — a custom construct which encapsulates the logic for creating a single pipeline for a single Serverless service.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;alertsTopic&lt;/code&gt;: SNS topic which will receive notifications about pipeline errors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The custom &lt;code&gt;ServicePipeline&lt;/code&gt; construct is where most of the logic lies. It takes a &lt;code&gt;ServiceDefinition&lt;/code&gt; object as a parameter, which looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;ServiceDefinition&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nl"&gt;githubRepo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nl"&gt;githubOwner&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nl"&gt;githubTokenSsmPath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="cm"&gt;/** Permissions that CodeBuild role needs to assume to deploy serverless stack */&lt;/span&gt;
    &lt;span class="nl"&gt;deployPermissions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PolicyStatement&lt;/span&gt;&lt;span class="p"&gt;[];&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This object is where all the unique attributes of the service being deployed are defined. At the moment, I don’t have many options in here but I expect to add to this over time.&lt;/p&gt;
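&lt;p&gt;For illustration, a &lt;code&gt;ServiceDefinition&lt;/code&gt; for the &lt;code&gt;ac-rest-api&lt;/code&gt; service might be populated like this (the owner, repo, and SSM path values are placeholder assumptions, not my real config):&lt;/p&gt;

```typescript
// Hypothetical ServiceDefinition values for the ac-rest-api pipeline.
// Owner, repo name, and SSM path below are illustrative placeholders.
const acRestApiService = {
  serviceName: 'ac-rest-api',
  githubOwner: 'my-github-org',
  githubRepo: 'ac-rest-api',
  githubTokenSsmPath: '/cicd/github-personal-access-token',
  // PolicyStatements the CodeBuild role assumes to run sls deploy
  deployPermissions: [],
};
```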

&lt;p&gt;The &lt;code&gt;ServicePipeline&lt;/code&gt; construct is also where each stage of the pipeline is defined:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ServicePipeline&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;Construct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Pipeline&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;alert&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PipelineFailedAlert&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ServicePipelineProps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pipelineName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;service&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;_&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sourceTrigger&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pipeline&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Pipeline&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;pipelineName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;pipelineName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="c1"&gt;// Read Github Oauth token from SSM Parameter Store&lt;/span&gt;
        &lt;span class="c1"&gt;// https://docs.aws.amazon.com/codepipeline/latest/userguide/GitHub-rotate-personal-token-CLI.html&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;oauth&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;SecretParameter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;GithubPersonalAccessToken&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;ssmParameter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;service&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;githubTokenSsmPath&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sourceAction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;GitHubSourceAction&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="na"&gt;actionName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sourceTrigger&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nx"&gt;SourceTrigger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PullRequest&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;GitHub_SubmitPR&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;GitHub_PushToMaster&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;service&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;githubOwner&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;service&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;githubRepo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;branch&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;master&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;oauthToken&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;oauth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;outputArtifactName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SourceOutput&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addStage&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Source&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;sourceAction&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="c1"&gt;// Create stages for DEV =&amp;gt; STAGING =&amp;gt; PROD.&lt;/span&gt;
        &lt;span class="c1"&gt;// Each stage defines its own steps in its own build file&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;buildProject&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ServiceCodebuildProject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;buildProject&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;projectName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;pipelineName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;_dev`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;buildSpec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;buildspec.dev.yml&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;deployerRoleArn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;CrossAccountDeploymentRole&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRoleArnForService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;service&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;dev&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;deploymentTargetAccounts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dev&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;accountId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;buildAction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;buildProject&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toCodePipelineBuildAction&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="na"&gt;actionName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Build_Deploy_DEV&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;inputArtifact&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sourceAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;outputArtifact&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;outputArtifactName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;sourceOutput&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;additionalOutputArtifactNames&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;devPackage&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;stagingPackage&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;prodPackage&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addStage&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Build_Deploy_DEV&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;buildAction&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stagingProject&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ServiceCodebuildProject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;deploy-staging&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;projectName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;pipelineName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;_staging`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;buildSpec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;buildspec.staging.yml&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;deployerRoleArn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;CrossAccountDeploymentRole&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRoleArnForService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;service&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;staging&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;deploymentTargetAccounts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;staging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;accountId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stagingAction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;stagingProject&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toCodePipelineBuildAction&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="na"&gt;actionName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Deploy_STAGING&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;inputArtifact&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sourceAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;outputArtifact&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;additionalInputArtifacts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="nx"&gt;buildAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;additionalOutputArtifact&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;stagingPackage&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addStage&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Deploy_STAGING&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;stagingAction&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="c1"&gt;// Prod stage requires cross-account access as codebuild isn't running in same account&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;prodProject&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ServiceCodebuildProject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;deploy-prod&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;projectName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;pipelineName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;_prod`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;buildSpec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;buildspec.prod.yml&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;deployerRoleArn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;CrossAccountDeploymentRole&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRoleArnForService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;service&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;prod&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;deploymentTargetAccounts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;prod&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;accountId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;prodAction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;prodProject&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toCodePipelineBuildAction&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="na"&gt;actionName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Deploy_PROD&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;inputArtifact&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sourceAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;outputArtifact&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;additionalInputArtifacts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="nx"&gt;buildAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;additionalOutputArtifact&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;prodPackage&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addStage&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Deploy_PROD&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;prodAction&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="c1"&gt;// Wire up pipeline error notifications&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;alertsTopic&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;alert&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PipelineFailedAlert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;pipeline-failed-alert&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="na"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="na"&gt;alertsTopic&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;alertsTopic&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;ServicePipelineProps&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="cm"&gt;/** Information about service to be built &amp;amp; deployed (source repo, etc) */&lt;/span&gt;
    &lt;span class="nl"&gt;service&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ServiceDefinition&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="cm"&gt;/** Trigger on PR or Master merge?  */&lt;/span&gt;
    &lt;span class="nl"&gt;sourceTrigger&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;SourceTrigger&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="cm"&gt;/** Account details for where this service will be deployed to */&lt;/span&gt;
    &lt;span class="nl"&gt;deploymentTargetAccounts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;DeploymentTargetAccounts&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="cm"&gt;/** Optional SNS topic to send pipeline failure notifications to */&lt;/span&gt;
    &lt;span class="nl"&gt;alertsTopic&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;Topic&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="cm"&gt;/** Wrapper around the CodeBuild Project to set standard props and create IAM role */&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ServiceCodebuildProject&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;Construct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;buildRole&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Role&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;project&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Project&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ServiceCodebuildActionProps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;buildRole&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ServiceDeployerRole&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;project-role&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;deployerRoleArn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;deployerRoleArn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nx"&gt;buildRole&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Project&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;build-project&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;projectName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;projectName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// minutes&lt;/span&gt;
            &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="na"&gt;buildImage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;LinuxBuildImage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;UBUNTU_14_04_NODEJS_8_11_0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CodePipelineSource&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
            &lt;span class="na"&gt;buildSpec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;buildSpec&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;buildspec.yml&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;buildRole&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Quick note on pipeline code organisation
&lt;/h4&gt;

&lt;p&gt;I created all of these CDK custom constructs within a shared &lt;code&gt;autochart-infrastructure&lt;/code&gt; repo, which I use to define cross-cutting infrastructure resources that are shared by multiple services.&lt;br&gt;
The buildspec files that I cover below live in the same repository as their service (which is how it should be). I’m not totally happy with having the pipeline definition and the build scripts in separate repos. I will probably look at moving my reusable CDK constructs into a shared library, so that the source for each CDK app can live in its service-specific repo while referencing that library.&lt;/p&gt;
&lt;h2&gt;
  
  
  Writing the CodeBuild scripts
&lt;/h2&gt;

&lt;p&gt;In the above pipeline definition code, you may have noticed that each deployment stage has its own buildspec file (using the naming convention &lt;code&gt;buildspec.&amp;lt;stage&amp;gt;.yml&lt;/code&gt;). I want to minimise the amount of logic within the pipeline itself and keep it solely responsible for orchestration. All the logic will live in the build scripts.&lt;/p&gt;
&lt;h3&gt;
  
  
  Building and deploying to DEV stage
&lt;/h3&gt;

&lt;p&gt;As soon as a push occurs to the master branch in GitHub, the next pipeline step is to run tests, then package and deploy to the DEV stage. This is shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# buildspec.dev.yml&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.2&lt;/span&gt;

&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;TARGET_REGION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;
    &lt;span class="na"&gt;SLS_DEBUG&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;
    &lt;span class="na"&gt;DEPLOYER_ROLE_ARN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;arn:aws:iam::&amp;lt;dev_account_id&amp;gt;:role/ac-rest-api-dev-deployer-role'&lt;/span&gt;

&lt;span class="na"&gt;phases&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pre_build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;chmod +x build.sh&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./build.sh install&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# Do some local testing&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./build.sh test-unit&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./build.sh test-integration&lt;/span&gt;
      &lt;span class="c1"&gt;# Create separate packages for each target environment&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./build.sh clean&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./build.sh package dev $TARGET_REGION&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./build.sh package staging $TARGET_REGION&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./build.sh package prod $TARGET_REGION&lt;/span&gt;
      &lt;span class="c1"&gt;# Deploy to DEV and run acceptance tests there&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./build.sh deploy dev $TARGET_REGION dist/dev&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./build.sh test-acceptance dev&lt;/span&gt;

&lt;span class="na"&gt;artifacts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;**/*'&lt;/span&gt;
  &lt;span class="na"&gt;secondary-artifacts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;devPackage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;base-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./dist/dev&lt;/span&gt;
      &lt;span class="na"&gt;files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;**/*'&lt;/span&gt;
    &lt;span class="na"&gt;stagingPackage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;base-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./dist/staging&lt;/span&gt;
      &lt;span class="na"&gt;files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;**/*'&lt;/span&gt;
    &lt;span class="na"&gt;prodPackage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;base-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./dist/prod&lt;/span&gt;
      &lt;span class="na"&gt;files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;**/*'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are a few things to note here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;code&gt;DEPLOYER_ROLE_ARN&lt;/code&gt; environment variable is required, which references a pre-existing IAM role that has the permissions to perform the deployment steps in the target account. We will cover how to set this up later.&lt;/li&gt;
&lt;li&gt;Each build command references a &lt;code&gt;build.sh&lt;/code&gt; file. The reason for this indirection is to make the build scripts easier to test during development, as I’ve no way of invoking a buildspec file locally (this is based on the approach Yan Cui recommends in the &lt;a href="https://livevideo.manning.com/module/38_5_2/production-ready-serverless/ci-cd/setting-up-a-ci-cd-pipeline-for-deploying-lambda-functions?" rel="noopener noreferrer"&gt;CI/CD module of his Production-Ready Serverless course&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;I run unit and integration tests before doing the packaging. This runs the Lambda function code locally (in the CodeBuild container) rather than via AWS Lambda, though the integration tests may hit existing downstream AWS resources that the functions call into.&lt;/li&gt;
&lt;li&gt;I am creating three deployment artifacts here, one for each stage, and including them as output artifacts which can be used by subsequent stages. I wasn't totally happy with this as it violates the DevOps best practice of having a single immutable artifact flow through each stage. My reason for doing this was that the &lt;code&gt;sls package&lt;/code&gt; command that the build.sh file uses requires a specific stage to be provided to it. However, I’ve since learned that &lt;a href="https://medium.com/@bishwash.aryal/serverless-framework-build-immutable-package-for-ci-cd-pipeline-949b49f7df91" rel="noopener noreferrer"&gt;there is a workaround for this&lt;/a&gt;, so I will probably change this soon.&lt;/li&gt;
&lt;li&gt;I run acceptance tests (which hit the newly deployed API Gateway endpoints) as a final step.&lt;/li&gt;
&lt;/ul&gt;
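&lt;p&gt;One detail worth calling out in the &lt;code&gt;build.sh&lt;/code&gt; script that follows: the &lt;code&gt;assume_role&lt;/code&gt; function pulls the temporary credentials out of the JSON that &lt;code&gt;aws sts assume-role&lt;/code&gt; returns using &lt;code&gt;node -p&lt;/code&gt;, since Node.js is already installed on the CodeBuild image (so no extra tooling like &lt;code&gt;jq&lt;/code&gt; is needed). Here is a minimal, self-contained sketch of that extraction step, using a canned response with fake values instead of a real STS call:&lt;/p&gt;

```shell
#!/bin/bash
set -e

# Canned response in the same shape that `aws sts assume-role` returns
# (values here are fake, for illustration only).
cat > temp_creds.json <<'EOF'
{"Credentials": {"AccessKeyId": "AKIAFAKEKEY", "SecretAccessKey": "fakeSecret", "SessionToken": "fakeToken"}}
EOF

# `node -p` evaluates an expression and prints the result, so requiring
# the JSON file gives direct access to each credential field.
export AWS_ACCESS_KEY_ID=$(node -p "require('./temp_creds.json').Credentials.AccessKeyId")
export AWS_SECRET_ACCESS_KEY=$(node -p "require('./temp_creds.json').Credentials.SecretAccessKey")
export AWS_SESSION_TOKEN=$(node -p "require('./temp_creds.json').Credentials.SessionToken")

echo "$AWS_ACCESS_KEY_ID"   # prints AKIAFAKEKEY
rm temp_creds.json
```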

&lt;p&gt;The &lt;code&gt;build.sh&lt;/code&gt; file is below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; pipefail

instruction&lt;span class="o"&gt;()&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"usage: ./build.sh package &amp;lt;stage&amp;gt; &amp;lt;region&amp;gt;"&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"/build.sh deploy &amp;lt;stage&amp;gt; &amp;lt;region&amp;gt; &amp;lt;pkg_dir&amp;gt;"&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"/build.sh test-&amp;lt;test_type&amp;gt; &amp;lt;stage&amp;gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

assume_role&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DEPLOYER_ROLE_ARN&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Assuming role &lt;/span&gt;&lt;span class="nv"&gt;$DEPLOYER_ROLE_ARN&lt;/span&gt;&lt;span class="s2"&gt; ..."&lt;/span&gt;
    &lt;span class="nv"&gt;CREDS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws sts assume-role &lt;span class="nt"&gt;--role-arn&lt;/span&gt; &lt;span class="nv"&gt;$DEPLOYER_ROLE_ARN&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--role-session-name&lt;/span&gt; my-sls-session &lt;span class="nt"&gt;--out&lt;/span&gt; json&lt;span class="si"&gt;)&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$CREDS&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; temp_creds.json
    &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;node &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"require('./temp_creds.json').Credentials.AccessKeyId"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
    &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;node &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"require('./temp_creds.json').Credentials.SecretAccessKey"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
    &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_SESSION_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;node &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"require('./temp_creds.json').Credentials.SessionToken"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
    aws sts get-caller-identity
  &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

unassume_role&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;unset &lt;/span&gt;AWS_ACCESS_KEY_ID
  &lt;span class="nb"&gt;unset &lt;/span&gt;AWS_SECRET_ACCESS_KEY
  &lt;span class="nb"&gt;unset &lt;/span&gt;AWS_SESSION_TOKEN
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$# &lt;/span&gt;&lt;span class="nt"&gt;-eq&lt;/span&gt; 0 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;instruction
  &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"install"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$# &lt;/span&gt;&lt;span class="nt"&gt;-eq&lt;/span&gt; 1 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;npm &lt;span class="nb"&gt;install
&lt;/span&gt;&lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"test-unit"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$# &lt;/span&gt;&lt;span class="nt"&gt;-eq&lt;/span&gt; 1 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;npm run lint
  npm run &lt;span class="nb"&gt;test
&lt;/span&gt;&lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"test-integration"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$# &lt;/span&gt;&lt;span class="nt"&gt;-eq&lt;/span&gt; 1 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Running INTEGRATION tests..."&lt;/span&gt;
  npm run test-integration
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"INTEGRATION tests complete."&lt;/span&gt;
&lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"clean"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$# &lt;/span&gt;&lt;span class="nt"&gt;-eq&lt;/span&gt; 1 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; ./dist
&lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"package"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$# &lt;/span&gt;&lt;span class="nt"&gt;-eq&lt;/span&gt; 3 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nv"&gt;STAGE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;
  &lt;span class="nv"&gt;REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$3&lt;/span&gt;
  &lt;span class="s1"&gt;'node_modules/.bin/sls'&lt;/span&gt; package &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nv"&gt;$STAGE&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"./dist/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;STAGE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="nv"&gt;$REGION&lt;/span&gt;
&lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"deploy"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$# &lt;/span&gt;&lt;span class="nt"&gt;-eq&lt;/span&gt; 4 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nv"&gt;STAGE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;
  &lt;span class="nv"&gt;REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$3&lt;/span&gt;
  &lt;span class="nv"&gt;ARTIFACT_FOLDER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$4&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Deploying from ARTIFACT_FOLDER=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;ARTIFACT_FOLDER&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  assume_role
  &lt;span class="c"&gt;# 'node_modules/.bin/sls' create_domain -s $STAGE -r $REGION&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Deploying service to stage &lt;/span&gt;&lt;span class="nv"&gt;$STAGE&lt;/span&gt;&lt;span class="s2"&gt;..."&lt;/span&gt;
  &lt;span class="s1"&gt;'node_modules/.bin/sls'&lt;/span&gt; deploy &lt;span class="nt"&gt;--force&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nv"&gt;$STAGE&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="nv"&gt;$REGION&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$ARTIFACT_FOLDER&lt;/span&gt;
  unassume_role
&lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"test-acceptance"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$# &lt;/span&gt;&lt;span class="nt"&gt;-eq&lt;/span&gt; 2 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nv"&gt;STAGE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Running ACCEPTANCE tests for stage &lt;/span&gt;&lt;span class="nv"&gt;$STAGE&lt;/span&gt;&lt;span class="s2"&gt;..."&lt;/span&gt;
  &lt;span class="nv"&gt;STAGE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$STAGE&lt;/span&gt; npm run test-acceptance
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"ACCEPTANCE tests complete."&lt;/span&gt;
&lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"clearcreds"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$# &lt;/span&gt;&lt;span class="nt"&gt;-eq&lt;/span&gt; 1 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;unassume_role
  &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; ./temp_creds.json
&lt;span class="k"&gt;else
  &lt;/span&gt;instruction
  &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The main thing to note in the above script is the &lt;code&gt;assume_role&lt;/code&gt; function, which gets called before the deploy command. For CodeBuild to deploy to a different AWS account, the Serverless Framework’s &lt;code&gt;sls deploy&lt;/code&gt; command needs to run as a role defined in the target account. To do this, the CodeBuild IAM role (which runs in the DEV account) needs to assume that role. I’m currently doing this by invoking the AWS CLI’s &lt;code&gt;aws sts assume-role&lt;/code&gt; command to get temporary credentials for the deployment role. However, this seems messy, so if anyone knows a cleaner way of doing this, I’d love to hear it.&lt;/p&gt;
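&lt;p&gt;One slightly tidier variant (just a sketch, and it assumes &lt;code&gt;jq&lt;/code&gt; is installed in the CodeBuild image) parses the credentials with jq instead of round-tripping through a temp file and node:&lt;/p&gt;

```shell
# Sketch of an alternative assume_role: parse the temporary credentials
# with jq rather than writing temp_creds.json and shelling out to node.
# Assumes jq is installed in the CodeBuild image.
assume_role() {
  if [ -n "$DEPLOYER_ROLE_ARN" ]; then
    echo "Assuming role $DEPLOYER_ROLE_ARN ..."
    CREDS=$(aws sts assume-role --role-arn "$DEPLOYER_ROLE_ARN" \
        --role-session-name my-sls-session --output json)
    export AWS_ACCESS_KEY_ID=$(printf '%s' "$CREDS" | jq -r '.Credentials.AccessKeyId')
    export AWS_SECRET_ACCESS_KEY=$(printf '%s' "$CREDS" | jq -r '.Credentials.SecretAccessKey')
    export AWS_SESSION_TOKEN=$(printf '%s' "$CREDS" | jq -r '.Credentials.SessionToken')
    aws sts get-caller-identity
  fi
}
```

&lt;p&gt;Since nothing is written to disk, this also removes the need for the &lt;code&gt;clearcreds&lt;/code&gt; step to delete &lt;code&gt;temp_creds.json&lt;/code&gt;.&lt;/p&gt;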

&lt;h3&gt;
  
  
  Deploying to Staging and Production
&lt;/h3&gt;

&lt;p&gt;The script for deploying and running tests against staging is similar to the dev script, but simpler:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# buildspec.staging.yml&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.2&lt;/span&gt;

&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;TARGET_REGION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;
    &lt;span class="na"&gt;SLS_DEBUG&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
    &lt;span class="na"&gt;DEPLOYER_ROLE_ARN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;arn:aws:iam::&amp;lt;staging_account_id&amp;gt;:role/ac-rest-api-staging-deployer-role'&lt;/span&gt;

&lt;span class="c1"&gt;# Deploy to STAGING stage and run acceptance tests.&lt;/span&gt;
&lt;span class="na"&gt;phases&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pre_build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;chmod +x build.sh&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./build.sh install&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./build.sh deploy staging $TARGET_REGION $CODEBUILD_SRC_DIR_stagingPackage&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./build.sh test-acceptance staging&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key thing to note here is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I use the &lt;code&gt;$CODEBUILD_SRC_DIR_stagingPackage&lt;/code&gt; environment variable to access the directory where the output artifact named &lt;code&gt;stagingPackage&lt;/code&gt; from the last pipeline step is located.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;buildspec.prod.yml&lt;/code&gt; file is pretty much the same as the staging one, except that it references the production IAM role and artifact source directory.&lt;/p&gt;
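&lt;p&gt;For completeness, a &lt;code&gt;buildspec.prod.yml&lt;/code&gt; along those lines might look like this (the role ARN and the output artifact variable name are illustrative placeholders):&lt;/p&gt;

```yaml
# buildspec.prod.yml (sketch; role ARN and artifact name are placeholders)
version: 0.2

env:
  variables:
    TARGET_REGION: us-east-1
    SLS_DEBUG: '*'
    DEPLOYER_ROLE_ARN: 'arn:aws:iam::&amp;lt;prod_account_id&amp;gt;:role/ac-rest-api-prod-deployer-role'

# Deploy to PROD stage and run acceptance tests.
phases:
  pre_build:
    commands:
      - chmod +x build.sh
      - ./build.sh install
  build:
    commands:
      - ./build.sh deploy prod $TARGET_REGION $CODEBUILD_SRC_DIR_prodPackage
      - ./build.sh test-acceptance prod
```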

&lt;h2&gt;
  
  
  Deployment Permissions and Cross-account access
&lt;/h2&gt;

&lt;p&gt;This was probably the hardest part of the whole process. As I mentioned above, the deployment script assumes a &lt;code&gt;deployer-role&lt;/code&gt; IAM role that already exists in the AWS account of the stage being deployed to.&lt;br&gt;
To set up these roles, I again used the CDK to define a custom construct &lt;code&gt;CrossAccountDeploymentRole&lt;/code&gt; as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;CrossAccountDeploymentRoleProps&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="cm"&gt;/** account ID where CodePipeline/CodeBuild is hosted */&lt;/span&gt;
    &lt;span class="nl"&gt;deployingAccountId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="cm"&gt;/** stage for which this role is being created */&lt;/span&gt;
    &lt;span class="nl"&gt;targetStageName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="cm"&gt;/** Permissions that deployer needs to assume to deploy stack */&lt;/span&gt;
    &lt;span class="nl"&gt;deployPermissions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PolicyStatement&lt;/span&gt;&lt;span class="p"&gt;[];&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="cm"&gt;/**
 * Creates an IAM role to allow for cross-account deployment of a service's resources.
 */&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CrossAccountDeploymentRole&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;Construct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="nf"&gt;getRoleNameForService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;stage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;stage&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-deployer-role`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="nf"&gt;getRoleArnForService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;stage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;accountId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s2"&gt;`arn:aws:iam::&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;accountId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:role/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;CrossAccountDeploymentRole&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRoleNameForService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;stage&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;deployerRole&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Role&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;deployerPolicy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Policy&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;roleName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;parent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;CrossAccountDeploymentRoleProps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;parent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;roleName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;CrossAccountDeploymentRole&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRoleNameForService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;targetStageName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="c1"&gt;// Cross-account assume role&lt;/span&gt;
        &lt;span class="c1"&gt;// https://awslabs.github.io/aws-cdk/refs/_aws-cdk_aws-iam.html#configuring-an-externalid&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;deployerRole&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Role&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;deployerRole&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;roleName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;roleName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;assumedBy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;AccountPrincipal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;deployingAccountId&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;passrole&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PolicyStatement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;PolicyStatementEffect&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Allow&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addActions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;iam:PassRole&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;addAllResources&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;deployerPolicy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Policy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;deployerPolicy&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;policyName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;roleName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-policy`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;statements&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;passrole&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;deployPermissions&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;deployerPolicy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;attachToRole&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;deployerRole&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;props&lt;/code&gt; object passed to the constructor contains the specific requirements of the service. The main thing here is &lt;code&gt;deployPermissions&lt;/code&gt;: a set of IAM policy statements that enable the user running the Serverless Framework’s &lt;code&gt;sls deploy&lt;/code&gt; command to deploy all the necessary resources (CloudFormation stacks, Lambda functions, etc.).&lt;/p&gt;
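&lt;p&gt;For illustration, instantiating the construct might look like the following (a sketch only: the service name, the &lt;code&gt;DEV_ACCOUNT_ID&lt;/code&gt; constant and the action list are placeholders, and the wildcard actions shown are far broader than you would want to keep long-term):&lt;/p&gt;

```typescript
// Hypothetical usage of CrossAccountDeploymentRole, deployed into the target
// (staging) account. Uses the same pre-1.0 CDK IAM API as the construct above;
// imports omitted. Names, account ID and actions are placeholders.
const deployPermissions = new PolicyStatement(PolicyStatementEffect.Allow)
    .addActions(
        'cloudformation:*',
        'lambda:*',
        's3:*',
        'apigateway:*',
    ).addAllResources();

new CrossAccountDeploymentRole(this, 'CrossAccountDeploymentRole', {
    serviceName: 'ac-rest-api',
    deployingAccountId: DEV_ACCOUNT_ID, // account where CodePipeline/CodeBuild runs
    targetStageName: 'staging',
    deployPermissions: [deployPermissions],
});
```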

&lt;p&gt;An important thing to note is that this set of permissions is different from the &lt;em&gt;runtime&lt;/em&gt; permissions your serverless service needs to execute. Working out what &lt;em&gt;deploy-time&lt;/em&gt; IAM permissions a service needs is a well-known problem in the serverless space and one that I haven’t yet found a nice solution for. I initially started by giving my role admin access until I got the end-to-end pipeline working, and then worked backwards to add more granular permissions, but this involved a lot of trial and error.&lt;/p&gt;

&lt;p&gt;For more information on how IAM works with the Serverless Framework, I’d recommend reading &lt;a href="https://serverless.com/blog/abcs-of-iam-permissions/" rel="noopener noreferrer"&gt;The ABCs of IAM: Managing permissions with Serverless&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  A few tips for building and testing your own pipelines
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Always run the &lt;code&gt;build.sh&lt;/code&gt; file locally first to make sure it works on your machine.&lt;/li&gt;
&lt;li&gt;Use the "Release Change" button in the CodePipeline console to start a new pipeline execution without resorting to pushing dummy commits to GitHub.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Future Enhancements
&lt;/h2&gt;

&lt;p&gt;Going forward, here are a few additions I’d like to make to my CI/CD process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add a &lt;a href="https://docs.aws.amazon.com/codepipeline/latest/userguide/approvals.html" rel="noopener noreferrer"&gt;manual approval step&lt;/a&gt; before deploying to PROD.&lt;/li&gt;
&lt;li&gt;Add an automatic rollback action if acceptance tests against the PROD deployment fail.&lt;/li&gt;
&lt;li&gt;Trigger a shorter, test-only pipeline whenever a pull request is created or updated. This will help to identify integration issues earlier.&lt;/li&gt;
&lt;li&gt;Slack integration for build notifications.&lt;/li&gt;
&lt;li&gt;Update my package step so that it only builds a single immutable package that will be deployed to all environments.&lt;/li&gt;
&lt;li&gt;Get extra meta and have a pipeline for the CI/CD code itself 🤯. Changes to my pipeline will then have tests run against them before a new pipeline is deployed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;Next up in my migration plan is getting an auth mechanism built into API Gateway that new routes can use. That will be the first real production code to be tested by my new CI/CD pipeline.&lt;/p&gt;

&lt;p&gt;✉️ &lt;strong&gt;&lt;em&gt;If you enjoyed this article and would like to learn more about serverless, you might enjoy &lt;a href="https://winterwindsoftware.com/newsletter/" rel="noopener noreferrer"&gt;my weekly newsletter on building serverless apps in AWS&lt;/a&gt;.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;You also might enjoy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://winterwindsoftware.com/concerns-that-serverless-takes-away/" rel="noopener noreferrer"&gt;Concerns that serverless takes away&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://winterwindsoftware.com/serverless-definitions/" rel="noopener noreferrer"&gt;The differing definitions of “serverless”&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://winterwindsoftware.com/serverless-glossary/" rel="noopener noreferrer"&gt;A Serverless Glossary&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;strong&gt;&lt;a href="https://winterwindsoftware.com/serverless-cicd-pipelines-with-aws-cdk/" rel="noopener noreferrer"&gt;winterwindsoftware.com&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>codepipeline</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
