DEV Community

Chris Straw


File types, purposes, and directory skeleton - Part 5 of Implementing a RESTful API on AWS

Quick recap: this is a soup-to-nuts series covering implementing a full-featured REST API on AWS from the perspective of a long-time infrastructure engineer with a historical preference for Java-based solutions.

In the previous parts of this series, we (a) set up our build environment; (b) registered our AWS account; (c) created our Administrative users; and (d) stubbed out our default Serverless service.

We'll now cover (I) the various file types we will be using; and (II) our directory structure; while endeavoring to explain the design decisions behind why we are proceeding in the way we are.

I. Various File Types

A. Serverless / CloudFormation Configuration Files

At its core, the Serverless Framework is an extremely powerful JavaScript module that translates a YAML configuration file named serverless.yml into vendor-specific cloud service instructions. In the case of AWS, it generates an AWS CloudFormation template containing resource declarations such as AWS::Lambda::Function for AWS Lambda or AWS::DynamoDB::Table for DynamoDB.

But why introduce this intermediate Serverless Framework instead of just using the AWS CLI? A number of reasons, including: (1) the veneer of avoiding vendor lock-in; (2) the additional variable support; and (3) the ability to develop and debug our code offline without consuming AWS resources.

With that, let's look at the default serverless.yml file that we created in the last part of this series. Switch to your working directory and type code serverless.yml to open the file in VS Code.

1) Initial Serverless.yml Skeleton

Full contents of the default serverless.yml
# Welcome to Serverless!
#
# This file is the main config file for your service.
# It's very minimal at this point and uses default values.
# You can always add more config options for more control.
# We've included some commented out config examples here.
# Just uncomment any of them to get that config option.
#
# For full config options, check the docs:
#    docs.serverless.com
#
# Happy Coding!

service: myrestproject
# app and org for use with dashboard.serverless.com
#app: your-app-name
#org: your-org-name

# You can pin your service to only deploy with a specific Serverless version
# Check out our docs for more details
frameworkVersion: '2'

provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: 20201221

# you can overwrite defaults here
#  stage: dev
#  region: us-east-1

# you can add statements to the Lambda function's IAM Role here
#  iamRoleStatements:
#    - Effect: "Allow"
#      Action:
#        - "s3:ListBucket"
#      Resource: { "Fn::Join" : ["", ["arn:aws:s3:::", { "Ref" : "ServerlessDeploymentBucket" } ] ]  }
#    - Effect: "Allow"
#      Action:
#        - "s3:PutObject"
#      Resource:
#        Fn::Join:
#          - ""
#          - - "arn:aws:s3:::"
#            - "Ref" : "ServerlessDeploymentBucket"
#            - "/*"

# you can define service wide environment variables here
#  environment:
#    variable1: value1

# you can add packaging information here
#package:
#  include:
#    - include-me.js
#    - include-me-dir/**
#  exclude:
#    - exclude-me.js
#    - exclude-me-dir/**

functions:
  hello:
    handler: handler.hello
#    The following are a few example events you can configure
#    NOTE: Please make sure to change your handler code to work with those events
#    Check the event documentation for details
#    events:
#      - httpApi:
#          path: /users/create
#          method: get
#      - websocket: $connect
#      - s3: ${env:BUCKET}
#      - schedule: rate(10 minutes)
#      - sns: greeter-topic
#      - stream: arn:aws:dynamodb:region:XXXXXX:table/foo/stream/1970-01-01T00:00:00.000
#      - alexaSkill: amzn1.ask.skill.xx-xx-xx-xx
#      - alexaSmartHome: amzn1.ask.skill.xx-xx-xx-xx
#      - iot:
#          sql: "SELECT * FROM 'some_topic'"
#      - cloudwatchEvent:
#          event:
#            source:
#              - "aws.ec2"
#            detail-type:
#              - "EC2 Instance State-change Notification"
#            detail:
#              state:
#                - pending
#      - cloudwatchLog: '/aws/lambda/hello'
#      - cognitoUserPool:
#          pool: MyUserPool
#          trigger: PreSignUp
#      - alb:
#          listenerArn: arn:aws:elasticloadbalancing:us-east-1:XXXXXX:listener/app/my-load-balancer/50dc6c495c0c9188/
#          priority: 1
#          conditions:
#            host: example.com
#            path: /hello

#    Define function environment variables here
#    environment:
#      variable2: value2

# you can add CloudFormation resource templates here
#resources:
#  Resources:
#    NewResource:
#      Type: AWS::S3::Bucket
#      Properties:
#        BucketName: my-new-bucket
#  Outputs:
#     NewOutput:
#       Description: "Description for the output"
#       Value: "Some output value"

You'll see that most of the default file consists of commented-out examples, with only nine lines of actual configuration in YAML format:

service: myrestproject
frameworkVersion: '2'

provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: 20201221
functions:
  hello:
    handler: handler.hello

Each of these is a key/value pair. An important point to remember is that key names, like variable names, are constants that cannot be set dynamically. Depending on the situation, a key name is dictated by either: (a) Serverless; (b) AWS; or (c) you, the end user. The values, on the other hand, will often be set dynamically.

Let's cover each of the nine functional lines in the default configuration file:


service: myrestproject

a) service

The value of the service key will be used to generate the name of your AWS deployment bundle. AWS refers to these--unsurprisingly--as 'stacks'.

Serverless follows the recommended practice of appending a development stage modifier to the base stack name. By default, this value is 'dev', giving us a full stack name of 'myrestproject-dev'.

You can see the list of deployed stacks through both the web UI (be sure your region in the top right is set correctly):

https://console.aws.amazon.com/cloudformation/

and the AWS CLI:

$ aws cloudformation list-stacks

frameworkVersion: '2'

b) frameworkVersion

The version of the Serverless Framework. The supplied value pins our service to any 2.x release of the Framework. No need to change it.


provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: 20201221

c) provider

Provider is our first structured entry. All of our vendor-specific entries will be children of this key. There are a number of additional child keys available, but we are only covering the basic ones for now:

i) name

The name of our cloud service vendor: 'aws'. Other options would include 'azure' or whatever that thing is that Google offers.

ii) runtime

The default version of Node.js needed to run our functions. (We can override this on a per-function basis on the off-chance it is necessary). AWS claims to support Node.js version 14, but at least some features were not working correctly last time I checked in March 2021 (e.g. nullish coalescing). Go ahead and keep it as nodejs12.x for the time being.
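For reference, this is the Node 14 feature mentioned above. A short sketch showing how nullish coalescing (??) differs from logical OR (||):

```javascript
// Nullish coalescing (??), the Node 14 feature mentioned above.
// Unlike ||, it only falls back when the left side is null or undefined,
// so legitimate falsy values like 0 or '' are preserved.
const retries = 0;

const withOr = retries || 3;       // || treats 0 as falsy, so this is 3
const withNullish = retries ?? 3;  // ?? keeps the defined value, so this is 0

console.log(withOr, withNullish);
```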

iii) lambdaHashingVersion

The Serverless Framework occasionally deprecates and changes underlying functionality. In this instance, they apparently are transitioning the underlying hash function used to determine if a function has been altered. Since we are just starting out, we should switch right now to what will be the default in Serverless Framework 3.0 by setting this value to 20201221 as indicated in the default serverless file.


functions:
  hello:
    handler: handler.hello

d) functions

Back at the top level, functions contains a structured description of our lambda functions. Each function will be its own entry.

i) hello

This is a developer-selected name used for referencing the function elsewhere in our serverless.yml and AWS CloudFormation configuration files. Here, the provided name is hello, but it could be yolo or mydogspot or something else, subject to the caveat noted before that key names are required to be constants (e.g. it cannot be named ${self:custom.myLambdaName}).

(1) handler

The JavaScript function that will process executions of this lambda function. The format is <filename>.<function name> (e.g. handler.hello points to the hello function in handler.js within the base directory, whereas ./src/aws/AWSController.findOne points at the findOne function in AWSController.js within the ./src/aws/ directory).

2) Modified Serverless.yml

With this basic understanding of our serverless.yml configuration file out of the way, let's start putting together a working REST template.

Full contents of the modified serverless.yml
# app and org for integration with dashboard.serverless.com
# Note that using this will cause Serverless to inject a logging handler proxy
# before the calls to your lambda functions, resulting in the renaming of your handler
# in the AWS interface
#org: <your_org_name_here>
#app: <your_dashboard_appname_here>

# name of our microservice
service: ${self:custom.resourceName}Service

frameworkVersion: '>=2.0'
variablesResolutionMode: 20210219

#############################################################################################
# CUSTOM VARIABLES - all variables necessary to get up and running should be in this section
#############################################################################################
custom:
  resourceName: teams
  # Email address of primary contact; embedded into internal "Tags" to aid in
  # internal maintenance
  primaryContact: <YOUR EMAIL ADDRESS>
  region: <YOUR AWS REGION>

  dynamodb:
    # For tables, I have adopted a naming convention of <SERVICE>-Table-<EntityName>-<Stage>; e.g. SportsApp-Table-Players-dev
    resourceTable: ${self:service}-Table-${self:custom.resourceName}-${self:provider.stage}
    # For indexes, I have adopted a naming convention of <SERVICE>-TableIdx-<EntityName>-<Constraint/SearchKey>-<Stage>; e.g. SportsApp-TableIdx-Players-LastName-dev
    resourceTableIdx: ${self:service}-TableIdx-${self:custom.resourceName}-Name-${self:provider.stage}

provider:
  name: aws
  # AWS claims they support NodeJS14, but as of March 2021, certain Node14 features don't appear to function correctly, e.g. nullish coalescing
  runtime: nodejs14.x
  lambdaHashingVersion: 20201221
  stage: dev  
  region: ${self:custom.region}

  apiGateway:
    # setting required for the transition from Serverless Framework 2.0 to 3.0+
    shouldStartNameWithService: true

    # we can have AWS API Gateway validate incoming requests prior to sending it to our lambda,
    # saving us the processing cost for malformed requests
    request:
      schemas:
        resource-create-model:
          name: ${self:custom.resourceName}CreateModel
          schema: ${file(src/schema/createResourceSchema.json)}
          description: "A Model validation for creating ${self:custom.resourceName}"

  # Environment variables our Javascript code relies on
  environment:
    NODE_ENV: ${self:provider.stage}
    TABLE_NAME_TEAM: ${self:custom.dynamodb.resourceTable}
    TABLE_IDX_TEAM_NAME: ${self:custom.dynamodb.resourceTableIdx}

  # Base IAM role that the lambdas will execute with
  iam: ${file(./resources/aws/iam/LambdaRole.yml)}

# Serverless Framework plugins
plugins:
  # support for typescript
  - serverless-plugin-typescript

  # Allows us to use a local instance during development, eliminating need
  # to publish to cloud to see changes to functions and allowing use of a debugger
  - serverless-offline

# define all of our REST microservice endpoints
functions: ${file(./resources/aws/RESTFunctions.yml)}

# Settings relating to how our module will be bundled for execution
package:
  # Don't include node_modules like typescript and unit testing when
  # publishing our bundle to the cloud; results in notably smaller upload bundle
  excludeDevDependencies: true

resources:
  # Values that we want to export out of this service for use in other services
  # Including an export name pollutes the global namespace and is only necessary
  # when you don't know this Stack's name, as you can otherwise obtain the value
  # using the key name you chose below, e.g. 'ApiGatewayRestApiExport'
  Outputs:
    ApiGatewayRestApiExport:
      Value: !Ref 'ApiGatewayRestApi'
#      Export:
#        Name: ${self:service}-${self:provider.stage}-api

  # The various resources can be given custom developer-friendly names, as shown below.
  # What the resource consists of is determined by the 'type' sub-property found in the files
  Resources:

    ############################################################
    # DATABASE RESOURCES
    ############################################################
    # Data table for this microservice
    ResourceTable:  ${file(./resources/aws/dynamodb/ResourceTable.yml)}




a) Change our service name

Change the service value from myrestproject to

service: ${self:custom.resourceName}Service

Note that for this value, we have introduced the use of variables for the first time. These come in the format ${source:key}. For references to the present serverless file, the source is self. So the value for our service key is now going to consist of the value located at custom.resourceName concatenated with 'Service'. Note that we are referencing a value that has not yet been defined at this point in the file--that is okay, as long as it is defined further down. So let's get to it.
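To build intuition for how a ${self:...} reference behaves, here is a purely illustrative sketch of the lookup (a hypothetical helper; the real Serverless resolver supports many sources, nesting, and default values):

```javascript
// Illustrative only: resolve ${self:dot.path} references against a parsed
// serverless.yml object. The real Serverless resolver is far more capable.
function resolveSelf(template, doc) {
  return template.replace(/\$\{self:([^}]+)\}/g, (_, path) =>
    path.split('.').reduce((node, key) => node[key], doc)
  );
}

// Mirrors the service key above: custom.resourceName + 'Service'
const config = { custom: { resourceName: 'teams' } };
console.log(resolveSelf('${self:custom.resourceName}Service', config));
```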

b) 'custom' top level entry

The Serverless Framework defines at top level of our YAML hierarchy a custom key, which can be used to hold many of our installation-specific values. The various key-value pairs in this custom section are user-defined however we see fit.

We'll go ahead and add the following:

#############################################################################################
# CUSTOM VARIABLES - all variables necessary to get up and running should be in this section
#############################################################################################
custom:
  # Name of our REST resource, e.g. 'players', 'team', 'league', etc.
  resourceName: teams
  # Email address of primary contact; embedded into internal "Tags" to aid in
  # internal maintenance
  primaryContact: <YOUR EMAIL ADDRESS>
  region: <YOUR AWS REGION>

  dynamodb:
    # For tables, I have adopted a naming convention of <SERVICE>-Table-<EntityName>-<Stage>; e.g. SportsApp-Table-Players-dev
    resourceTable: ${self:service}-Table-${self:custom.resourceName}-${self:provider.stage}
    # For indexes, I have adopted a naming convention of <SERVICE>-TableIdx-<EntityName>-<Constraint/SearchKey>-<Stage>; e.g. SportsApp-TableIdx-Players-LastName-dev
    resourceTableIdx: ${self:service}-TableIdx-${self:custom.resourceName}-Name-${self:provider.stage}

For this example, we'll create a RESTful API for a 'teams' resource. (Note the use of plural and lowercase). This means the service key we defined earlier will be set to 'teamsService'.

Be sure to put in your email address and your AWS region (e.g. us-east-2).

We have also defined a key-value branch called dynamodb (again, we could have called it anything, like myNotSqlParameters) with two child values: (1) the eventual name of our database "table"; and (2) the eventual name of our "table"'s uniqueness index (we'll explain this later when discussing data storage).

You'll recall that ${self:service} = teamsService and ${self:custom.resourceName} = teams and the default stage is dev so:

resourceTable: ${self:service}-Table-${self:custom.resourceName}-${self:provider.stage}

will become

resourceTable: teamsService-Table-teams-dev

By including the stage (dev) in the names of all deployed resources, such as the table and index names, we ensure our development environment won't interfere with our production (prod) environment.
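The naming convention can be expressed as a pair of tiny helpers (hypothetical code; Serverless performs this interpolation itself at deploy time):

```javascript
// Hypothetical helpers mirroring the <SERVICE>-Table-<EntityName>-<Stage>
// and <SERVICE>-TableIdx-<EntityName>-<Key>-<Stage> conventions above.
function tableName(service, entity, stage) {
  return `${service}-Table-${entity}-${stage}`;
}

function tableIndexName(service, entity, key, stage) {
  return `${service}-TableIdx-${entity}-${key}-${stage}`;
}

console.log(tableName('teamsService', 'teams', 'dev'));
console.log(tableIndexName('teamsService', 'teams', 'Name', 'dev'));
```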

c) 'provider'

We are now going to add a few more keys to the provider section:

provider:
  name: aws
  runtime: nodejs14.x
  lambdaHashingVersion: 20201221
  stage: dev  
  region: ${self:custom.region}

  apiGateway:
    # setting required for the transition from Serverless Framework 2.0 to 3.0+
    shouldStartNameWithService: true


  # Environment variables our Javascript code relies on
  environment:
    NODE_ENV: ${self:provider.stage}
    TABLE_NAME_TEAM: ${self:custom.dynamodb.resourceTable}
    TABLE_IDX_TEAM_NAME: ${self:custom.dynamodb.resourceTableIdx}

  # Base IAM role that the lambdas will execute with
  iam: ${file(./resources/aws/iam/LambdaRole.yml)}

To start, we now explicitly set the stage to dev,

  stage: dev  

which you will recall is the default. We can override this during deployment using command line parameters:

$ serverless deploy --stage prod

We next introduce for the first time the re-use of our custom parameters from higher in the file by setting region to the previously defined ${self:custom.region}.

  region: ${self:custom.region}

Again, this lets us keep most of our deployment configuration in one place under custom.

This is followed by another Serverless transition variable.

apiGateway:
    shouldStartNameWithService: true

Starting in Framework 3.0, the naming convention for API Gateway deployments will change from {stage}-{service} to {service}-{stage}. We should just adopt this approach from the get-go.

Next we set environment variables that will be passed into the Node.js runtime environment. This is a means of communicating our configuration to our JavaScript code.

  environment:
    NODE_ENV: ${self:provider.stage}
    TABLE_NAME_TEAM: ${self:custom.dynamodb.resourceTable}
    TABLE_IDX_TEAM_NAME: ${self:custom.dynamodb.resourceTableIdx}

Our JavaScript code will be able to look these up by examining the runtime environment variables. We pass: (1) the deployment stage; (2) our "table" name; and (3) our "table" uniqueness index name.
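On the JavaScript side, a small (hypothetical) helper can gather those values from process.env, falling back to 'dev' when NODE_ENV is unset:

```javascript
// Gather the configuration that serverless.yml injects via
// provider.environment. Accepts an env object so it is easy to test.
function loadConfig(env = process.env) {
  return {
    stage: env.NODE_ENV || 'dev',
    tableName: env.TABLE_NAME_TEAM,
    tableIndexName: env.TABLE_IDX_TEAM_NAME,
  };
}

const cfg = loadConfig();
console.log(`stage=${cfg.stage} table=${cfg.tableName}`);
```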

Finally, we add our first IAM configuration.

  # Base IAM role that the lambdas will execute with
  iam: ${file(./resources/aws/iam/LambdaRole.yml)}

Notably, we do this by introducing the file import concept. Serverless allows us to break individual portions of our base configuration file into discrete files. Separating our configuration file into subfiles provides a number of benefits, including: (1) more granular version control; (2) smaller more readable files; and (3) a consistent, known location for finding settings in any given module.

We'll go over the contents of this file in a bit, but first let's stay focused on our serverless.yml.

d) plugins

We are next going to add a new top-level entry called plugins. This key contains a YAML list of Serverless Framework extensions used by our project. I generally try to keep these to a minimum due to inconsistency in how frequently the various plugins are maintained. The two I find absolutely essential are (a) serverless-plugin-typescript, which provides TypeScript support on top of JavaScript; and (b) serverless-offline, which allows offline development of our lambda modules.

# Serverless Framework plugins
plugins:
  # support for typescript
  - serverless-plugin-typescript

  # Allows us to use a local instance during development, eliminating need
  # to publish to cloud to see changes to functions and allowing use of a debugger
  - serverless-offline

e) functions

You'll recall this previously consisted of a structured key-value tree listing the base hello function. We are going to replace this with a file import.

# define all of our REST microservice endpoints
functions: ${file(./resources/aws/RESTFunctions.yml)}

f) package

Another new top-level key. This defines settings relating to the packaging of our stack for deployment. The only value we will include is excludeDevDependencies, which filters out of our deployment those modules relating to our precompiler (TypeScript) and our testing harness (Jest). (Note that this value is true by default, so this line is functionally redundant as set.)

# Settings relating to how our module will be bundled for execution
package:
  # Don't include node_modules like typescript and unit testing when
  # publishing our bundle to the cloud; results in notably smaller upload bundle
  excludeDevDependencies: true

g) resources

Finally, our last top-level key. This is a catch-all section where various resources are created, including the definition of our "table". (Later on, we are going to define our user authentication here as well, but we'll skip that for now.)

resources:
  # Values that we want to export out of this service for use in other services
  # Including an export name pollutes the global namespace and is only necessary
  # when you don't know this Stack's name, as you can otherwise obtain the value
  # using the key name you chose below, e.g. 'ApiGatewayRestApiExport'
  Outputs:
    ApiGatewayRestApiExport:
      Value: !Ref 'ApiGatewayRestApi'
#      Export:
#        Name: ${self:service}-${self:provider.stage}-api

  # The various resources can be given custom developer-friendly names, as shown below.
  # What the resource consists of is determined by the 'type' sub-property found in the files
  Resources:

    ############################################################
    # DATABASE RESOURCES
    ############################################################
    # Data table for this microservice
    ResourceTable:  ${file(./resources/aws/dynamodb/ResourceTable.yml)}
i) Outputs

Here we instruct AWS what values should be "exported" from our stack. This is done in either of two ways: (1) we can make a value available in our AWS global namespace; or (2) we can make a value available as a parameter attached to our stack name. I personally hate polluting the global namespace and see little value in it, so I take the second approach (but leave an example of the first approach commented out in the code).

  Outputs:
    ApiGatewayRestApiExport:
      Value: !Ref 'ApiGatewayRestApi'

Here, we define an export key named ApiGatewayRestApiExport--this could have been anything we wanted, such as MyDogSpot. For the value, we use the !Ref intrinsic function for the first time. !Ref instructs AWS to inject the as-deployed unique name of our ApiGatewayRestApi object.

Now, if you've been paying attention, you might be saying right now "WHAT ApiGatewayRestApi object!??" When deploying our stack, Serverless creates an API Gateway resource with the logical ID ApiGatewayRestApi that routes requests to our lambda. By exporting its value, we will be able to get a handle to it at a later stage.

ii) Resources

Somewhat confusingly, the resources key has a child named Resources.

  Resources:

    ############################################################
    # DATABASE RESOURCES
    ############################################################
    # Data table for this microservice
    ResourceTable:  ${file(./resources/aws/dynamodb/ResourceTable.yml)}

As noted in the comments, each 'Resources' entry will be of a particular type as indicated by a Type key with a value such as AWS::DynamoDB::Table. These would normally appear here, but because we are using file imports, they will instead be in our sub-files.

With that, our final serverless.yml file looks like this for the time being:

# app and org for integration with dashboard.serverless.com
# Note that using this will cause Serverless to inject a logging handler proxy
# before the calls to your lambda functions, resulting in the renaming of your handler
# in the AWS interface
#org: <your_org_name_here>
#app: <your_dashboard_appname_here>

# name of our microservice
service: ${self:custom.resourceName}Service

frameworkVersion: '>=2.0'
variablesResolutionMode: 20210219

#############################################################################################
# CUSTOM VARIABLES - all variables necessary to get up and running should be in this section
#############################################################################################
custom:
  resourceName: teams
  # Email address of primary contact; embedded into internal "Tags" to aid in
  # internal maintenance
  primaryContact: <YOUR EMAIL ADDRESS>
  region: <YOUR AWS REGION>

  dynamodb:
    # For tables, I have adopted a naming convention of <SERVICE>-Table-<EntityName>-<Stage>; e.g. SportsApp-Table-Players-dev
    resourceTable: ${self:service}-Table-${self:custom.resourceName}-${self:provider.stage}
    # For indexes, I have adopted a naming convention of <SERVICE>-TableIdx-<EntityName>-<Constraint/SearchKey>-<Stage>; e.g. SportsApp-TableIdx-Players-LastName-dev
    resourceTableIdx: ${self:service}-TableIdx-${self:custom.resourceName}-Name-${self:provider.stage}

provider:
  name: aws
  # AWS claims they support NodeJS14, but as of March 2021, certain Node14 features don't appear to function correctly, e.g. nullish coalescing
  runtime: nodejs14.x
  lambdaHashingVersion: 20201221
  stage: dev  
  region: ${self:custom.region}

  apiGateway:
    # setting required for the transition from Serverless Framework 2.0 to 3.0+
    shouldStartNameWithService: true

    # we can have AWS API Gateway validate incoming requests prior to sending it to our lambda,
    # saving us the processing cost for malformed requests
    request:
      schemas:
        resource-create-model:
          name: ${self:custom.resourceName}CreateModel
          schema: ${file(src/schema/createResourceSchema.json)}
          description: "A Model validation for creating ${self:custom.resourceName}"

  # Environment variables our Javascript code relies on
  environment:
    NODE_ENV: ${self:provider.stage}
    TABLE_NAME_TEAM: ${self:custom.dynamodb.resourceTable}
    TABLE_IDX_TEAM_NAME: ${self:custom.dynamodb.resourceTableIdx}

  # Base IAM role that the lambdas will execute with
  iam: ${file(./resources/aws/iam/LambdaRole.yml)}

# Serverless Framework plugins
plugins:
  # support for typescript
  - serverless-plugin-typescript

  # Allows us to use a local instance during development, eliminating need
  # to publish to cloud to see changes to functions and allowing use of a debugger
  - serverless-offline

# define all of our REST microservice endpoints
functions: ${file(./resources/aws/RESTFunctions.yml)}

# Settings relating to how our module will be bundled for execution
package:
  # Don't include node_modules like typescript and unit testing when
  # publishing our bundle to the cloud; results in notably smaller upload bundle
  excludeDevDependencies: true

resources:
  # Values that we want to export out of this service for use in other services
  # Including an export name pollutes the global namespace and is only necessary
  # when you don't know this Stack's name, as you can otherwise obtain the value
  # using the key name you chose below, e.g. 'ApiGatewayRestApiExport'
  Outputs:
    ApiGatewayRestApiExport:
      Value: !Ref 'ApiGatewayRestApi'
#      Export:
#        Name: ${self:service}-${self:provider.stage}-api

  # The various resources can be given custom developer-friendly names, as shown below.
  # What the resource consists of is determined by the 'type' sub-property found in the files
  Resources:

    ############################################################
    # DATABASE RESOURCES
    ############################################################
    # Data table for this microservice
    ResourceTable:  ${file(./resources/aws/dynamodb/ResourceTable.yml)}




Before we go on to define the subfiles of our configuration, let's first talk about some of the other file types we will be dealing with.

B. Dependencies / Package.json

As discussed in the previous post, npm uses a JSON-based package.json manifest file to track information about our project dependencies. This file will be located in our root directory. The key parts are as follows:

1) name, version, description

These are self-explanatory, being (a) the name of our application; (b) a version identifier; and (c) a textual description.

2) main

The primary entry point of our module.

3) scripts

Now things begin to get interesting. scripts provides us a way to define a list of commands that we can run using the command npm run <command>. For example, npm run coverage or npm run deploy.

4) devDependencies

A list of dependencies that are necessary during the compilation and testing phases of development, such as modules to support Typescript, Serverless Framework offline testing, and unit tests.

Although we can add modules to this list by hand, the normal method is through the command:

$ npm i --save-dev <module>

Note the use of the flag --save-dev. This tells npm that the module is a development dependency, rather than a runtime deployment dependency. If you forget to use the flag, just run the command again with the flag and npm will move the module to the appropriate part of the configuration file.

NPM will install each of these modules in the "node_modules" subdirectory of our project.

I use the following as a baseline:

a) Serverless modules

The following adds Serverless Framework support, offline Serverless Framework testing and debugging, and Serverless Framework Typescript support.

$ npm i --save-dev serverless
$ npm i --save-dev serverless-offline
$ npm i --save-dev serverless-plugin-typescript

b) Typescript modules

The first of the following adds TypeScript support. The rest provide TypeScript type definitions for various JavaScript modules that do not have TypeScript support built into them.

$ npm i --save-dev typescript
$ npm i --save-dev @types/aws-lambda
$ npm i --save-dev @types/jest
$ npm i --save-dev @types/node
$ npm i --save-dev @types/uuid

c) Jest Unit Testing modules

The first installs Jest as our unit test runner. The second adds DynamoDB support to Jest. The rest allow Jest to use the Babel JavaScript compiler, along with Babel support for environment variables and TypeScript.

$ npm i --save-dev jest
$ npm i --save-dev @shelf/jest-dynamodb

$ npm i --save-dev babel-jest
$ npm i --save-dev @babel/core
$ npm i --save-dev @babel/preset-env
$ npm i --save-dev @babel/preset-typescript

5) dependencies

A list of dependencies required by our module at runtime. We ideally want to keep this list small. We add items to this list by running the npm i command without the --save-dev flag. e.g.

$ npm i dynamoose

I include only dynamoose (a framework for interacting with DynamoDB) and uuid (a library for generating unique identifiers).

$ npm i dynamoose
$ npm i uuid

6) Final package.json file

Our package.json file ends up looking like:

{
  "name": "teamMicroservice",
  "version": "1.0.0",
  "description": "This is the Team Microservice",
  "main": "index.js",
  "scripts": {
    "lint": "tslint -p tsconfig.json -c tslint.json",
    "local": "serverless offline",
    "deploy": "serverless deploy",
    "test": "jest",
    "coverage": "jest --coverage",
    "clean": "git clean -fXd -e \\!node_modules -e \\!node_modules/**/*"
  },
  "devDependencies": {
    "@babel/core": "^7.13.10",
    "@babel/preset-env": "^7.13.12",
    "@babel/preset-typescript": "^7.13.0",
    "@shelf/jest-dynamodb": "github:shelfio/jest-dynamodb",
    "@types/aws-lambda": "^8.10.51",
    "@types/jest": "^26.0.21",
    "@types/node": "^14.0.23",
    "@types/uuid": "^8.3.0",
    "babel-jest": "^26.6.3",
    "jest": "^26.6.3",
    "serverless": "^2.30.3",
    "serverless-offline": "^6.8.0",
    "serverless-plugin-typescript": "^1.1.9",
    "typescript": "^3.8.3"
  },
  "dependencies": {
    "dynamoose": "^2.7.3",
    "uuid": "^8.3.2"
  }
}

C. launch.json

We'll next briefly touch on this file, which does not yet exist. The launch.json file is used to configure the Visual Studio Code debugger. Later on, we will define multiple debug configurations, including ones to (1) launch Serverless offline under the debugger; (2) run all of our Jest unit tests; and (3) debug only the currently open Jest file. For now, just know that it will exist and we will customize it when we get into our debugger configuration.

D. jest.config.js

Jest is our unit test harness. This file will contain its configuration settings. To create it, we run:

$ npx jest --init
The following questions will help Jest to create a suitable configuration for your project

? Would you like to use Jest when running "test" script in "package.json"? › (Y/n) Y
? Would you like to use Typescript for the configuration file? › (y/N) N
? Choose the test environment that will be used for testing › - Use arrow-keys. Return to submit.
❯   node
    jsdom (browser-like)
? Do you want Jest to add coverage reports? (y/N) N
? Which provider should be used to instrument code for coverage? › - Use arrow-keys. Return to submit.
❯   v8
    babel
? Automatically clear mock calls and instances between every test? › (y/N) Y
  Modified <MYPATH>/MyRESTProject/package.json

  Configuration file created at <MYPATH>/MyRESTProject/jest.config.js

We'll configure it further later to add DynamoDB support. For now, let's move on.
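The generated jest.config.js is mostly commented-out defaults; the handful of active lines reflecting our answers above look roughly like this (exact contents vary by Jest version):

```javascript
// jest.config.js -- sketch of the settings our `jest --init` answers produce
module.exports = {
  // reset mock calls and instances between every test, per our answer above
  clearMocks: true,
  // run tests in a plain Node environment rather than jsdom
  testEnvironment: "node",
  // only relevant when running `jest --coverage`
  coverageProvider: "v8",
};
```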

II. Our Directory Structure

Like many of you coming from the Spring/Maven world, I generally prefer a project hierarchy consisting of:

|____src
| |____main
| | |____resources
| | | | . . .
| | |____java
| | | | . . .
| |____test
| | | . . .
|____target
| | . . .

Although we will try to keep the source/resource directory distinction, we will be leaving behind the separate test tree. The reason: in a microservice this concise, each file's unit test belongs in the same directory as its source. Later on, we will introduce a separate test tree that will contain only our integration tests.

Our new directory structure will be akin to:

.
|____resources
| |____aws
| | |____dynamodb
| | | |____ResourceTable.yml
| | |____RESTFunctions.yml
| | |____iam
| | | |____LambdaRole.yml
|____src
| |____utils
| | |____Response.ts
| | |____rest
| | | |____responses
| | | | |____ . . . .
| | | |____exceptions
| | | | |____ . . . .
| |____schema
| | |____ . . . .
| |____model
| | |____Team.unit.test.ts
| | |____Team.ts
| |____aws
| | |____AWSRestController.ts
| | |____AWSTeamController.ts
| | |____AWSTeamController.unit.test.ts
| |____service
| | |____TeamService.unit.test.ts
| | |____TeamService.ts

A. src Folder

We will store our various typescript files in a src folder. I'm still somewhat wedded to the model/view/controller structure, which influences the layout of the tree. From our base project folder:

$ mkdir -p src/aws/
$ mkdir -p src/model/
$ mkdir -p src/schema/
$ mkdir -p src/service/
$ mkdir -p src/utils/rest/responses
$ mkdir -p src/utils/rest/exceptions

We'll discuss our various source files and their contents in the next post. For now, let's finish our initial resources files.

B. resources Folder

We will be storing our various serverless.yml configuration files in a resources folder. Under this folder, we'll be creating subfolders for our various AWS components. From our base project folder:

$ mkdir -p resources/aws/dynamodb
$ mkdir -p resources/aws/iam

1. ResourceTable.yml

You'll recall that in our serverless.yml file, we declared our DynamoDB resource as being defined in the file resources/aws/dynamodb/ResourceTable.yml. Let's go ahead and create this file and set its contents as:

Type: AWS::DynamoDB::Table
Properties:
  TableName: ${self:custom.dynamodb.resourceTable}
  ProvisionedThroughput:
    ReadCapacityUnits: 1
    WriteCapacityUnits: 1
  AttributeDefinitions:
    - AttributeName: id
      AttributeType: S
    - AttributeName: name
      AttributeType: S
    - AttributeName: sortField
      AttributeType: S
  KeySchema:
    - AttributeName: id
      KeyType: HASH
    - AttributeName: sortField
      KeyType: RANGE       
  LocalSecondaryIndexes:
    - IndexName: ${self:custom.dynamodb.resourceTableIdx}
      KeySchema:
          - AttributeName: id
            KeyType: HASH
          - AttributeName: name
            KeyType: RANGE
      Projection:
        ProjectionType: KEYS_ONLY
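To make the key schema concrete: every item in the table is uniquely addressed by the pair (id, sortField), while the local secondary index lets us query within a given id by name instead. A hedged Typescript sketch of one item's shape (illustrative only; the real model comes from our Dynamoose schema in a later part, and the field values here are made up):

```typescript
// Shape of one item in the table defined above.
interface ResourceItem {
  id: string;        // partition (HASH) key
  sortField: string; // sort (RANGE) key -- (id, sortField) must be unique
  name: string;      // RANGE key of the local secondary index
}

const team: ResourceItem = {
  id: "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d", // hypothetical uuid
  sortField: "TEAM",
  name: "Platform Engineering",
};
console.log(team.id);
```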

We'll go into detail on this later.

2. LambdaRole.yml

Next, we create our resources/aws/iam/LambdaRole.yml file:

# This is the permissions that our lambda functions need
role:
  # grant access to dynamoDB table and associated table index
  statements:
    - Effect: "Allow"
      Action:
        - "dynamodb:Query"
        - "dynamodb:PutItem"
        - "dynamodb:Scan"
        - "dynamodb:DeleteItem"
        - "dynamodb:UpdateItem"
      Resource:
        - !GetAtt ResourceTable.Arn
        - !Sub arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${self:custom.dynamodb.resourceTable}/index/*    

3. RESTFunctions.yml

Followed by our resources/aws/RESTFunctions.yml:

# For example purposes, finding all entities is an unrestricted operation open to the entire world
findAll:
  handler: ./src/aws/AWSTeamController.findAll 
  events:
    - http:
        path: /${self:custom.resourceName}  
        method: get 
        cors: true
        # solely for creating a pretty label; sets the OperationName in API Gateway, useful for e.g. Swagger API documentation
        operationId: getTeams 

# For example purposes, looking up the details on a single entity requires obtaining authorization from IAM
findOne:
  handler: ./src/aws/AWSTeamController.findOne
  events:
    - http:
        path: /${self:custom.resourceName}/{id}
        method: get
        cors: true
        # authorization provided by IAM, rather than a custom authorizer or the Cognito user pool.
        # Even though there is a Cognito UserPool at the bottom of this all,
        # the AWS_IAM approach lets the restrictions be based on an IAM policy,
        # and we previously handed out policies through our Cognito Identity Pool based on a user's
        # highest-priority UserPool group. This also avoids a lambda execution to run a custom authorizer.
        # authorizer:
        #  type: AWS_IAM   

# Generally, deleting items should always require authorization
deleteOne:
  handler: ./src/aws/AWSTeamController.deleteOne
  events:
    - http:
        path: /${self:custom.resourceName}/{id}
        method: delete
        cors: true
        # Again we use an AWS_IAM authorizer and not a Cognito User Pool or custom authorizer
        #authorizer:
        #  type: AWS_IAM

# Generally, updating items should always require authorization
update:
  handler: ./src/aws/AWSTeamController.update
  events:
    - http:
        path: /${self:custom.resourceName}/{id}
        method: put
        cors: true
        #authorizer:
        #  type: AWS_IAM
        # Listing a schema here will have AWS validate the request prior to calling our 
        # lambda, saving us execution costs for invalid requests; note that the schema for this
        # extremely simplistic model is the same as for creates, so we re-use the create request schema        
        #request:
        #  schemas:
        #    application/json: resource-create-model

# Generally, creating items should always require authorization
create:
  handler: ./src/aws/AWSTeamController.create
  events:
    - http:
        path: /${self:custom.resourceName}
        method: post
        cors: true
        #authorizer:
        #  type: AWS_IAM
        # Listing a schema here will have AWS validate the request prior to calling our 
        # lambda, saving us execution costs for invalid requests        
        #request:
        #  schemas:
        #    application/json: resource-create-model

Note that we currently have our authorizers disabled, as well as our schema validators. We'll cover those, along with the other parameters listed above, later. For now, it's time to wrap up this post before it gets any longer!
