Richard Keller

Posted on • Originally published at blog.richardkeller.net

Anatomy of a Serverless YAML File

Within the last few years, creating a serverless infrastructure has become more accessible than ever. With AWS, Azure, and Google Cloud, developers and engineers have almost limitless capabilities when deploying websites and applications. Serverless Framework, a popular framework for building serverless architectures, has made it even easier to use the biggest cloud computing platforms available. It does this by abstracting the cloud architecture configuration into a more straightforward form: YAML. In this post, we'll look at the serverless.yml file in more detail to see how it is used to set up a serverless architecture.

Serverless.yml

The serverless.yml is the heart of a serverless application. This file describes the entire application infrastructure, all the way from the programming language to resource access.

The primary section of this YAML file is the provider. In serverless.yml configuration, you have the option to use AWS, Google Cloud, or Microsoft Azure as your serverless provider. You can also specify the programming language you want to use. In the example below, I specify that I am going to use AWS with Python 3.7.

Example serverless.yml:

service: my-first-serverless-app
provider:
  name: aws
  runtime: python3.7

The serverless.yml file above is the foundation of what could be unlimited computing power, hundreds of serverless functions, and a database infrastructure that powers your website or application. Let's dive deeper into this configuration. I will be using AWS and Python for my base example.

Environment Variables

Setting up environment variables for your application is really easy with the serverless.yml file. Under the provider, you add an environment section with a list of key/value pairs. Taking our example from above a step further, let's add an environment variable that specifies the name of a users table.

provider:
  name: aws
  runtime: python3.7
  environment:
    USER_TABLE: users_table

Now in any serverless function we use, the environment variable USER_TABLE is available to us. We can also use this variable in other parts of our serverless.yml file to name resources or prefix HTTP paths.

To use this environment variable in the serverless.yml later on, use the following syntax. Notice that it is similar to traversing an object in other programming languages.

${self:provider.environment.USER_TABLE}
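For example, the same reference can name a resource elsewhere in the file. Here is a sketch, assuming a hypothetical DynamoDB table defined in the resources section (the table schema here is made up for illustration):

```yaml
resources:
  Resources:
    UsersTable:                       # hypothetical DynamoDB table
      Type: AWS::DynamoDB::Table
      Properties:
        # Reuses the environment variable defined under provider
        TableName: ${self:provider.environment.USER_TABLE}
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
```

Because the value lives in one place, renaming the table only requires touching the environment section.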

Sourcing Other Files

Often you need to separate your application's configuration into separate files. Serverless Framework has a really nice syntax for including variables from other files. Let's say, for example, that you want the users table to be named dev_users_table for a development branch. There are many ways to do this, but to illustrate sourcing other files, I'll show you one way.

You could have multiple config files that are used for specific environments. Let's say you have two YAML config files in your project's root folder:

Example dev.config.yml:

table_name: 'dev_users_table'

Example prod.config.yml:

table_name: 'prod_users_table'

Your production application would use prod.config.yml. To source this prod file, you could use the following syntax:

provider:
  name: aws
  runtime: python3.7
  environment:
    USER_TABLE: ${file(./prod.config.yml):table_name}

But wait, how do you deploy this application on dev? Do you have to change the file to specify dev? No, we can update this config to use the stage variable. The stage variable is a special variable in the Serverless Framework that specifies which environment you are deploying to. By default, the stage variable is dev.

Stage Variables

Let’s fix our serverless.yml so that we can use the dev config without updating the file manually. To do this, we use ${opt:stage, self:provider.stage}.

Example serverless.yml with stage variable:

provider:
  name: aws
  runtime: python3.7
  environment:
    USER_TABLE: ${opt:stage, self:provider.stage}_users_table

Now that we are using the stage variable, we no longer need the prod.config.yml file. We can easily prefix all our services with the stage variable so that when we deploy our application to test a new service or feature, we don't affect the production services.
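In practice (assuming the Serverless CLI is installed), ${opt:stage, ...} is filled in by the --stage flag at deploy time, falling back to the default when the flag is omitted:

```shell
# Stage defaults to "dev", so USER_TABLE resolves to dev_users_table
serverless deploy

# Passing --stage prod makes USER_TABLE resolve to prod_users_table
serverless deploy --stage prod
```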

If we did want to use separate config files we could use our stage variable to separate the configuration into separate files like this:

provider:
  name: aws
  runtime: python3.7
  environment:
    USER_TABLE: ${file(./${opt:stage, self:provider.stage}.config.yml):table_name}

Separating out your environment variables into files is a great way to organize bigger applications that use many resources such as S3 buckets, database tables, API version paths, etc.
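As an organizational sketch, a per-stage config file might grow to hold several values that are all referenced with the same ${file(...)} syntax. The bucket_name and api_version keys below are hypothetical, purely for illustration:

```yaml
# dev.config.yml — hypothetical per-stage settings collected in one place
table_name: 'dev_users_table'
bucket_name: 'dev-awesome-bucket'
api_version: 'v1'
```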

IAM Role Statements

If you have used AWS services, you have probably had to create an IAM user or role before. With serverless, you can give your application permissions to use resources via the serverless.yml file. The permission configuration is essentially AWS CloudFormation written in YAML. The permissions go in the iamRoleStatements section under the provider.

Example giving your application S3 Access:

If you have an S3 bucket named awesome-bucket-name, the following example will give your application full access (read, write, list, etc.) to that bucket and the objects in it.

provider:
  name: aws
  runtime: python3.7
  environment:
    USER_TABLE: users_table
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource:
        - arn:aws:s3:::awesome-bucket-name
        - arn:aws:s3:::awesome-bucket-name/*

Example giving your application SES access:

If you have an application that needs to send emails, you can give it access to Amazon SES like this. (Note: you need to set up SES with a verified domain separately.) The following example allows your application to send emails via any SES resource.

provider:
  name: aws
  runtime: python3.7
  environment:
    USER_TABLE: users_table
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "ses:SendEmail"
      Resource: "*"

Functions

I’ve been writing about this mysterious “application”, but I haven’t explained what that is. In your serverless.yml file, you configure functions to be created on your serverless provider’s platform. For AWS, you can tell serverless to create Lambda functions that can be routed to HTTP endpoints. Let’s take a quick look at how to define an HTTP endpoint via the serverless.yml file.

Without functions, you do not actually have an application. Remember the programming language we specified in the provider? That tells our provider which runtime to create our functions with. (Note: the available runtimes are determined by the provider; you cannot pick just any language.)

As an example, let’s create an HTTP endpoint that returns the md5 hash of an email address, along with the Gravatar URL for that email.

To define a function we need two things:

  1. A file containing a function called getHash; we'll call the file handler.py
  2. A serverless.yml function definition

Example of the serverless.yml definition:

functions:
  myCustomFunctionName:
    handler: handler.getHash

On the first line, we have functions. This is a top-level section of the YAML file, a sibling of the provider section. In the functions section, we have myCustomFunctionName. Serverless uses this name (combined with the service name and stage) for the function it creates in AWS Lambda; it does not have to match any name in your code.

Next, we define the handler. The handler is the path to the function using dot notation. This dot notation is similar to how imports work in Python: handler is the file name (without the .py extension) and getHash is the function name in that file.
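The runtime resolves that dot notation roughly like the following Python sketch. This is an illustration of splitting "module.function", not the actual implementation, and it demonstrates with a standard-library function rather than handler.py so it runs anywhere:

```python
import importlib

def resolve(handler_path):
    # "handler.getHash" -> module "handler", attribute "getHash"
    module_name, func_name = handler_path.rsplit(".", 1)
    module = importlib.import_module(module_name)
    return getattr(module, func_name)

# Demonstrate with "json.dumps" instead of "handler.getHash"
dumps = resolve("json.dumps")
print(dumps({"ok": True}))  # → {"ok": true}
```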

Here is our handler.py:

import hashlib
import json

CORS = {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Credentials': True,
}

def getHash(event, context):
    # The path parameter key matches the {email} placeholder in the route
    email = event['pathParameters']['email']
    h = hashlib.md5(email.encode('utf8')).hexdigest()
    return {
        "statusCode": 200,
        "headers": CORS,
        "body": json.dumps({
            "hash": h,
            "avatar": "http://gravatar.com/avatar/{}".format(h)
        })
    }

With the file defined and the function named getHash inside, our serverless.yml is ready to create our function on AWS Lambda. Before we deploy this, we need to enable access to this function via HTTP. To make this an HTTP endpoint, let’s add an events section.

Example functions with events:

functions:
  myCustomFunctionName:
    handler: handler.getHash
    events:
      - http:
          path: md5/{email}
          method: get
          cors: true

The events section ties our function to the API Gateway provided by AWS. In this section, we specify the path and the method of the HTTP endpoint. With this event configuration, you will be able to access your Lambda function via the path /md5/some@email.com. The response is a JSON object with the md5 hash and a URL to the Gravatar image.
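You can also sanity-check this kind of handler locally before deploying by faking the slice of the API Gateway event that the function reads. This is a sketch: the inline function is a minimal stand-in for the handler, inlined so the snippet runs on its own:

```python
import hashlib
import json

def get_hash(event, context):
    # Stand-in for the Lambda handler: hash the email path parameter
    email = event['pathParameters']['email']
    h = hashlib.md5(email.encode('utf8')).hexdigest()
    return {
        "statusCode": 200,
        "body": json.dumps({
            "hash": h,
            "avatar": "http://gravatar.com/avatar/{}".format(h)
        })
    }

# Fake only the part of the API Gateway proxy event the function reads
event = {"pathParameters": {"email": "some@email.com"}}
response = get_hash(event, None)
body = json.loads(response["body"])
print(response["statusCode"], body["avatar"])
```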

Example output with my email:

{
  "hash": "97fbef641c6e4669b6a1ad4ffb3342f5",
  "avatar": "http://gravatar.com/avatar/97fbef641c6e4669b6a1ad4ffb3342f5"
}

You can go even further with this by requiring authentication and rate-limiting this endpoint. See the Serverless Framework docs for more.

Now that we have our events, we can send GET requests to this function from a frontend application, and it will run our serverless function.

Conclusion

This post only scratches the surface of the serverless.yml file. I hope you now understand a little more about serverless and are encouraged to dive deeper into this framework and technology. Serverless Framework, along with the cloud platforms available, empowers people to create scalable products at a low cost. Building the infrastructure for an application is easier than ever before and can be done with a simple configuration file like serverless.yml.

Thanks for reading!
