build your own FAAS provider

Safi

slsdz faas schema

What is this?

slsdz is a command-line interface (CLI) tool and serverless FAAS for creating serverless applications with random subdomains. It simplifies the process of setting up serverless functions using AWS services.

GitHub: apotox/slsdz (build your own serverless FAAS provider on AWS)

This project is meant for learning how to use Terraform and AWS to build a serverless solution. It is not the ideal way to build a FaaS, especially in terms of cost: calling one AWS Lambda function from another, often referred to as "double-calling", incurs additional execution time, network latency, and resource usage, which can lead to increased costs and slower response times.

Installation

You can try the CLI from source:

yarn install
yarn bundle
cd demo
./mycli.js --help
# Note: mycli.js is a symlink to the bundled CLI file cli.js

Options

  • --help Show help
  • --version Show version number
  • --id function id
  • --secret function secret
  • --init init new function (id and secret)
  • -f, --zip upload a function package (.zip)
  • -v, --verbose verbose level 0,1,2
  • --log function log
  • --about about

Note: The --id and --secret options are required. You can initialize a new function using the --init option. You can also pass these two values as environment variables: SFUNCTION_ID and SFUNCTION_SECRET.
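As a sketch of how the flag/env-variable precedence above could be resolved (the helper name and exact precedence are assumptions, not the actual slsdz source):

```typescript
interface Credentials {
  id: string;
  secret: string;
}

// Hypothetical helper: CLI flags win, then the SFUNCTION_ID /
// SFUNCTION_SECRET environment variables are used as a fallback.
function resolveCredentials(
  flags: { id?: string; secret?: string },
  env: Record<string, string | undefined>
): Credentials {
  const id = flags.id ?? env["SFUNCTION_ID"];
  const secret = flags.secret ?? env["SFUNCTION_SECRET"];
  if (!id || !secret) {
    throw new Error("missing --id/--secret (or SFUNCTION_ID/SFUNCTION_SECRET)");
  }
  return { id, secret };
}
```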

Examples

// simple index.js function
module.exports.handler = async (event, context) => {
    return {
        statusCode: 200,
        body: "Hello World",
    };
};
# bash
zip function.zip index.js
./mycli.js --init
# a .slsdz file will be generated which contains the function credentials
./mycli.js --zip function.zip


3:27:27 PM - API_URL:  your function URL https://ABCDEF.safidev.de

ℹ file size: 283 bytes
3:27:29 PM - UPLOAD_STATUS:  200

✔ uploading [/Users/mac/test/function.zip]


How it works

AWS services:

  • Lambda (functions)
  • ApiGateway (handles HTTP requests)
  • S3 (stores function code)
  • CloudFormation (creates user functions)
  • CloudWatch (logs)
  • EventBridge (triggers a function to create CNAME records)

External services:

  • Cloudflare

To interact with the service, developers use a CLI tool called slsdz without the need for any AWS-related authentication.

Each user can initialize a function and receive an ID and a secret. The secret acts as a signature of the ID, which is used for uploading or updating a function. This data is saved in a local file called .slsdz.
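The article doesn't spell out the signature scheme, but an HMAC over the function ID under a server-side signing key (the TF_VAR_signing_secret set at deploy time) would fit the description. A minimal sketch, with all names assumed:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical scheme: secret = HMAC-SHA256(functionId, signingSecret).
function signFunctionId(functionId: string, signingSecret: string): string {
  return createHmac("sha256", signingSecret).update(functionId).digest("hex");
}

// Constant-time comparison of a presented secret against the expected one.
function verifySecret(
  functionId: string,
  secret: string,
  signingSecret: string
): boolean {
  const expected = Buffer.from(signFunctionId(functionId, signingSecret), "hex");
  const presented = Buffer.from(secret, "hex");
  return (
    expected.length === presented.length && timingSafeEqual(expected, presented)
  );
}
```

The nice property of this design is that the backend stays stateless: it can verify any (id, secret) pair without a user database.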

The slsdz CLI communicates with the serverless backend, where functions can be created, updated, and their logs can be retrieved. This backend utilizes API Gateway with Lambda integrations to manage the interactions.


When a user uploads function code, a Lambda function called Signer comes into play. The Signer generates a signed URL (e.g. https://abc.users-functions.aws.../function-id.zip) that allows the user to upload the function code to an S3 bucket.

I intentionally included "function-id.zip" in the URL so that I can use it later when handling the S3 ObjectPut event.

The S3 bucket is configured to trigger another function called Deployer when an ObjectPut event occurs. The Deployer function reads the uploaded zip file and expects it to be named function-id.zip, where "function-id" is the unique ID of the function. This naming convention lets the Deployer Lambda function determine which function should be created or updated.
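The naming convention can be sketched as a pair of helpers. The helper names are hypothetical; only the function-id.zip convention comes from the project:

```typescript
// Signer side: embed the function id in the S3 object key.
function buildObjectKey(functionId: string): string {
  return `${functionId}.zip`;
}

// Deployer side: recover the function id from the key in the ObjectPut
// event, or null if the key does not follow the convention.
function parseFunctionId(objectKey: string): string | null {
  const match = /^([A-Za-z0-9-]+)\.zip$/.exec(objectKey);
  return match ? match[1] : null;
}
```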

If the function is new (does not exist yet), the Deployer function builds a CloudFormation template with all the required parameters to:

  • add a new API mapping
  • add a new custom domain to the API Gateway custom domains
  • create a new Lambda function with a basic IAM role
resource "aws_iam_role" "user_lambda_role" {
  name               = "basic-lambda-role-${local.stage_name}"
  assume_role_policy = <<EOF
{
   "Version": "2012-10-17",
   "Statement": [
       {
           "Action": "sts:AssumeRole",
           "Principal": {
               "Service": "lambda.amazonaws.com"
           },
           "Effect": "Allow"
       }
   ]
}
 EOF
}

The stack name contains the function-id, so we can extract it easily when handling the creation event in the cname Lambda function.

const stackName = `sls-stack-${functionId}-${Date.now()}`;
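Since the stack name follows this fixed pattern, the cname handler can recover the function ID with a simple regular expression. A sketch (the parsing helper is an assumption):

```typescript
// Build the stack name exactly as in the snippet above.
function buildStackName(functionId: string): string {
  return `sls-stack-${functionId}-${Date.now()}`;
}

// Recover the function id: the trailing "-<digits>" is the timestamp, so
// the greedy group backtracks to leave it out of the captured id.
function extractFunctionId(stackName: string): string | null {
  const match = /^sls-stack-(.+)-(\d+)$/.exec(stackName);
  return match ? match[1] : null;
}
```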

In the AWS console it looks like this:

stack creation

When the CloudFormation stack successfully creates the resources, it publishes a "Stack Creation Event" to Amazon EventBridge.

A Lambda function called cname is subscribed to the "Stack Creation Event" in EventBridge.

The cname Lambda function represents an integration with the Cloudflare API. It uses the "functionId" from the received EventBridge event to create a new CNAME record on Cloudflare.

After the Lambda function is triggered and successfully creates the CNAME record through the Cloudflare API, you will see a new record added to your Cloudflare dashboard with the same "functionId" that was used in the Lambda function.
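A sketch of what that Cloudflare call could look like. The endpoint and payload shape follow Cloudflare's public v4 DNS-records API, and the legacy email/API-key headers match the TF_VAR_cloudflare_email / TF_VAR_cloudflare_api_key variables used later in this post; the helper names (and the proxied flag) are assumptions:

```typescript
interface CnamePayload {
  type: "CNAME";
  name: string;    // e.g. "<functionId>.safidev.de"
  content: string; // the API Gateway custom domain the record points at
  proxied: boolean;
}

function buildCnamePayload(
  functionId: string,
  zoneDomain: string,
  target: string
): CnamePayload {
  return {
    type: "CNAME",
    name: `${functionId}.${zoneDomain}`,
    content: target,
    proxied: false,
  };
}

// Request descriptor for POST /zones/:zoneId/dns_records, using the legacy
// email + API-key auth headers.
function buildCnameRequest(
  zoneId: string,
  email: string,
  apiKey: string,
  payload: CnamePayload
) {
  return {
    url: `https://api.cloudflare.com/client/v4/zones/${zoneId}/dns_records`,
    method: "POST" as const,
    headers: {
      "X-Auth-Email": email,
      "X-Auth-Key": apiKey,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(payload),
  };
}
```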

cloudflare records

Logs

Users can view a function's logs through the CLI by passing the --log option. I chose AWS CloudWatch as the primary logging service.

AWS CloudWatch is a centralized monitoring and logging service provided by Amazon Web Services. It allows us to collect, store, and analyze logs from various AWS resources and applications in one place.

The logging process involves a Lambda function named logger, which provides secure access to the AWS CloudWatch logs associated with each function via its unique ID.

const logGroupName = `/aws/lambda/${buildFunctionName(id)}`;

const result = await getCwClient()
    .describeLogStreams({
        logGroupName,
        orderBy: "LastEventTime",
        descending: true,
        limit: 1,
    })
    .promise();

getCwClient() returns a CloudWatchLogs client instance.
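The buildFunctionName helper isn't shown above; the sketch below assumes a simple prefix scheme (the sls-fn- prefix is hypothetical), while the /aws/lambda/<function-name> log group convention is how Lambda always names its CloudWatch log groups:

```typescript
// Hypothetical naming scheme: the actual prefix used by the project may differ.
function buildFunctionName(id: string): string {
  return `sls-fn-${id}`;
}

// Lambda writes each function's logs to /aws/lambda/<function-name>.
function buildLogGroupName(id: string): string {
  return `/aws/lambda/${buildFunctionName(id)}`;
}
```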


Project Structure

The project structure consists of three main components: sls-lambda, sls-cli, and infra, all implemented in TypeScript, with Terraform for Infrastructure as Code (IaC).

Yarn workspaces are used to share common dependencies between the packages and to run the test and bundle commands from a single location.

Project:
-- sls-lambda
-- sls-cli
-- infra

sls-lambda:
This part of the project comprises separate folders, each representing a lambda function. Each folder contains an index.ts file that serves as the entrypoint or function handler for that specific lambda function.

sls-cli:
The sls-cli component is implemented using Yargs, a popular command-line argument parser. This module provides the Command-Line Interface (CLI) for the project, allowing users to interact with the serverless FaaS provider and deploy their functions.

infra:

infra folder

The infra directory holds the Terraform code for managing the Infrastructure as Code (IaC). Specifically, the dev subdirectory contains all the required resources for the development environment. Each lambda function has its dedicated tf file, which begins with the prefix fn-, making it easy to locate and manage individual functions' infrastructure.


Inside the infra directory you will also find data.tf, which handles zipping the function bundles into release .zip files and defines all the required policies (aws_iam_policy resources).

locals {
  functions = [
    "cname",
    "deployer",
    "signer",
    "logger",
    "sls_proxy"
  ]
}

locals {
  lambda_files = {
    for fn in local.functions : fn => {
      source_file = "${path.module}/../../dist/${fn}/bundle.js"
      output_path = "${path.module}/../../release/${fn}.zip"
    }
  }
}

data "archive_file" "zipped" {
  for_each = local.lambda_files
  depends_on = [
    null_resource.bundle
  ]
  type        = "zip"
  source_file = each.value.source_file
  output_path = each.value.output_path
}

Each policy resource definition is declared as a JSON template inside /templates. Using the templatefile function, the policy attributes are populated from the JSON content.


For example, in the fn-cname.tf file you can see the usage of the zipped data source in the aws_lambda_function resource.

resource "aws_lambda_function" "cname" {
  depends_on = [
    null_resource.bundle
  ]
  function_name = "${var.project_name}-${var.stage_name}-cname"
  filename      = data.archive_file.zipped["cname"].output_path
  # ...
}

Deploy on your own aws account

git clone https://github.com/apotox/slsdz.git
yarn install
# before deploying you need to set up some variables,
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export TF_VAR_custom_api_domain_name=api.example.com
export TF_VAR_signing_secret= # used to sign the function id
export TF_VAR_cloudflare_zone_id=
export TF_VAR_cloudflare_email=
export TF_VAR_cloudflare_api_key=
export TF_VAR_certificate_arn= #https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html

You also need to add your own domain to Cloudflare (for example "api.example.de") and create a new CNAME record pointing to your API Gateway.

cname record

# edit `infra/globals/s3/main.tf` and comment out the S3 backend. We need to do this one time because the S3 bucket doesn't exist yet; it will be created in the next step.

# terraform {
#   backend "s3" {
#     bucket         = "sls-lambda-terraform-state"
#     key            = "global/s3/terraform.tfstate"
#     region         = "us-east-1"
#     dynamodb_table = "sls-lambda-terraform-locks"
#     encrypt        = true
#   }
# }
# now run
terraform -chdir=infra/globals/s3 init
terraform -chdir=infra/globals/s3 apply --auto-approve
# edit `infra/globals/s3/main.tf` and uncomment the S3 backend section

terraform {
  backend "s3" {
    bucket         = "sls-lambda-terraform-state"
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "sls-lambda-terraform-locks"
    encrypt        = true
  }
}
# now we can enable the remote backend by running init and apply again
terraform -chdir=infra/globals/s3 init
terraform -chdir=infra/globals/s3 apply --auto-approve

in infra/dev

# infra/dev contains all files required to set up an environment. In infra/dev/main.tf the backend section has a different key = "dev/terraform.tfstate". To deploy a different env stage, for example 'prod', copy the dev folder, name it prod, and set the key to "prod/terraform.tfstate".

terraform {
  backend "s3" {
    bucket         = "sls-lambda-terraform-state"
    key            = "dev/terraform.tfstate" # this should be unique for each environment
    region         = "us-east-1"
    dynamodb_table = "sls-lambda-terraform-locks"
    encrypt        = true
  }
}

terraform -chdir=infra/dev init
terraform -chdir=infra/dev apply --auto-approve

Update CLI consts

# slsdz-cli/src/consts.ts
export const SLSDZ_CONFIG_FILENAME = ".slsdz";
export const SLSDZ_DOMAIN_NAME = "example.de";
export const SLSDZ_API_BASE_URL =
    process.env.SLSDZ_API_BASE_URL || "https://api.example.de";


Summary

slsdz means serverless, and "dz" is the country abbreviation that many Algerian developers love to add to their projects, so I did ✨

This small project is a developer-friendly, serverless FaaS provider that simplifies the deployment of functions to AWS accounts using a CLI tool and a secure signature-based authentication method.

The deployment process involves utilizing a signer function to generate a signed URL for S3 uploads, which then triggers a deployer function to read and update the respective function based on the provided ID in the zip file name.

The project still needs a lot of work (e.g. monitoring, logging) to be production-ready, but I don't think that's too hard, especially with AWS services and integrations. If you have any additional ideas or questions, just ping me.
