Rajdip Bhattacharya
Building Serverless Python Apps with AWS Lambda and Docker

Greetings geeks!

Welcome to another blog on how to get your stuff done (the easy way)! In this blog, we will be taking a look at how to deploy dockerized Python code on AWS Lambda. Not only that, we will be setting up our AWS environment using Terraform to reduce the chances of blowing up our servers without our knowledge. So, let's get started!

Why Lambda?

AWS Lambda

This question is not uncommon, and if you have asked it yourself, you have come to the right place! When we use Lambda functions in place of traditional EC2 servers, we take a big leap into the world of serverless computing. This model lets us focus only on the business logic of our product, while the cloud provider does the heavy lifting of setting up the environment for us.

A use case of Lambda

AWS Lambda has a special use case in the industry. Consider this simple example:

Lambda use case 1

This diagram demonstrates a caption-generation API that does the following (the abstraction of intricate details is deliberate):

  1. Take in a video
  2. Store the video
  3. Generate its captions
  4. Store the caption along with the video

There is one design flaw here. The Upload API might require authentication and authorization to control who can upload videos, but caption generation clearly doesn't require any authentication, i.e., it can operate autonomously. This is where Lambda can help us. This flow also piles extra workload onto the Upload API. The next diagram shows the optimized design:

Lambda use case 2

As you can see, we have separated caption generation into a dedicated component called the Caption Generator. It does the following:

  1. Retrieve the video from storage (eg- S3)
  2. Process the video and generate captions
  3. Return the captions.

Through this design, we achieved the following:

  • Implemented separation of concern and single responsibility principle.
  • Reduced workload on Upload API.

Now, you might be wondering how you actually host this Caption Generator. Well, you have your answer: AWS Lambda! Lambda functions are meant for exactly such use cases, where you can isolate autonomous functionality into separate services and host them independently. The Upload API can then invoke the Caption Generator whenever it needs to.
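For instance, the Upload API could fire the Caption Generator asynchronously with boto3. Here is a minimal sketch (the `captionGenerator` function name and the `video_key` payload field are assumptions for illustration):

```python
import json


def build_caption_request(video_key: str) -> bytes:
    """Serialize the event payload the Caption Generator expects."""
    return json.dumps({"video_key": video_key}).encode("utf-8")


def request_captions(video_key: str) -> None:
    import boto3  # AWS SDK for Python; needs AWS credentials configured

    client = boto3.client("lambda")
    # InvocationType="Event" makes the call asynchronous: the Upload API
    # returns immediately while Lambda runs the Caption Generator.
    client.invoke(
        FunctionName="captionGenerator",  # hypothetical function name
        InvocationType="Event",
        Payload=build_caption_request(video_key),
    )
```

The asynchronous invocation type is what keeps the Upload API lightweight: it hands the work off and moves on.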

Our roadmap

Implementing the above service would take quite a bit of coding. To keep this tutorial simple, we will implement a Hello World function for starters.

  • Create Python code for the Hello World function.
  • Dockerize the code.
  • Initialize Terraform.
  • Create the ECR repository.
  • Deploy our code.
  • Create the Lambda function.
  • Test our function.
  • Clean up.

You can find the code for this blog here.

Did you say Python?

Yes, Python! Perhaps one of the easiest languages for getting started. Lambda supports a lot of runtimes. Do check them out if you are interested. Here is a glimpse:

Lambda runtimes

First, we need to create a file named lambda_function.py. The naming convention is necessary since we will be using a base image that utilizes this name.

A short note on containerized builds

Lambda supports three sources of code:

  • Single file code: used for smaller use cases such as testing and demonstration; not often recommended for production environments.
  • Zipped code from S3: generally used with pipelines, where the artifacts from the build stage are uploaded to an S3 bucket and the Lambda function uses this archive.
  • Containerized code from AWS ECR: perhaps the most flexible option, this gives you custom runtimes with the help of Docker. The code is built into a Docker image and pushed to ECR, which Lambda then pulls from.

The base image that we will be using requires the entry file to be named lambda_function.py and the entry function to be named handler. Hence the names.

# lambda_function.py

def handler(event, context):
    # "event" carries the invocation payload; "context" holds runtime metadata
    name = event["name"]

    return {"statusCode": 200, "message": f"Hello {name}!"}

The code is pretty simple: it extracts a parameter called name from the Lambda event and returns a JSON response. We will come back to the event format at the end.
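Since the handler is a plain Python function, you can smoke-test it locally before ever building an image. A quick sketch (the handler is re-declared here so the snippet is self-contained; Lambda passes a real context object, but `None` suffices for a local call):

```python
# Re-declaration of the handler from lambda_function.py
def handler(event, context):
    name = event["name"]
    return {"statusCode": 200, "message": f"Hello {name}!"}


# Lambda passes a dict-like event and a context object; for a local
# smoke test a plain dict and None are enough.
response = handler({"name": "John Doe"}, None)
print(response)  # {'statusCode': 200, 'message': 'Hello John Doe!'}
```

Note that the function raises a KeyError if the event has no "name" key, so make sure your test payloads include it.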

Docker up!

That's step one done. Next, we move towards dockerizing our code.

Create a file named Dockerfile in the same directory with the following contents:

# Dockerfile

FROM public.ecr.aws/lambda/python:3.10

# Copy function code
COPY lambda_function.py ${LAMBDA_TASK_ROOT}

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "lambda_function.handler" ]
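Before pushing anywhere, you can test the image locally: the AWS Lambda base images ship with the Runtime Interface Emulator, so after `docker build -t greetings .` you can run `docker run -p 9000:8080 greetings` and POST invocations to it. A small Python client, assuming that 9000:8080 port mapping:

```python
import json
import urllib.request


def invocation_url(port: int = 9000) -> str:
    # The Runtime Interface Emulator inside the base image always serves
    # this fixed path; the port comes from your `docker run -p` mapping.
    return f"http://localhost:{port}/2015-03-31/functions/function/invocations"


def invoke_local(event: dict, port: int = 9000) -> dict:
    req = urllib.request.Request(
        invocation_url(port),
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

With the container running, `invoke_local({"name": "John Doe"})` should return the same JSON the deployed function will.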

Initialize Terraform

For starters, Terraform is an open-source tool developed and maintained by HashiCorp that lets us build our cloud infrastructure from code. This approach is popularly known as Infrastructure as Code (IaC). If you would like to read more, I suggest you go through their docs (and perhaps some YouTube?).

Before proceeding, make sure you have these prerequisites checked:

  • Terraform installed on your machine
  • An AWS account with an access key/secret key pair
  • Docker installed and running

Now, we are ready to get started with Terraform.

  • First, create a folder named terraform in the root folder of your project. This isn't a rule, but will allow you to properly organize your code files.

  • Next, create these files in the folder:

# terraform.tf 

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region     = var.region
  access_key = var.access_key
  secret_key = var.secret_key
}

This file holds the top-level configuration for our project. Since we will be using AWS, we mention aws in the required_providers block. Next, we set up the aws provider, where we specify our AWS settings.

# variables.tf

variable "region" {
  default = "your_aws_region"
}

variable "access_key" {
  default = "your_aws_access_id"
}

variable "secret_key" {
  default = "your_aws_secret_key"
}

This file contains the AWS option values. It is good practice to extract all sensitive values out of your Terraform code into variables and inject the real values at runtime (for example via -var flags or TF_VAR_* environment variables) rather than committing credentials like these defaults to source control.

  • Once done, run the following:
terraform init

This will initialize your project and download the required provider plugins.

Create ECR

With the previous step done, we are ready to move on to the actual stuff. Now we will create our repository (AWS ECR). This will host the custom Python image that we will build shortly. Add this file inside the terraform folder that you created just a while back.

# ecr.tf

resource "aws_ecr_repository" "greetings_repository" {
  name                 = "greetings-repository"
  force_delete         = true
  image_tag_mutability = "MUTABLE"
}

Note that the name of our repository is greetings-repository. You can change it if you want to.

After that, we will want to spin up our repository:

terraform apply

create ecr repository

This command will create your ECR. You can check it from your AWS Console by going into Search > ECR > Repositories > greetings-repository

ECR

Deploying our code

Now that we have our repository set up, we are ready to build and push our code. To do this, click on the View push commands button on the top right of the repository page.
ECR Push commands

Go to the root directory of your project and run the commands one by one.

docker push

After running the commands, your output should look something like this. To confirm the push succeeded, go into your repository in ECR and verify that the image is there.
docker image

Create the Lambda function

Before we deploy our Lambda function, we need to create a Lambda execution role for it. This role is what the Lambda service assumes to run the function on our behalf.

# role.tf

data "aws_iam_policy_document" "assume_role" {
  statement {
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }

    actions = ["sts:AssumeRole"]
  }
}

resource "aws_iam_role" "iam_for_lambda" {
  name               = "iam_for_lambda"
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
}

Here, we created a role called iam_for_lambda that we will attach to our lambda function.

Let's buckle up and get that function deployed.

# lambda.tf

resource "aws_lambda_function" "greetings_function" {
  function_name = "greetingsFunction"
  role          = aws_iam_role.iam_for_lambda.arn
  package_type  = "Image"
  timeout       = 10
  image_uri     = "${aws_ecr_repository.greetings_repository.repository_url}:latest"
}

We did quite a few things there, namely:

  • Created a Lambda function named greetingsFunction.
  • Attached the iam_for_lambda role that we just created.
  • Specified that our Lambda function will use an image as its source.
  • Specified the greetings-repository URL, tagged latest.
  • Specified a 10-second timeout for the function.

With that, we come to the finale. All we need to do now is, run:

terraform apply

Create lambda function

After this, we should see our lambda function up and running! You can check it from your AWS Console via Search > Lambda > greetingsFunction

Lambda function

Testing the lambda function

In case you forgot, our Lambda function extracts a key called name from the event and returns a JSON response of the form:

{
    "statusCode": 200,
    "message": "Hello <name>!"
}

To test the function, we will be using the console. Head over to your lambda function and go into the Test tab. Scroll down to find a text editor. Paste the following content in there:

{
    "name": "John Doe"
}
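If you prefer code to the console, the same test event can also be sent with boto3, assuming your AWS credentials are configured locally:

```python
import json


def build_test_event(name: str) -> bytes:
    """Serialize the same test payload we pasted into the console."""
    return json.dumps({"name": name}).encode("utf-8")


def invoke_greetings(name: str) -> dict:
    import boto3  # requires AWS credentials configured locally

    client = boto3.client("lambda")
    resp = client.invoke(
        FunctionName="greetingsFunction",
        InvocationType="RequestResponse",  # synchronous: wait for the result
        Payload=build_test_event(name),
    )
    return json.loads(resp["Payload"].read())
```

Calling `invoke_greetings("John Doe")` should return the same JSON body shown above.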

Lambda test

Once satisfied, click on the Test button. You should see the following content if your execution was successful:

Lambda test output

Congrats! You have successfully created a serverless, containerized Python application! Go on, give yourself a pat on the back!

Cleaning up

It's always a good idea to dispose of your resources when you no longer need them. Terraform gives us the ability to delete everything with just one command. So, when you feel you are done playing with the setup, run:

terraform destroy

Conclusion

In this blog, we took a deep dive into how you can build your own container, deploy code to Lambda, and create a fully managed AWS environment. In the next part, we will look at setting up a CI/CD pipeline using GitHub Actions that will allow us to integrate changes into the Lambda function with ease. Till then, happy hacking!
