Timilehin Aliyu
Handling SQLite DB in Lambda Functions Using Zappa

SQLite is a straightforward choice for quick services, but once scalability and data integrity become concerns, an RDBMS is usually the better fit. While Relational Database Management Systems (RDBMS) like PostgreSQL or MySQL are the go-to choices for production-grade applications, there are scenarios where SQLite is a viable and convenient option, especially during development or for small-scale projects. Moreover, when deploying a Django application to a Lambda function, it gets tedious to manage secret keys and credentials just to test a simple app. You'd agree that working with SQLite would be a better option.

Lambda functions are designed to be stateless and ephemeral, making it challenging to use SQLite out of the box. Basic usage may work, but along the line you'll likely run into issues such as deterministic=True, which requires a specific version of SQLite. However, with a few additional steps, you can seamlessly integrate SQLite with your Django project in Lambda, allowing you to focus on developing and testing your application without the overhead of setting up a full-fledged database.

When deploying Django applications to AWS Lambda, the ephemeral nature of the Lambda environment can pose challenges for maintaining persistent data storage. While it's possible to use RDBMS solutions with Lambda, doing so often requires additional configuration and setup, including managing database credentials, connection strings, and potentially provisioning external database instances. By using SQLite, you can simplify the deployment process and avoid the complexities associated with traditional RDBMS setups.

Issues I came across while trying to use SQLite in Lambda

Data Loss: Anything written to the local filesystem, including an SQLite database file, is lost once the function execution ends.

Version Requirements: Some advanced features of SQLite, such as deterministic=True in functions, require specific versions of SQLite which may not be available in the Lambda environment.
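As a quick illustration of the version issue, the snippet below (runnable locally) checks which SQLite version the runtime bundles and tries to register a deterministic user-defined function; on runtimes whose SQLite predates 3.8.3, this raises sqlite3.NotSupportedError:

```python
import sqlite3

# deterministic=True requires SQLite 3.8.3+ (and Python 3.8+
# to accept the keyword argument at all).
print("SQLite version:", sqlite3.sqlite_version)

conn = sqlite3.connect(":memory:")
try:
    # Register a deterministic user-defined function
    conn.create_function("double", 1, lambda x: x * 2, deterministic=True)
    print("double(21) =", conn.execute("SELECT double(21)").fetchone()[0])
except sqlite3.NotSupportedError:
    print("this SQLite build is too old for deterministic=True")
finally:
    conn.close()
```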

Using SQLite with Django in Lambda

To overcome these limitations, we can use a package called django_s3_sqlite. It stores the SQLite database in an S3 bucket, enabling persistent storage beyond the ephemeral life of a Lambda function.

Install the django_s3_sqlite package:

pip install django_s3_sqlite

Update your Django settings:

In your settings.py file, update the DATABASES setting to use the django_s3_sqlite engine, and specify the S3 bucket and file name where your SQLite database will be stored:

DATABASES = {
    'default': {
        'ENGINE': 'django_s3_sqlite',
        'NAME': 'db.sqlite3',        # name of the database file in the bucket
        'BUCKET': 'your-bucket-name',
    }
}
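As a sketch, you can also pull the bucket and file names from environment variables so the same settings module works across stages (the variable names below are hypothetical; Zappa can set them via its environment_variables setting):

```python
import os

# Hypothetical environment variable names; set them in
# zappa_settings.json ("environment_variables") or the Lambda console.
DATABASES = {
    'default': {
        'ENGINE': 'django_s3_sqlite',
        'NAME': os.environ.get('SQLITE_DB_NAME', 'db.sqlite3'),
        'BUCKET': os.environ.get('SQLITE_S3_BUCKET', 'your-bucket-name'),
    }
}
```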

This configuration allows your Django application to use an SQLite database stored in an S3 bucket, ensuring persistent data storage across Lambda function invocations. Now, let's set up Zappa.

Setting Up Zappa

Zappa makes it easy to deploy Python applications, including Django projects, to AWS Lambda.

Install Zappa

pip install zappa

Create a zappa_settings.json file with the appropriate Zappa configuration for your project. Take note of the runtime version: the SQLite binary you bundle later must be compiled for the same Python version. Here's an example:

{
  "production": {
    "django_settings": "project_dir.settings",
    "aws_region": "us-east-1",
    "role_name": "zappa-control",
    "role_arn": "your_arn_with_needed_access",
    "manage_roles": false,
    "project_name": "project_name",
    "runtime": "python3.8",
    "s3_bucket": "deploy_bucket",
    "use_precompiled_packages": false
  }
}

Note: We set use_precompiled_packages to false to avoid potential compatibility issues with Lambda's environment. This way we can bundle our own SQLite binary.

Handling SQLite Binary in Lambda

Because Lambda's runtime environment may not ship the SQLite version we need, we bundle a precompiled SQLite binary (_sqlite3.so) with our function. This ensures the required SQLite version is available. Download the binary that matches your Python version from the django_s3_sqlite repository, which provides precompiled binaries per Python version, or from another trusted source.

Once you have the binary file, upload it to your S3 bucket for easy access during deployment.

aws s3 cp s3://{S3_BUCKET}/_sqlite3.so /var/task/_sqlite3.so

Note: The path /var/task/ is the root of your Lambda function's execution environment; it is read-only at runtime, so the binary must be in place at packaging time. In practice, you can simply download it into the project directory you will be pushing to Lambda:

aws s3 cp s3://{S3_BUCKET}/_sqlite3.so {LOCAL_PROJECT_ROOT}/_sqlite3.so

Note that we are using the CLI here. Running CLI commands from code inside the Lambda function can be a challenge, so you can also use boto3 and write equivalent code, though this requires AWS credentials and a bit more setup. Alternatively, you can make the AWS CLI available inside the Lambda function by packaging it in a Lambda Layer.
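For reference, here is a minimal boto3 sketch. The bucket and key names are placeholders, and the client can be injected so the function is usable without real AWS credentials in tests:

```python
def fetch_sqlite_binary(bucket, key='_sqlite3.so', dest='_sqlite3.so', s3=None):
    # Download the precompiled SQLite binary from S3.
    # The default path creates a real boto3 client, which needs AWS
    # credentials; pass a preconfigured client via `s3` to override.
    if s3 is None:
        import boto3  # imported lazily so the function is testable offline
        s3 = boto3.client('s3')
    s3.download_file(bucket, key, dest)
    return dest
```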

Also, here's a Python function using the subprocess module:

import subprocess

def download_sqlite_binary(bucket):
    # Copy the precompiled SQLite binary from S3; check=True surfaces failures
    subprocess.run(
        ['aws', 's3', 'cp', f's3://{bucket}/_sqlite3.so', '/var/task/_sqlite3.so'],
        check=True,
    )

Then, deploy your Django application to AWS Lambda using Zappa:

zappa deploy production

Voila! Once the deployment succeeds, you should see the URL for the deployed function. If you test your application, you'll notice your data persists across invocations. Nice, right?

While using SQLite with Django on AWS Lambda offers convenience and simplicity, it's important to consider the trade-offs: data integrity, scalability, and Lambda's own limitations. Still, integrating SQLite with Django on AWS Lambda using Zappa can be a convenient and efficient solution for development, testing, or small-scale projects. Keep in mind that reading the database file from S3 in your Lambda function incurs additional requests. The AWS S3 free tier currently includes 5 GB of storage, 20,000 GET requests, and 2,000 PUT requests per month. With these pricing considerations, the overhead for small projects is typically minimal compared to the benefits of simplicity and ease of setup.
