Falcon
App Django on serverless using Cloud Run - Part 2

In the first post we created the backing services; now we'll make some changes to the template project to suit them. This includes using django-environ to read your configuration settings from environment variables, which we'll seed with the values you defined as secrets. To do this, we'll extend the template settings.
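For reference, the application_settings secret you created in the first post is just the payload of a .env file. With placeholder values (these are illustrative, not the actual values you generated), it looks something like this:

```
DEBUG=True
SECRET_KEY=<a long random string>
DATABASE_URL=postgres://django:<password>@//cloudsql/<project>:<region>:myinstance/mydatabase
GS_BUCKET_NAME=<project>-media
```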

Configure settings

Find the generated settings.py file, and rename it to basesettings.py:

mv myproject/settings.py myproject/basesettings.py

Next, use the Cloud Shell web editor to open the file and replace the entire file's contents with the following:

touch myproject/settings.py
cloudshell edit myproject/settings.py
myproject/settings.py
# Import the original settings from each template
from .basesettings import *

try:
    from .local import *
except ImportError:
    pass

# Pulling django-environ settings file, stored in Secret Manager
import environ
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
env_file = os.path.join(BASE_DIR, ".env")

SETTINGS_NAME = "application_settings"

if not os.path.isfile(env_file):
    import google.auth
    from google.cloud import secretmanager_v1beta1 as sm

    _, project = google.auth.default()

    if project:
        client = sm.SecretManagerServiceClient()
        path = client.secret_version_path(project, SETTINGS_NAME, "latest")
        payload = client.access_secret_version(path).payload.data.decode("UTF-8")

        with open(env_file, "w") as f:
            f.write(payload)

env = environ.Env()
env.read_env(env_file)

# Setting this value from django-environ
SECRET_KEY = env("SECRET_KEY")

# Could be more explicitly set (see "Improvements")
ALLOWED_HOSTS = ["*"]

# Default false. True allows default landing pages to be visible
DEBUG = env("DEBUG")

# Setting this value from django-environ
DATABASES = {"default": env.db()}

INSTALLED_APPS += ["storages"] # for django-storages
if "myproject" not in INSTALLED_APPS:
    INSTALLED_APPS += ["myproject"] # for custom data migration

# Define static storage via django-storages[google]
GS_BUCKET_NAME = env("GS_BUCKET_NAME")
STATICFILES_DIRS = []
DEFAULT_FILE_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
STATICFILES_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
GS_DEFAULT_ACL = "publicRead"

Take the time to note the comments explaining each configuration value.
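One setting worth understanding is `DATABASES = {"default": env.db()}`: django-environ expands the single DATABASE_URL variable into Django's DATABASES dictionary. As a rough, stdlib-only illustration of that parsing (not django-environ's actual implementation, which supports many engines, query-string options, and Cloud SQL unix-socket hosts):

```python
from urllib.parse import urlparse

def parse_database_url(url):
    """Sketch of how a DATABASE_URL is expanded into Django's
    DATABASES format. Covers only the simple postgres case."""
    parsed = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parsed.path.lstrip("/"),
        "USER": parsed.username,
        "PASSWORD": parsed.password,
        "HOST": parsed.hostname,
        "PORT": parsed.port,
    }

config = parse_database_url("postgres://django:secret@localhost:5432/mydb")
print(config["NAME"], config["USER"], config["PORT"])  # → mydb django 5432
```

This is why a single secret value is enough to configure the whole database connection.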

Python dependencies

Finally, add the following packages to the existing requirements.txt:

cloudshell edit requirements.txt
requirements.txt
gunicorn==20.0.4
psycopg2-binary==2.8.5
google-cloud-secret-manager==1.0.0
google-auth==1.20.1
django-storages[google]==1.9.1
django-environ==0.4.5

Containerize your app and upload it to Container Registry

Container Registry is a private container image registry that runs on Google Cloud. You'll use it to store your containerized project.

To containerize the template project, create a new file named Dockerfile in the top level of your project (in the same directory as manage.py), and add the following content:

touch Dockerfile
cloudshell edit Dockerfile

Dockerfile

# Use an official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.8-slim

ENV APP_HOME /app
WORKDIR $APP_HOME

# Install dependencies.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy local code to the container image.
COPY . .

# Service must listen to $PORT environment variable.
# This default value facilitates local development.
ENV PORT 8080

# Setting this ensures print statements and log messages
# promptly appear in Cloud Logging.
ENV PYTHONUNBUFFERED TRUE

# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
CMD exec gunicorn --bind 0.0.0.0:$PORT --workers 1 --threads 8 --timeout 0 myproject.wsgi:application

You can define any container image conforming to the Cloud Run Container Contract.

Now, build your container image using Cloud Build, by running the following command from the directory containing the Dockerfile:

gcloud builds submit --tag gcr.io/$PROJECT_ID/django-cloudrun

Cloud Build is a service that executes your builds on Google Cloud. When you specify a tag, it executes a series of build steps, each run in a Docker container, to produce your application container (or other artifacts) and push it to Container Registry, all in one command.

Once pushed to the registry, you'll see a SUCCESS message containing the image name. The image is stored in Container Registry and can be re-used if desired.

You can list all the container images associated with your current project using this command:

gcloud container images list

Run the migration steps

To create the database schema in your Cloud SQL database and populate your Cloud Storage bucket with your static assets, you need to run migrate and collectstatic.

These base Django migration commands need to be run within the context of your built container with access to your database.

You will also need to create a superuser account so you can log into the Django admin.

Allow access to components

For this step, we're going to use Cloud Build to run Django commands, so Cloud Build will need access to the Django configuration stored in Secret Manager.

As before, set the IAM policy to explicitly allow the Cloud Build service account to access the secret settings:

export PROJECTNUM=$(gcloud projects describe ${PROJECT_ID} --format 'value(projectNumber)')
export CLOUDBUILD=${PROJECTNUM}@cloudbuild.gserviceaccount.com

gcloud secrets add-iam-policy-binding application_settings \
  --member serviceAccount:${CLOUDBUILD} --role roles/secretmanager.secretAccessor

Additionally, allow Cloud Build to connect to Cloud SQL in order to apply the database migrations:

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member serviceAccount:${CLOUDBUILD} --role roles/cloudsql.client

Create your Django superuser

To create the superuser, you're going to use a data migration. This migration needs to be created in the migrations folder under myproject.

Firstly, create the base folder structure:

mkdir myproject/migrations
touch myproject/migrations/__init__.py

Then, create the new migration, copying the following contents:

touch myproject/migrations/0001_createsuperuser.py
cloudshell edit myproject/migrations/0001_createsuperuser.py
myproject/migrations/0001_createsuperuser.py
from django.db import migrations

import google.auth
from google.cloud import secretmanager_v1beta1 as sm


def createsuperuser(apps, schema_editor):

    # Retrieve secret from Secret Manager 
    _, project = google.auth.default()
    client = sm.SecretManagerServiceClient()
    path = client.secret_version_path(project, "admin_password", "latest")
    admin_password = client.access_secret_version(path).payload.data.decode("UTF-8")

    # Create a new user using acquired password
    from django.contrib.auth.models import User
    User.objects.create_superuser("admin", password=admin_password)


class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.RunPython(createsuperuser)
    ]

Now, back in the terminal, create the admin_password secret in Secret Manager, and allow only Cloud Build to read it:

gcloud secrets create admin_password --replication-policy automatic

admin_password="$(cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 30 | head -n 1)"

echo -n "${admin_password}" | gcloud secrets versions add admin_password --data-file=-

gcloud secrets add-iam-policy-binding admin_password \
  --member serviceAccount:${CLOUDBUILD} --role roles/secretmanager.secretAccessor
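Incidentally, the tr/fold pipeline above just emits a 30-character alphanumeric string. If you prefer, the same password can be generated with Python's stdlib secrets module (an alternative sketch, not part of the tutorial's commands):

```python
import secrets
import string

def generate_password(length=30):
    """Generate a random alphanumeric password, equivalent to:
    cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 30 | head -n 1"""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

The secrets module uses a cryptographically secure random source, which is what you want for credentials.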

Create the migration configuration

Next, create the following Cloud Build configuration file:

touch cloudmigrate.yaml
cloudshell edit cloudmigrate.yaml
cloudmigrate.yaml
steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "gcr.io/${PROJECT_ID}/django-cloudrun", "."]

- name: "gcr.io/cloud-builders/docker"
  args: ["push", "gcr.io/${PROJECT_ID}/django-cloudrun"]

- name: "gcr.io/google-appengine/exec-wrapper"
  args: ["-i", "gcr.io/$PROJECT_ID/django-cloudrun",
         "-s", "${PROJECT_ID}:${_REGION}:myinstance",
         "--", "python", "manage.py", "migrate"]

- name: "gcr.io/google-appengine/exec-wrapper"
  args: ["-i", "gcr.io/$PROJECT_ID/django-cloudrun",
         "-s", "${PROJECT_ID}:${_REGION}:myinstance",
         "--", "python", "manage.py", "collectstatic", "--no-input"]

Run the migration

Finally, run all the initial migrations through Cloud Build:

gcloud builds submit --config cloudmigrate.yaml \
    --substitutions _REGION=$REGION

Deploy to Cloud Run

With the backing services created and populated, you can now create the Cloud Run service to access them.

The initial deployment of your containerized application to Cloud Run is created using the following command:

gcloud run deploy django-cloudrun --platform managed --region $REGION \
  --image gcr.io/$PROJECT_ID/django-cloudrun \
  --add-cloudsql-instances ${PROJECT_ID}:${REGION}:myinstance \
  --allow-unauthenticated

Wait a few moments until the deployment is complete. On success, the command line displays the service URL:

Service [django-cloudrun] revision [django-cloudrun-...] has been deployed
and is serving traffic at https://django-cloudrun-...-uc.a.run.app

You can also retrieve the service URL with this command:

gcloud run services describe django-cloudrun \
  --platform managed \
  --region $REGION  \
  --format "value(status.url)"

You can now visit your deployed container by opening this URL in a web browser.

You can also log into the Django admin interface (add /admin to the URL) with the username "admin" and the admin password, which you can retrieve using the following command:

gcloud secrets versions access latest --secret admin_password && echo ""


Conclusions

We have just deployed a complex project to Cloud Run!

  • Cloud Run automatically and horizontally scales your container image to handle the received requests, then scales down when demand decreases. You only pay for the CPU, memory, and networking consumed during request handling.
  • Cloud SQL allows you to provision a managed PostgreSQL instance that is maintained automatically for you, and integrates natively into many Google Cloud systems.
  • Cloud Storage provides object storage that Django can use seamlessly through django-storages.
  • Secret Manager allows you to store secrets and make them accessible to some parts of Google Cloud and not others.

Ah, don't forget to clean up all the resources in your GCP project so you don't get charged $$$$.

Regards.
