This article is brought to you by Appliku.com, the first deployment service dedicated specifically to Python & Django.
Never manage a server again.
Deploy your Django app in 5 minutes.
In this article:
Django Tutorial Source Code
Create the environment for the project
Requirements for Django Project
.gitignore for Django Project
Dockerfile
docker-compose.yml
Explanation about Django and Docker
Django Settings
Django Custom User Model
Procfile for Django Project
Structure of Django Project
Push your Django Application to GitHub
Deploying Django Project
Application Processes Section
Config Variables Section
Heroku Config Vars Sync
Databases
Django Tutorial Source Code
You can find the project source code here: https://github.com/appliku/django_appliku_tutorial
Create the environment for the project
Let's make a directory in our home directory to hold our virtual environments for different projects.
Then we'll create an environment in it, activate it, install Django and then create our new project.
mkdir -p ~/envs
python3 -m venv ~/envs/tutorial
source ~/envs/tutorial/bin/activate
pip install -U pip
pip install Django
I prefer to keep all my code directories in ~/src
directory. Let's make one if you don't have it and switch to it.
mkdir -p ~/src
cd ~/src/
django-admin startproject tutorial
cd tutorial
At this stage I usually open my favorite IDE: PyCharm.
open -a pycharm .
Now we need to create files in the root of our project.
requirements.txt
will hold all dependencies our project needs.
.gitignore
will tell Git which files should not be added to the repository.
Requirements for Django Project
Create and open requirements.txt
and put these lines in the file:
Django==3.1.7
Pillow==7.2.0
gunicorn==20.0.4
requests==2.25.1
django-redis==4.12.1
pytz==2021.1
psycopg2-binary==2.8.6
arrow==1.0.3
djangorestframework==3.12.2
djangorestframework-simplejwt==4.6.0
django-allauth==0.44.0
django-environ==0.4.5
django-storages==1.11.1
django-cors-headers==3.7.0
django-braces==1.14.0
django-extensions==3.1.1
django-post-office==3.5.3
django-crispy-forms==1.11.1
boto3==1.17.22
boto3-stubs==1.17.22.0
django-import-export==2.5.0
honeybadger==0.4.2
django-ses==1.0.3
djangoql==0.14.3
flake8==3.8.4
whitenoise==5.2.0
A couple of words about these requirements and versions.
These are the packages I need in almost all of my projects, so I suggest including them in the tutorial project.
You may wonder about the old Pillow version. I had issues where newer versions were incompatible with Django when uploading images, and the only solution I found was downgrading to 7.2.0.
.gitignore for Django Project
These are the most vital entries for the .gitignore
file.
env/
venv/
.idea
.env
**/__pycache__/
.DS_Store
My OS is macOS, so .DS_Store
is a file created by Finder that I don't want in the repository.
env
and venv
are typical names for virtual environments created inside the project directory. We don't have one now, but when you clone the project on another machine, or another developer joins the project, they would expect such names to be ignored.
.idea
is the directory PyCharm creates to store project-specific settings.
.env
holds local environment variables, and we should never include it in the repository.
__pycache__
holds the "compiled" bytecode versions of your .py
files. Your interpreter may create it. Having them in the repository is bad because they will cause problems when you try to run the app on even a slightly different version of Python.
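If you are curious, the interpreter exposes a flag that controls whether these bytecode caches are written at all (this is plain Python, nothing project-specific):

```python
import sys

# When True, the interpreter will not write .pyc files into
# __pycache__/ for modules imported during this run.
sys.dont_write_bytecode = True

# Imports still work as usual; only the on-disk bytecode cache is skipped.
import json
print(json.dumps({"cache_written": not sys.dont_write_bytecode}))
```

This is the same switch the `PYTHONDONTWRITEBYTECODE` environment variable flips, which you will see again in the Dockerfile below.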
Dockerfile
We will run our project with Docker.
Why?
Because you really want a reproducible environment and to avoid messing up your host machine.
In order to do that, let's create 2 files.
First, Dockerfile
:
FROM python:3.8
ENV PIP_NO_CACHE_DIR off
ENV PIP_DISABLE_PIP_VERSION_CHECK on
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 0
ENV COLUMNS 80
RUN apt-get update \
&& apt-get install -y --force-yes \
nano python-pip gettext chrpath libssl-dev libxft-dev \
libfreetype6 libfreetype6-dev libfontconfig1 libfontconfig1-dev\
&& rm -rf /var/lib/apt/lists/*
WORKDIR /code/
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
docker-compose.yml
Second, docker-compose.yml
:
version: '3.3'
services:
redis:
image: redis
command: redis-server
ports:
- "14000:6379"
db:
image: postgres
environment:
- POSTGRES_USER=tutorial
- POSTGRES_PASSWORD=tutorial
- POSTGRES_DB=tutorial
ports:
- "127.0.0.1:21003:5432"
web:
build: .
restart: always
command: python manage.py runserver 0.0.0.0:8600
env_file:
- .env
ports:
- "127.0.0.1:8600:8600"
volumes:
- .:/code
links:
- db
- redis
depends_on:
- db
- redis
Explanation about Django and Docker
Dockerfile
defines the state of the OS and filesystem where your application will be executed. That's an oversimplified, completely non-nerdy explanation :)
From the instructions in a Dockerfile, Docker builds images.
If Dockerfile
is a set of instructions, an image is the actual archive of files that can be used to run apps in containers.
For the development environment we need the Django development server running with a certain command, plus a Postgres DB and a Redis instance.
To define this as code we will use the docker-compose.yml
file.
docker-compose.yml
defines services it will run.
A service in docker-compose.yml
is defined primarily by the image it uses, a command to execute, ports to expose, volumes to mount and environment variables to set.
Again, this is a very high-level explanation in my own words, an attempt to keep things as simple as possible.
Let's talk about what we defined in our docker-compose.yml
.
We have a postgres service. When it is first initialized it will have a user, password and a database "tutorial".
It will be available to other services on the DNS name db
on port 5432, and to the host machine on port 21003.
It uses the "postgres" image, which it will pull from Docker Hub.
There is no volume defined for the database, so if you remove the DB container you will lose the data.
Next service is for our Django development server.
Instead of an image, we specify the folder in which to build the image from the Dockerfile
, which in this case is current directory (.
).
If it fails, we want it to always restart.
We specify what command to use to run our dev server.
Environment variables will be taken from .env
file that we'll create later.
We expose port 8600 on 127.0.0.1, so it will be accessible only from the local machine. Keep in mind that if you want to change the port, you should also update it in the command
.
The volumes
section tells which directories from the host machine are mounted inside the container. We want the current directory mounted at /code
where our app is running. See WORKDIR
in our Dockerfile
.
Since this is the development environment, we want changes to our code reflected in the container, so the dev server reloads automatically.
links
section makes resolution of the DNS name db
possible inside the container. In other words, the Django dev server will be able to connect to db
. Same for redis
.
depends_on
section lists services that must be started before starting the web
service. In this case redis
and db
will be started first, then web
will be started.
Last step, let's create the .env
file in the root of the project.
DATABASE_URL=postgresql://tutorial:tutorial@db/tutorial
REDIS_URL=redis://redis/0
DJANGO_SECRET_KEY=supersecret123!
DJANGO_DEBUG=True
Here, in the form of special URLs, we pass our Django project the credentials for the database and the Redis instance, set the secret key Django should use, and enable debug mode.
None of it affects our app yet. But it will be very important in a little bit.
We need to test that our Docker image can be built and docker-compose has no errors.
For now, let's just tell it to build our image.
Run this command:
docker-compose build
On success, the last lines of the output should look roughly like this:
Removing intermediate container 757d0bd934ca
---> b4bba357f84c
Step 11/11 : COPY . /code/
---> fa5d799d8fc1
Successfully built fa5d799d8fc1
Successfully tagged tutorial_web:latest
Great job!
Time to work on settings.
Django Settings
We want our apps to be scalable, work under a lot of traffic and handle growth.
In order to do that we need to build our app so it allows scaling.
In this case it is important that our app follows the rules of The Twelve-Factor App: https://12factor.net
Let's list the key points here:
- Codebase – One codebase tracked in revision control, many deploys
- Dependencies – Explicitly declare and isolate dependencies
- Config – Store config in the environment
- Backing services – Treat backing services as attached resources
- Build, release, run – Strictly separate build and run stages
- Processes – Execute the app as one or more stateless processes
- Port binding – Export services via port binding
- Concurrency – Scale out via the process model
- Disposability – Maximize robustness with fast startup and graceful shutdown
- Dev/prod parity – Keep development, staging, and production as similar as possible
- Logs – Treat logs as event streams
- Admin processes – Run admin/management tasks as one-off processes
I strongly recommend reading all the pages on that site.
With that in mind let's open our settings file: tutorial/settings.py
We remove everything from there and start building our own from scratch.
I will explain every code block, then show the whole file so you can just copy it into your project.
First we import a couple of libraries, set our root path of the project and create an env variable.
from pathlib import Path
import environ
import os
BASE_DIR = Path(__file__).resolve(strict=True).parent.parent
env = environ.Env()
BASE_DIR
is used to build proper paths to places in our project.
env
will help us properly get configuration from environment variables.
DEBUG = env.bool("DJANGO_DEBUG", False)
DEBUG
should be off at all times except in the local development environment.
# Allowed Hosts Definition
if DEBUG:
# If Debug is True, allow all.
ALLOWED_HOSTS = ['*']
else:
ALLOWED_HOSTS = env.list('DJANGO_ALLOWED_HOSTS', default=['example.com'])
If DEBUG
is True, we allow all hosts when opening the app.
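To see what env.list does under the hood, here is a minimal stdlib-only sketch (the helper name env_list is mine, not django-environ's API):

```python
import os

def env_list(name, default):
    """Mimic the core of django-environ's env.list(): split a
    comma-separated environment variable into a list, falling
    back to a default when the variable is unset."""
    raw = os.environ.get(name)
    return raw.split(",") if raw else default

os.environ["DJANGO_ALLOWED_HOSTS"] = "example.com,www.example.com"
print(env_list("DJANGO_ALLOWED_HOSTS", default=["example.com"]))
# → ['example.com', 'www.example.com']
```

This is why you can later set DJANGO_ALLOWED_HOSTS=example.com,www.example.com as a single config variable in production.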
SECRET_KEY = env('DJANGO_SECRET_KEY')
The secret key is used for cryptographic signing of cookies, password reset tokens, etc. You must keep it safe and out of version control.
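Django ships a helper for generating one (django.core.management.utils.get_random_secret_key); with just the standard library you can do something equivalent:

```python
import secrets

# 50 random bytes, base64url-encoded — comfortably more entropy
# than Django's default 50-character secret key.
secret_key = secrets.token_urlsafe(50)
print(secret_key)
```

Put the generated value into your .env as DJANGO_SECRET_KEY, and generate a different one for each environment.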
"""
Project Apps Definitions
Django Apps - Django Internal Apps
Third Party Apps - Apps installed via requirements.txt
Project Apps - Project owned / created apps
Installed Apps = Django Apps + Third Party Apps + Project Apps
"""
DJANGO_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.sites',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.redirects',
]
THIRD_PARTY_APPS = [
'import_export',
'django_extensions',
'rest_framework',
'storages',
'corsheaders',
'djangoql',
'post_office',
'allauth',
'allauth.account',
'allauth.socialaccount',
'allauth.socialaccount.providers.google',
'crispy_forms',
]
PROJECT_APPS = [
'usermodel',
]
INSTALLED_APPS = DJANGO_APPS + THIRD_PARTY_APPS + PROJECT_APPS
It is very convenient to see where each app comes from; that's why we separate built-in Django apps, third-party apps, and the project's own apps.
MIDDLEWARE = [
'corsheaders.middleware.CorsMiddleware',
'django.middleware.security.SecurityMiddleware',
'django.contrib.redirects.middleware.RedirectFallbackMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
# Databases
DATABASES = {
"default": env.db("DATABASE_URL")
}
DATABASES["default"]["ATOMIC_REQUESTS"] = True
DATABASES["default"]["CONN_MAX_AGE"] = env.int("CONN_MAX_AGE", default=60)
Our app will receive DATABASE_URL from the environment in the form of a URL like postgres://username:password@database-host.com:1234/databasename
.
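To make the URL form concrete, here is a stdlib-only sketch of the pieces env.db() extracts (the helper name parse_database_url is mine; django-environ also selects the right ENGINE, which this sketch skips):

```python
from urllib.parse import urlparse

def parse_database_url(url):
    """Illustrate how a 12-factor DATABASE_URL decomposes into the
    parts that end up in Django's DATABASES setting."""
    parts = urlparse(url)
    return {
        "NAME": parts.path.lstrip("/"),
        "USER": parts.username,
        "PASSWORD": parts.password,
        "HOST": parts.hostname,
        "PORT": parts.port or 5432,  # Postgres default when omitted
    }

config = parse_database_url("postgres://tutorial:tutorial@db/tutorial")
print(config["HOST"], config["NAME"])
# → db tutorial
```

Note how the URL from our .env file maps exactly onto the credentials we gave the db service in docker-compose.yml.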
ROOT_URLCONF = 'tutorial.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [BASE_DIR / 'templates'],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'tutorial.wsgi.application'
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
AUTHENTICATION_BACKENDS = [
'django.contrib.auth.backends.ModelBackend',
]
# User Model Definition
AUTH_USER_MODEL = 'usermodel.User'
For every new Django project, don't forget to create a custom user model; otherwise it will be effectively impossible to change later. We will discuss this in a bit.
TIME_ZONE = 'UTC'
LANGUAGE_CODE = 'en-us'
SITE_ID = 1
USE_I18N = True
USE_L10N = True
USE_TZ = True
These settings are pretty static, but if you want to learn more I recommend reading the official docs: https://docs.djangoproject.com/en/3.1/ref/settings/
# Admin URL Definition
ADMIN_URL = env('DJANGO_ADMIN_URL', default='admin/')
Never keep the admin on the default URL. With the DJANGO_ADMIN_URL
env variable you can set it differently for every environment: production, staging, while leaving the default for local development.
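The setting only takes effect once it is used in the URL configuration. We don't edit tutorial/urls.py in this article, so the following is just a sketch of the wiring:

```python
# tutorial/urls.py (sketch) — mount the admin at the env-driven path
from django.conf import settings
from django.contrib import admin
from django.urls import path

urlpatterns = [
    # ADMIN_URL defaults to 'admin/' but can be overridden per environment
    path(settings.ADMIN_URL, admin.site.urls),
]
```

With this in place, setting DJANGO_ADMIN_URL=secret-admin-xyz/ in production moves the admin off the predictable path.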
# Redis Settings
REDIS_URL = env('REDIS_URL', default=None)
if REDIS_URL:
CACHES = {
"default": env.cache('REDIS_URL')
}
Redis is primarily used for caching and temporary data storage.
In this case we make it optional, and if REDIS_URL is defined then we enable the default cache.
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
This is how we tell Django to detect that the app is working behind an SSL proxy.
In our nginx server definition we must set X-Forwarded-Proto
header.
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'console': {
'class': 'logging.StreamHandler'
},
},
'loggers': {
'': { # 'catch all' loggers by referencing it with the empty string
'handlers': ['console'],
'level': 'DEBUG',
},
},
}
That's a very simple logging setup that outputs everything to the console, from all modules, at the DEBUG level, which means log everything it can.
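Since this is the standard library dictConfig format, you can see it in action outside Django too. A minimal self-contained run of the same config:

```python
import logging
import logging.config

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        # The empty string is the root logger — a catch-all.
        '': {'handlers': ['console'], 'level': 'DEBUG'},
    },
}

logging.config.dictConfig(LOGGING)

# Any module-level logger now propagates to the root console handler.
logger = logging.getLogger("tutorial.views")
logger.debug("debug messages reach the console too")
```

Because containers are meant to treat logs as event streams (factor XI), writing everything to stdout is exactly what we want under Docker.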
# Static And Media Settings
AWS_STORAGE_BUCKET_NAME = env('AWS_STORAGE_BUCKET_NAME', default=None)
if AWS_STORAGE_BUCKET_NAME:
AWS_DEFAULT_ACL = None
AWS_QUERYSTRING_AUTH = False
AWS_S3_CUSTOM_DOMAIN = env('AWS_S3_CUSTOM_DOMAIN', default=None) or f'{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com'
AWS_S3_OBJECT_PARAMETERS = {'CacheControl': 'max-age=600'}
# s3 static settings
STATIC_LOCATION = 'static'
STATIC_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/{STATIC_LOCATION}/'
STATICFILES_STORAGE = 'tutorial.storages.StaticStorage'
# s3 public media settings
PUBLIC_MEDIA_LOCATION = 'media'
MEDIA_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/{PUBLIC_MEDIA_LOCATION}/'
DEFAULT_FILE_STORAGE = 'tutorial.storages.PublicMediaStorage'
else:
MIDDLEWARE.insert(2, 'whitenoise.middleware.WhiteNoiseMiddleware')
STATICFILES_STORAGE = 'whitenoise.storage.CompressedStaticFilesStorage'
WHITENOISE_USE_FINDERS = True
STATIC_HOST = env('DJANGO_STATIC_HOST', default='')
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATIC_URL = STATIC_HOST + '/static/'
if DEBUG:
WHITENOISE_AUTOREFRESH = True
This is the config for static and media files. Static files are uploaded to S3 with python manage.py collectstatic --noinput, and media files are stored in S3 when uploaded by a user or by the app itself.
If the environment variable AWS_STORAGE_BUCKET_NAME
is not present, this part of the config is not enabled, which should be the case for local development.
If AWS_STORAGE_BUCKET_NAME
is not set, Django will use whitenoise
to serve static files.
DEFAULT_FROM_EMAIL = env('DEFAULT_FROM_EMAIL', default='test@example.com')
Our app will probably send emails at some point, at least password resets. We should specify the default sender. Again, as an env var.
Let's get to third party apps configuration.
First of all, we need to know about any errors that happen in production.
We will set our app to report to the error tracking service https://HoneyBadger.io.
Again, it will only be enabled if the env var HONEYBADGER_API_KEY
is set. You can get this env var from the project settings in HoneyBadger.
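For reference, here is the settings block that does this (it is included in the full settings file later in this article):

```python
# Honeybadger Settings — enabled only when the API key env var is set
HONEYBADGER_API_KEY = env('HONEYBADGER_API_KEY', default=None)
if HONEYBADGER_API_KEY:
    # Prepend the middleware so it can catch errors from everything below it
    MIDDLEWARE = ['honeybadger.contrib.DjangoHoneybadgerMiddleware'] + MIDDLEWARE
    HONEYBADGER = {
        'API_KEY': HONEYBADGER_API_KEY,
    }
```

Without the env var, the middleware is never added, so local development carries no error-reporting overhead.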
Now let's configure Celery to run our background tasks.
It is optional. If celery
is not installed, or env var CELERY_BROKER_URL
is not defined, then it is not enabled.
# Celery Settings
try:
from kombu import Queue
from celery import Celery
CELERY_BROKER_URL = env('CELERY_BROKER_URL', default='amqp://localhost')
if CELERY_BROKER_URL:
CELERYD_TASK_SOFT_TIME_LIMIT = 60
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_RESULT_BACKEND = env('REDIS_URL', default='redis://localhost:6379/0')
CELERY_DEFAULT_QUEUE = 'default'
CELERY_QUEUES = (
Queue('default'),
)
CELERY_CREATE_MISSING_QUEUES = True
except ModuleNotFoundError:
print("Celery/kombu not installed. Skipping...")
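These settings only matter once a Celery app object exists. We don't create one in this article, so the following tutorial/celery.py is a hypothetical sketch of what a typical app module would look like:

```python
# tutorial/celery.py (hypothetical sketch) — the app module workers load
import os

from celery import Celery

# Make sure Django settings are importable before Celery reads them.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'tutorial.settings')

app = Celery('tutorial')
# Read the CELERY_* settings defined above from Django's settings module.
app.config_from_object('django.conf:settings')
# Discover tasks.py modules in every installed app.
app.autodiscover_tasks()
```

A worker would then be started with something like celery -A tutorial worker, picking up the broker URL from the same environment variables.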
Now let's configure django-allauth so we have the option to register/log in via social accounts. Let's have Google as the only provider for this tutorial.
# AllAuth Settings
AUTHENTICATION_BACKENDS += [
# `allauth` specific authentication methods, such as login by e-mail
'allauth.account.auth_backends.AuthenticationBackend',
]
ACCOUNT_USER_MODEL_USERNAME_FIELD = None
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_USERNAME_REQUIRED = False
ACCOUNT_AUTHENTICATION_METHOD = 'email'
ACCOUNT_UNIQUE_EMAIL = True
ACCOUNT_EMAIL_VERIFICATION = 'mandatory'
ACCOUNT_FORMS = {'signup': 'usermodel.forms.MyCustomSignupForm'}
ACCOUNT_MAX_EMAIL_ADDRESSES = 2
SOCIALACCOUNT_PROVIDERS = {
}
SOCIALACCOUNT_PROVIDERS_GOOGLE_CLIENT_ID = env('SOCIALACCOUNT_PROVIDERS_GOOGLE_CLIENT_ID', default=None)
if SOCIALACCOUNT_PROVIDERS_GOOGLE_CLIENT_ID:
SOCIALACCOUNT_PROVIDERS['google'] = {
'SCOPE': [
'profile',
'email',
],
'AUTH_PARAMS': {
'access_type': 'online',
},
# For each OAuth based provider, either add a ``SocialApp``
# (``socialaccount`` app) containing the required client
# credentials, or list them here:
'APP': {
'client_id': SOCIALACCOUNT_PROVIDERS_GOOGLE_CLIENT_ID,
'secret': env('SOCIALACCOUNT_PROVIDERS_GOOGLE_SECRET'),
}
}
For this to work we need to obtain a CLIENT_ID and CLIENT_SECRET from the Google Cloud console.
As before, I prefer to keep it optional. So if the env var SOCIALACCOUNT_PROVIDERS_GOOGLE_CLIENT_ID
is present, we add Google to the list of providers.
# Crispy Forms Settings
CRISPY_TEMPLATE_PACK = 'bootstrap4'
The crispy-forms
package helps our forms look good. With the bootstrap4
setting it will render our forms according to the Bootstrap 4 HTML structure.
And let's wrap up with the email sending part.
I love two libraries that help with sending and managing outgoing emails.
One of them is django-post-office
which gives you email templates, scheduling, prioritization, and stores outgoing emails with logs and statuses, so you can conveniently debug on the app's side whether an email went out and what exactly was sent; if it wasn't sent, there will be logs attached to each email.
Second library is django-ses
, for sending emails via AWS Simple Email Service (SES).
# Django Post Office Settings
EMAIL_BACKEND = 'post_office.EmailBackend'
POST_OFFICE = {
'BACKENDS': {
'default': 'django_ses.SESBackend',
},
'DEFAULT_PRIORITY': 'now',
}
# AWS SES Settings
AWS_SES_REGION_NAME = env('AWS_SES_REGION_NAME', default='us-east-1')
AWS_SES_REGION_ENDPOINT = env('AWS_SES_REGION_ENDPOINT', default='email.us-east-1.amazonaws.com')
AWS_SES_CONFIGURATION_SET = env('AWS_SES_CONFIGURATION_SET', default=None)
These settings are worth some explanation.
When you want to send email via SES, you need to request sending capacity from AWS in a specific region. Before that, your account in that region is in the email sandbox and can only send emails to yourself, a.k.a. a verified email address.
When AWS allows you to send email via that region, you must make the app aware of it via the settings above.
AWS_SES_CONFIGURATION_SET
setting is needed if you have configured AWS CloudWatch to track opens, clicks, and so on. Leave it empty if you haven't.
This wraps up working on our tutorial/settings.py
and here is the full file for you to copy:
from pathlib import Path
import environ
import os
env = environ.Env()
"""
Project Settings
"""
BASE_DIR = Path(__file__).resolve(strict=True).parent.parent
DEBUG = env.bool('DJANGO_DEBUG', default=False)
# Allowed Hosts Definition
if DEBUG:
# If Debug is True, allow all.
ALLOWED_HOSTS = ['*']
else:
ALLOWED_HOSTS = env.list('DJANGO_ALLOWED_HOSTS', default=['example.com'])
SECRET_KEY = env('DJANGO_SECRET_KEY')
"""
Project Apps Definitions
Django Apps - Django Internal Apps
Third Party Apps - Apps installed via requirements.txt
Project Apps - Project owned / created apps
Installed Apps = Django Apps + Third Party Apps + Project Apps
"""
DJANGO_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.sites',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.redirects',
]
THIRD_PARTY_APPS = [
'import_export',
'django_extensions',
'rest_framework',
'storages',
'corsheaders',
'djangoql',
'post_office',
'allauth',
'allauth.account',
'allauth.socialaccount',
'allauth.socialaccount.providers.google',
'crispy_forms',
]
PROJECT_APPS = [
'usermodel',
]
INSTALLED_APPS = DJANGO_APPS + THIRD_PARTY_APPS + PROJECT_APPS
MIDDLEWARE = [
'corsheaders.middleware.CorsMiddleware',
'django.middleware.security.SecurityMiddleware',
'django.contrib.redirects.middleware.RedirectFallbackMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
# Databases
DATABASES = {
"default": env.db("DATABASE_URL")
}
DATABASES["default"]["ATOMIC_REQUESTS"] = True
DATABASES["default"]["CONN_MAX_AGE"] = env.int("CONN_MAX_AGE", default=60)
ROOT_URLCONF = 'tutorial.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [BASE_DIR / 'templates'],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'tutorial.wsgi.application'
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
AUTHENTICATION_BACKENDS = [
'django.contrib.auth.backends.ModelBackend',
]
# User Model Definition
AUTH_USER_MODEL = 'usermodel.User'
TIME_ZONE = 'UTC'
LANGUAGE_CODE = 'en-us'
SITE_ID = 1
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Admin URL Definition
ADMIN_URL = env('DJANGO_ADMIN_URL', default='admin/')
# Redis Settings
REDIS_URL = env('REDIS_URL', default=None)
if REDIS_URL:
CACHES = {
"default": env.cache('REDIS_URL')
}
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'console': {
'class': 'logging.StreamHandler'
},
},
'loggers': {
'': { # 'catch all' loggers by referencing it with the empty string
'handlers': ['console'],
'level': 'DEBUG',
},
},
}
# Static And Media Settings
AWS_STORAGE_BUCKET_NAME = env('AWS_STORAGE_BUCKET_NAME', default=None)
if AWS_STORAGE_BUCKET_NAME:
AWS_DEFAULT_ACL = None
AWS_QUERYSTRING_AUTH = False
AWS_S3_CUSTOM_DOMAIN = env('AWS_S3_CUSTOM_DOMAIN', default=None) or f'{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com'
AWS_S3_OBJECT_PARAMETERS = {'CacheControl': 'max-age=600'}
# s3 static settings
STATIC_LOCATION = 'static'
STATIC_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/{STATIC_LOCATION}/'
STATICFILES_STORAGE = 'tutorial.storages.StaticStorage'
# s3 public media settings
PUBLIC_MEDIA_LOCATION = 'media'
MEDIA_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/{PUBLIC_MEDIA_LOCATION}/'
DEFAULT_FILE_STORAGE = 'tutorial.storages.PublicMediaStorage'
STATICFILES_DIRS = (
# os.path.join(BASE_DIR, "static"),
)
else:
MIDDLEWARE.insert(2, 'whitenoise.middleware.WhiteNoiseMiddleware')
STATICFILES_STORAGE = 'whitenoise.storage.CompressedStaticFilesStorage'
WHITENOISE_USE_FINDERS = True
STATIC_HOST = env('DJANGO_STATIC_HOST', default='')
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATIC_URL = STATIC_HOST + '/static/'
if DEBUG:
WHITENOISE_AUTOREFRESH = True
DEFAULT_FROM_EMAIL = env('DEFAULT_FROM_EMAIL', default='test@example.com')
"""
Third Party Settings
"""
# Honeybadger Settings
HONEYBADGER_API_KEY = env('HONEYBADGER_API_KEY', default=None)
if HONEYBADGER_API_KEY:
MIDDLEWARE = ['honeybadger.contrib.DjangoHoneybadgerMiddleware'] + MIDDLEWARE
HONEYBADGER = {
'API_KEY': HONEYBADGER_API_KEY
}
# Celery Settings
try:
from kombu import Queue
from celery import Celery
CELERY_BROKER_URL = env('CELERY_BROKER_URL', default='amqp://localhost')
if CELERY_BROKER_URL:
CELERYD_TASK_SOFT_TIME_LIMIT = 60
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_RESULT_BACKEND = env('REDIS_URL', default='redis://localhost:6379/0')
CELERY_DEFAULT_QUEUE = 'default'
CELERY_QUEUES = (
Queue('default'),
)
CELERY_CREATE_MISSING_QUEUES = True
except ModuleNotFoundError:
print("Celery/kombu not installed. Skipping...")
# AllAuth Settings
AUTHENTICATION_BACKENDS += [
# `allauth` specific authentication methods, such as login by e-mail
'allauth.account.auth_backends.AuthenticationBackend',
]
ACCOUNT_USER_MODEL_USERNAME_FIELD = None
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_USERNAME_REQUIRED = False
ACCOUNT_AUTHENTICATION_METHOD = 'email'
ACCOUNT_UNIQUE_EMAIL = True
ACCOUNT_EMAIL_VERIFICATION = 'mandatory'
ACCOUNT_FORMS = {'signup': 'usermodel.forms.MyCustomSignupForm'}
ACCOUNT_MAX_EMAIL_ADDRESSES = 2
SOCIALACCOUNT_PROVIDERS = {
}
SOCIALACCOUNT_PROVIDERS_GOOGLE_CLIENT_ID = env('SOCIALACCOUNT_PROVIDERS_GOOGLE_CLIENT_ID', default=None)
if SOCIALACCOUNT_PROVIDERS_GOOGLE_CLIENT_ID:
SOCIALACCOUNT_PROVIDERS['google'] = {
'SCOPE': [
'profile',
'email',
],
'AUTH_PARAMS': {
'access_type': 'online',
},
# For each OAuth based provider, either add a ``SocialApp``
# (``socialaccount`` app) containing the required client
# credentials, or list them here:
'APP': {
'client_id': SOCIALACCOUNT_PROVIDERS_GOOGLE_CLIENT_ID,
'secret': env('SOCIALACCOUNT_PROVIDERS_GOOGLE_SECRET'),
}
}
# Crispy Forms Settings
CRISPY_TEMPLATE_PACK = 'bootstrap4'
# Django Post Office Settings
EMAIL_BACKEND = 'post_office.EmailBackend'
POST_OFFICE = {
'BACKENDS': {
'default': 'django_ses.SESBackend',
},
'DEFAULT_PRIORITY': 'now',
}
# AWS SES Settings
AWS_SES_REGION_NAME = env('AWS_SES_REGION_NAME', default='us-east-1')
AWS_SES_REGION_ENDPOINT = env('AWS_SES_REGION_ENDPOINT', default='email.us-east-1.amazonaws.com')
AWS_SES_CONFIGURATION_SET = env('AWS_SES_CONFIGURATION_SET', default=None)
One more step: create a file next to settings.py
and call it storages.py:
from storages.backends.s3boto3 import S3Boto3Storage
class PublicMediaStorage(S3Boto3Storage):
location = 'media'
default_acl = 'public-read'
file_overwrite = False
class StaticStorage(S3Boto3Storage):
location = 'static'
default_acl = 'public-read'
This is needed for our media and static storage to work, as we refer to these classes from settings.py
.
Django Custom User Model
Every Django project that employs users and authentication should define a custom user model. Even if you keep it the same as the stock one, it gives you the ability to modify it later to better fit your project's requirements.
Changing the user model mid-project is a very tricky task, since you will have other model instances referring to existing users, and all of this you will have to migrate.
With that said, in the root of the project create a folder usermodel
and an empty file in it __init__.py
.
You could achieve something similar by running python manage.py startapp usermodel
, but I just wanted to have a chance to talk about what makes a directory a Python module.
To be able to import a module or anything from it, a directory needs to have __init__.py
in it.
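You can see this rule in action with a throwaway directory (the names here are arbitrary; note that Python 3 also supports namespace packages without __init__.py, but an explicit __init__.py is the convention for regular packages like ours):

```python
import importlib
import os
import sys
import tempfile

# Build a directory containing only an empty __init__.py ...
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "demo_pkg"))
open(os.path.join(base, "demo_pkg", "__init__.py"), "w").close()

# ... and Python happily imports it as a package.
sys.path.insert(0, base)
module = importlib.import_module("demo_pkg")
print(module.__name__)
# → demo_pkg
```

This is all startapp does structurally: it creates the directory, __init__.py, and a few stub files.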
Now create usermodel/models.py
.
This is what we should put in the file:
import uuid
from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin
from django.contrib.postgres.fields import CIEmailField
from django.core.mail import send_mail
from django.db import models
from django.utils.translation import gettext_lazy as _
from django.utils import timezone
from usermodel.managers import UserManager
class User(AbstractBaseUser, PermissionsMixin):
"""
A fully featured User model with admin-compliant permissions.
Email and password are required. Other fields are optional.
"""
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
email = CIEmailField(
_('Email Address'),
unique=True,
error_messages={
'unique': _("A user with that email already exists."),
},
)
first_name = models.CharField(_('First Name'), max_length=255, blank=True)
last_name = models.CharField(_('Last Name'), max_length=255, blank=True)
is_staff = models.BooleanField(
_('Staff Status'),
default=False,
help_text=_('Designates whether the user can log into this admin site.'),
)
is_active = models.BooleanField(
_('Active'),
default=True,
help_text=_(
'Designates whether this user should be treated as active. '
'Unselect this instead of deleting accounts.'
),
)
# Audit Values
is_email_confirmed = models.BooleanField(
_('Email Confirmed'),
default=False
)
date_joined = models.DateTimeField(
_('Date Joined'),
default=timezone.now
)
objects = UserManager()
EMAIL_FIELD = 'email'
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = [
'first_name',
'last_name'
]
class Meta:
verbose_name = _('User')
verbose_name_plural = _('Users')
def clean(self):
super().clean()
self.email = self.__class__.objects.normalize_email(self.email)
def get_full_name(self):
"""
Return the first_name plus the last_name, with a space in between.
"""
return f"{self.first_name} {self.last_name}"
def get_short_name(self):
"""Return the short name for the user."""
return self.first_name
def email_user(self, subject, message, from_email=None, **kwargs):
"""Send an email to this user."""
send_mail(subject, message, from_email, [self.email], **kwargs)
We use email as the login and a UUID as the primary key.
General benefits of a UUID vs. an integer primary key:
- Nobody can estimate the number of users/orders/payments by looking at the latest ID they got when placing an order or creating any other type of object.
- You can't brute-force your way into seeing other objects, so it adds a kind of "security" layer. Even if some links to objects are public but intended only for those who have the link, they will be hard to access without knowing the exact ID.
- And a problem that would be nice to have: the biggest 32-bit integer, 2,147,483,647, seems like a big number, but integer is the default primary key type, and when you run out of numbers the database will refuse any new records. You might think, okay, I will just change the field type to a bigint, but imagine how long it will take to apply that migration to a table of 2 billion records. You could take a vacation during that time, which you probably need after building such a big project. 😂 A UUID will never have this problem.
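For a quick feel of what these primary keys look like, here is the same standard-library uuid.uuid4 the model uses as its default:

```python
import uuid

# uuid4() is random; collisions are astronomically unlikely,
# so it is safe to generate IDs without coordinating with the DB.
pk = uuid.uuid4()
print(pk)  # e.g. something like 9f1c5c5e-... in 8-4-4-4-12 form
print(len(str(pk)))
# → 36
```

Nothing about the value leaks creation order or record counts, which is exactly the point of the list above.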
Now, in our model we referenced a custom manager.
objects = UserManager()
Let's create a file usermodel/managers.py and put this code in it:
from django.contrib.auth.base_user import BaseUserManager


class UserManager(BaseUserManager):
    use_in_migrations = True

    def _create_user(self, email, password, **extra_fields):
        """
        Create and save a user with the given email and password.
        """
        if not email:
            raise ValueError('The given email must be set')
        email = self.normalize_email(email)
        user = self.model(email=email, **extra_fields)
        user.set_password(password)
        user.save(using=self._db)
        return user

    def create_user(self, email=None, password=None, **extra_fields):
        extra_fields.setdefault('is_staff', False)
        extra_fields.setdefault('is_superuser', False)
        return self._create_user(email, password, **extra_fields)

    def create_superuser(self, email, password, **extra_fields):
        extra_fields.setdefault('is_staff', True)
        extra_fields.setdefault('is_superuser', True)
        if extra_fields.get('is_staff') is not True:
            raise ValueError('Superuser must have is_staff=True.')
        if extra_fields.get('is_superuser') is not True:
            raise ValueError('Superuser must have is_superuser=True.')
        return self._create_user(email, password, **extra_fields)
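The manager relies on BaseUserManager.normalize_email, which lowercases only the domain part of the address. Its behavior can be sketched in plain Python (a simplified re-implementation for illustration, not Django's actual code):

```python
def normalize_email(email):
    """Lowercase the domain part of an email address, roughly the way
    Django's BaseUserManager.normalize_email does (simplified sketch)."""
    email = email or ''
    try:
        local_part, domain = email.strip().rsplit('@', 1)
    except ValueError:
        # No '@' in the string: return it unchanged.
        return email
    return local_part + '@' + domain.lower()

print(normalize_email('John.Doe@EXAMPLE.COM'))  # John.Doe@example.com
```

This is why two signups with differently-cased domains map to the same stored address, while the local part keeps its original case.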
And usermodel/admin.py:
from django.contrib import admin
from django.utils.translation import gettext_lazy as _
from django.contrib.auth.admin import UserAdmin as DefaultUserAdmin

from usermodel.models import User


@admin.register(User)
class UserAdmin(DefaultUserAdmin):
    fieldsets = (
        (
            None,
            {
                'fields': (
                    'email', 'password'
                )
            }
        ),
        (
            _('Permissions'),
            {
                'fields': (
                    'is_active',
                    'is_staff',
                    'is_superuser',
                    'groups',
                    'user_permissions',
                ),
            }
        ),
        (
            _('Important dates'),
            {
                'fields': (
                    'last_login',
                    'date_joined',
                )
            }
        ),
        (
            _('User data'),
            {
                'fields': (
                    ('is_email_confirmed',),
                )
            }
        ),
    )
    add_fieldsets = (
        (
            None,
            {
                'classes': ('wide',),
                'fields': ('email', 'password1', 'password2'),
            }
        ),
    )
    list_display = ('email', 'first_name', 'last_name', 'is_staff')
    search_fields = ('first_name', 'last_name', 'email')
    ordering = ('email',)
And the cherry on top will be a non-interactive management command to create a superuser.
While we already have a way to create a superuser, it is not convenient for non-interactive cases like the release phase.
What I chose to have in all my projects is this script, which checks whether there is a superuser in the database and, if not, creates one with a random password.
Create these directories and files inside usermodel
directory:
├── management
│ ├── __init__.py
│ └── commands
│ ├── __init__.py
│ └── makesuperuser.py
Each __init__.py should be empty, and here is the code for usermodel/management/commands/makesuperuser.py:
from django.contrib.auth import get_user_model
from django.core.management.base import BaseCommand
from django.utils.crypto import get_random_string

User = get_user_model()


class Command(BaseCommand):
    def handle(self, *args, **options):
        try:
            u = None
            if not User.objects.filter(email='admin@example.com').exists() and not User.objects.filter(
                    is_superuser=True).exists():
                print("admin user not found, creating one")
                email = 'admin@example.com'
                new_password = get_random_string()
                u = User.objects.create_superuser(email, new_password)
                print("===================================")
                print(f"A superuser was created with email {email} and password {new_password}")
                print("===================================")
            else:
                print("admin user found. Skipping super user creation")
            print(u)
        except Exception as e:
            print(f"There was an error: {e}")
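Note that calling get_random_string() without a length argument was deprecated around Django 3.1 (the implicit default is 12 characters), so on newer versions you would pass an explicit length. The kind of value it produces can be approximated with Python's standard library (the real function uses Django's crypto helpers; this is just an illustration):

```python
import secrets
import string

def random_password(length=12):
    """Approximate django.utils.crypto.get_random_string(): a random
    string of ASCII letters and digits of the given length."""
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))

pwd = random_password()
print(len(pwd))  # 12
```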
Now let's create migrations for our custom user model.
Run this:
docker-compose run web python manage.py makemigrations usermodel
The output of the command should be something like this:
Migrations for 'usermodel':
usermodel/migrations/0001_initial.py
Open the file containing the initial migration.
At the top of the file, add this import:
from django.contrib.postgres.operations import CITextExtension
Then add CITextExtension(), as the first element of the operations list.
The file should look like this:
# Generated by Django 3.1.7 on 2021-03-08 13:10
import django.contrib.postgres.fields.citext
from django.contrib.postgres.operations import CITextExtension
from django.db import migrations, models
import django.utils.timezone
import usermodel.managers
import uuid


class Migration(migrations.Migration):

    initial = True

    dependencies = [
        ('auth', '0012_alter_user_first_name_max_length'),
    ]

    operations = [
        CITextExtension(),
        migrations.CreateModel(
            name='User',
            fields=[
                ('password', models.CharField(max_length=128, verbose_name='password')),
                ('last_login', models.DateTimeField(blank=True, null=True, verbose_name='last login')),
                ('is_superuser', models.BooleanField(default=False, help_text='Designates that this user has all permissions without explicitly assigning them.', verbose_name='superuser status')),
                ('id', models.UUIDField(default=uuid.uuid4, primary_key=True, serialize=False)),
                ('email', django.contrib.postgres.fields.citext.CIEmailField(error_messages={'unique': 'A user with that username already exists.'}, max_length=254, unique=True, verbose_name='Email Address')),
                ('first_name', models.CharField(blank=True, max_length=255, verbose_name='First Name')),
                ('last_name', models.CharField(blank=True, max_length=255, verbose_name='Last Name')),
                ('is_staff', models.BooleanField(default=False, help_text='Designates whether the user can log into this admin site.', verbose_name='Staff Status')),
                ('is_active', models.BooleanField(default=True, help_text='Designates whether this user should be treated as active. Unselect this instead of deleting accounts.', verbose_name='Active')),
                ('is_email_confirmed', models.BooleanField(default=False, verbose_name='Email Confirmed')),
                ('date_joined', models.DateTimeField(default=django.utils.timezone.now, verbose_name='Date Joined')),
                ('groups', models.ManyToManyField(blank=True, help_text='The groups this user belongs to. A user will get all permissions granted to each of their groups.', related_name='user_set', related_query_name='user', to='auth.Group', verbose_name='groups')),
                ('user_permissions', models.ManyToManyField(blank=True, help_text='Specific permissions for this user.', related_name='user_set', related_query_name='user', to='auth.Permission', verbose_name='user permissions')),
            ],
            options={
                'verbose_name': 'User',
                'verbose_name_plural': 'Users',
            },
            managers=[
                ('objects', usermodel.managers.UserManager()),
            ],
        ),
    ]
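A note on what CITextExtension buys us: it enables Postgres's citext extension, which backs CIEmailField and makes email lookups case-insensitive at the database level. The observable effect is like comparing lowercased values in Python (illustrative values below):

```python
# With a plain EmailField these two would be treated as different users;
# with CIEmailField the database compares them case-insensitively,
# equivalent to this lowercased comparison:
stored_email = 'Admin@Example.com'
login_email = 'admin@example.com'
print(stored_email.lower() == login_email.lower())  # True
```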
Now you can apply the migration:
docker-compose run web python manage.py migrate
The output should look like this:
Using selector: EpollSelector
Operations to perform:
Apply all migrations: account, admin, auth, contenttypes, post_office, redirects, sessions, sites, socialaccount, usermodel
Running migrations:
Applying contenttypes.0001_initial... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0001_initial... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying auth.0007_alter_validators_add_error_messages... OK
Applying auth.0008_alter_user_username_max_length... OK
Applying auth.0009_alter_user_last_name_max_length... OK
Applying auth.0010_alter_group_name_max_length... OK
Applying auth.0011_update_proxy_permissions... OK
Applying auth.0012_alter_user_first_name_max_length... OK
Applying usermodel.0001_initial... OK
Applying account.0001_initial... OK
Applying account.0002_email_max_length... OK
Applying admin.0001_initial... OK
Applying admin.0002_logentry_remove_auto_add... OK
Applying admin.0003_logentry_add_action_flag_choices... OK
Applying post_office.0001_initial... OK
Applying post_office.0002_add_i18n_and_backend_alias... OK
Applying post_office.0003_longer_subject... OK
Applying post_office.0004_auto_20160607_0901... OK
Applying post_office.0005_auto_20170515_0013... OK
Applying post_office.0006_attachment_mimetype... OK
Applying post_office.0007_auto_20170731_1342... OK
Applying post_office.0008_attachment_headers... OK
Applying post_office.0009_requeued_mode... OK
Applying post_office.0010_message_id... OK
Applying post_office.0011_models_help_text... OK
Applying sites.0001_initial... OK
Applying redirects.0001_initial... OK
Applying sessions.0001_initial... OK
Applying sites.0002_alter_domain_unique... OK
Applying socialaccount.0001_initial... OK
Applying socialaccount.0002_token_max_lengths... OK
Applying socialaccount.0003_extra_data_default_dict... OK
src/tutorial %
Now run the makesuperuser management command.
docker-compose run web python manage.py makesuperuser
The output will contain the user password; copy it somewhere, as you will need it to log in to the admin panel.
admin user not found, creating one
===================================
A superuser was created with email admin@example.com and password rWKwHw5FK6tw
===================================
admin@example.com
src/tutorial %
Try running this command again.
admin user found. Skipping super user creation
None
src/tutorial %
See, the second time no user is created.
Congratulations, we are finished with our custom user model!
Procfile for Django Project
Procfile is the file which tells Appliku Deploy how to run your application.
This file must be located in the root of the project.
There are 3 types of records:
- web: the process that handles HTTP requests.
- release: holds the command that is executed on each release, such as applying migrations.
- other: all other processes have no special meaning; for example, a celery worker, a scheduler or anything else specific to your app.
A Procfile can have only one web process and one release process. You can have as many other processes as you need.
For our tutorial this will be the Procfile:
web: gunicorn tutorial.wsgi --log-file -
release: bash release.sh
Create a file release.sh in the root of the project:
#!/bin/bash
python manage.py migrate --noinput
python manage.py makesuperuser
release.sh will get executed on every new release; it will apply migrations and try to create a superuser in our app. As you remember, the user is only created on the first release, when no superuser exists yet.
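The Procfile format itself is trivial: one process: command pair per line. A minimal parser sketch (a hypothetical helper, just to illustrate the format, not Appliku's actual implementation):

```python
def parse_procfile(text):
    """Parse Procfile text into a {process_name: command} dict.
    Minimal sketch: skips blank lines and comments."""
    processes = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        name, _, command = line.partition(':')
        processes[name.strip()] = command.strip()
    return processes

procfile = """\
web: gunicorn tutorial.wsgi --log-file -
release: bash release.sh
"""
print(parse_procfile(procfile))
```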
Structure of Django Project
Let's take a look at our project before we start with deployment.
src/tutorial % tree
.
├── Dockerfile
├── Procfile
├── docker-compose.yml
├── manage.py
├── release.sh
├── requirements.txt
├── tutorial
│   ├── __init__.py
│   ├── asgi.py
│   ├── settings.py
│   ├── storages.py
│   ├── urls.py
│   └── wsgi.py
└── usermodel
    ├── __init__.py
    ├── admin.py
    ├── management
    │   ├── __init__.py
    │   └── commands
    │       ├── __init__.py
    │       └── makesuperuser.py
    ├── managers.py
    ├── migrations
    │   ├── 0001_initial.py
    │   └── __init__.py
    └── models.py
Push your Django Application to GitHub
In order to deploy your app, it needs to be pushed to a GitHub repository.
At this point of the tutorial it is assumed that you have an account on GitHub. If you don't, go to https://github.com and sign up.
Then create a repository; let's call it django_appliku_tutorial.
For privacy you can make the repository private, or you can make it public so your peers or future employers can see what you were up to :)
Just remember: never store any credentials in the code. Also remember: whatever you delete from the code stays in the repository history.
Hit the "Create repository" button.
Now you are looking at the empty repository page. Let's use the instructions from the section "…or create a new repository on the command line".
Here is the example GitHub gave me; you will have to change the path to your own repository.
Also, since we already have files, we want to add all of them with git add .
Go to the terminal, switch to the root of the project and run these commands:
echo "# django_appliku_tutorial" > README.md
git init
git add .
git commit -m "first commit"
git branch -M main
git remote add origin git@github.com:appliku/django_appliku_tutorial.git
git push -u origin main
After you pushed the code, the GitHub page of repository should look like this:
Deploying Django Project
Instead of doing old-fashioned manual deployment, writing a lot of configs and hoping it works, we'll use Appliku Deploy.
What Appliku Deploy does:
- Provisions a server in Digital Ocean(or AWS)
- Takes the code from GitHub repo,
- Builds your app on your server
- Deploys your app on your server
- Sets up web server (nginx) and issues SSL certificate from Let's Encrypt.
In order to deploy your app you should have accounts in GitHub, Digital Ocean and Appliku Deploy.
If you don't have them, follow these links and create them:
- GitHub.com https://github.com/join
- Digital Ocean https://cloud.digitalocean.com/registrations/new
- Appliku Deploy https://app.appliku.com/
If you just registered on Appliku Deploy and signed in, make sure to complete onboarding, connecting the account to GitHub and Digital Ocean.
First step is to create a server.
It is done via the Appliku Deploy interface. Please note that you can't reuse an existing server you created manually via the Digital Ocean interface; it must be provisioned via Appliku Deploy.
Go to the Servers tab: https://app.appliku.com/servers
Click the "Create New Server" button and you will be taken to the provider selection page ( https://app.appliku.com/providers ).
Select DigitalOcean.
You will be taken to the page where you can select type of Droplet you want to provision ( https://app.appliku.com/new-server-digital-ocean ).
For the purpose of this tutorial we will select the cheapest available server type (1GB RAM, 1 CPU, 25GB SSD for $5/month) and the region 🇳🇱 AMS3.
Click "Create A Server".
After this you will be taken to the server list. You will see your server without any details; that is because Digital Ocean hasn't fully provisioned it yet. When the server is provisioned, its IP address and size will appear. The progress of provisioning is displayed in the rightmost column.
You can click on the server name to see server details. This page is updated regularly to reflect server's current status.
When the "Status" becomes "Active", it means that Digital Ocean has finished provisioning the server.
At this moment the "Setup" field should say "Started".
It means that Appliku Deploy connected to server and is running setup scripts. It will install software needed to run your apps: Docker, Nginx, certbot and configure them.
It should take 2-3 minutes to complete setup.
You can follow the progress by going to the "Setup Logs" tab. Please keep in mind that this page is not updated on its own; you will have to refresh it to see the latest records.
Back to server's Overview tab: When the "Setup" field says "Finished" it means you can create an app and deploy it on this server.
If the "Setup" field says "Failed" then you can check on "Setup Logs" tab to try to figure out what happened.
The most common reason for failure is that the cloud provider gave us a bad server that was unable to reach the internet due to networking or disk issues. It is a rare occasion, but it happens.
In this case you should click "Manage Server in Digital Ocean Panel", destroy the server, and then create another server in the Appliku Deploy interface.
If you still see the old server in the list, open the server details and the page will refresh with the server's current status. If it has been deleted, its status will become "deleted" and it will be gone from the server list. You can then create another server.
Create an application. Go to "Applications" tab and click "New App From GitHub" ( https://app.appliku.com/start ).
You will see the form "Creating a new application".
Fill in the Application Name, pick the repository (the one we created earlier) and the branch (main), and select the server to deploy to.
After that you can click "Create an Application".
You will find yourself on the page of a newly created application.
Let's go over this page real quick.
First section is build settings.
It says that the base image is Python 3.8, which means that under the hood Appliku Deploy will use python:3.8 Docker image to build the image with your app.
You can specify the "build command", which will be executed as the last statement of our Dockerfile. Keep in mind that we pass all environment variables to the build, so your build command will be able to use them.
If you need to use your own Dockerfile instructions, you can select "Custom Dockerfile" in the dropdown and put the instructions in the text field.
Application Processes Section
In this list you will see all records from the Procfile in your repository except release.
Choose which processes you want to enable. For now we have only web, so switch its toggle to On.
Config Variables Section
In this section you should specify environment variables that will be passed to your application when running as well as at build stage.
Let's create several variables.
DJANGO_SECRET_KEY: give this variable some long value like "foisr45r4ufuihsuihiuh3rluhihuihrui4wh4uihu4huiwhui44343423" that nobody will ever guess.
DJANGO_ALLOWED_HOSTS: should contain your app name plus .applikuapp.com, or any other domain you will later attach to the site. In our case it is djangoapplikututorial.applikuapp.com.
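In settings, such a variable is typically read with django-environ and split on commas, so several domains can be listed in one value. The parsing can be sketched with plain Python instead of django-environ (the variable name matches the tutorial; the helper itself is illustrative):

```python
import os

def allowed_hosts_from_env(default=''):
    """Split DJANGO_ALLOWED_HOSTS into the list Django expects,
    the way a list-valued env var is usually parsed."""
    raw = os.environ.get('DJANGO_ALLOWED_HOSTS', default)
    return [host.strip() for host in raw.split(',') if host.strip()]

# Example: two domains in one env var.
os.environ['DJANGO_ALLOWED_HOSTS'] = 'djangoapplikututorial.applikuapp.com,example.com'
print(allowed_hosts_from_env())
```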
Heroku Config Vars Sync
If you are migrating this application from Heroku, this feature will come in handy. Enter your Heroku API key and application name, and Appliku will continuously pull any changes in Config Vars from Heroku and update them for your app. Keep in mind that editing config vars in Appliku while Sync is enabled makes no sense: they will be overridden on the next sync.
Now we need to create a database and a redis instance.
So instead of deploying the application right now, we should click "Continue to Application Overview".
This is how the application dashboard looks for a new app:
Go to the Databases tab.
Databases
Click "New Database", select Postgres and select your server.
Click "Create Database"
Your new postgres database should appear in the list.
You can see the type of the database, its State and the credentials URL.
When the State column says "Deployed", your database is ready to accept connections.
We also need a redis instance.
Let's add it the same way.
Click the "New Database" button, choose redis and the same server. Click "Create Database".
Redis instance should appear on the list, just like the postgres one.
Now we can go back to editing our config vars to make use of our new databases.
Go to application's Settings tab.
Click "Reveal Config Vars"
That's what we have there right now:
Postgres and redis credentials were added as their own, instance-specific variables.
What we need now is to create DATABASE_URL with the value from DATABASE_72_URL and REDIS_URL with the value from REDIS_73_URL.
The reason we have to do this manually is that you can have multiple databases of the same type, and you are responsible for choosing which ones your application's env vars point to.
That's how config vars should look at this stage.
Keep in mind that the numbers in DATABASE_72_URL and REDIS_73_URL will be different for you.
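DATABASE_URL follows the usual URL convention postgres://user:password@host:port/name, which libraries like dj-database-url or django-environ turn into Django's DATABASES setting. The decomposition can be sketched with the standard library (the credentials below are made up):

```python
from urllib.parse import urlparse

def parse_database_url(url):
    """Break a DATABASE_URL into the pieces Django's DATABASES dict needs.
    Simplified sketch of what dj-database-url / django-environ do."""
    parts = urlparse(url)
    return {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': parts.path.lstrip('/'),
        'USER': parts.username,
        'PASSWORD': parts.password,
        'HOST': parts.hostname,
        'PORT': parts.port,
    }

cfg = parse_database_url('postgres://appuser:s3cret@10.0.0.5:5432/appdb')
print(cfg['NAME'], cfg['HOST'], cfg['PORT'])  # appdb 10.0.0.5 5432
```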
Now that we are all set with config vars, go to the Overview tab.
We are ready to start the first deployment.
Click "DEPLOY NOW" button.
Information about the current deployment will appear:
When it is finished, the status will reflect that:
Let's click Manage Deployments and see our deployment logs for the generated admin password.
You are at the "Deploys" tab.
First card has the form with deployment settings. You can change the repository branch, server to deploy to and toggle Push to Deploy feature, that starts deployment on push to GitHub repository.
Second card contains history of deployments.
In this example I have two deployments. The first one failed for me, because I made a typo in the settings file while writing this tutorial. You can see that it says "Failed Building". This obviously means it failed, but it also tells us at which stage it failed: the Build stage.
You can click "View logs" to find out why it failed.
Our focus right now should be on the successful deployment.
It says "Finished Cleaning up".
There are several stages of deployments in Appliku Deploy:
- New - Deployment was just created, but nothing yet has been done.
- Building - your server is pulling code from repository and building the image
- Deploying - deploying our image
- Releasing - release command is being executed
- Cleaning up - your server is cleaning up obsolete docker image layers to free up some disk space.
When you click "View logs" you will see the password near the end of the logs window.
Let's open our app. In the Application navigation click "Open App". Your site will open in new window.
You should see "Not Found".
This is expected, because we haven't defined any pages yet.
If you get a 502 Gateway error, then you forgot to enable the "Web" process. Go back to the "Processes" tab and enable the web worker.
Then on the overview page click the "Apply Processes & Env Vars" button. This applies changes faster than a full rebuild.
If you see a 400 Bad Request error, then you didn't spell the domain name correctly in the DJANGO_ALLOWED_HOSTS env var.
To fix it, go to the application's "Settings" tab and edit the value of DJANGO_ALLOWED_HOSTS to match the domain.
If there is any other error, try redeploying the app and watching the logs for anything error-related. There is also the application's "Log" tab that can help you figure out what the problem might be.
Now let's go to admin interface of your Django Project.
Add /admin
to the end of your webiste.
You will see Django Login form.
Enter admin@example.com as the login. The password is the generated one you saw in the deployment logs.
Hit the "Log In" button.
You should see the Django admin panel.
Congratulations!
You have just created your very first Django app and deployed it with Appliku Deploy, all without learning any devops: nginx, certificates, etc. It is all done for you.
New articles are coming soon; you will be able to expand the functionality of your Django project with sending emails, accepting payments and building a proper SaaS product.
Happy deploying!