
zchtodd

Originally published at theparsedweb.com

Building a SaaS App: Beyond the Basics (Part II)

By the end of this post, you will have a deployable app that is ready to serve real users efficiently and securely!

If you haven't read the first post in the series, this is a step-by-step guide to building a SaaS app that goes beyond the basics, showing you how to do everything from accepting payments to managing users. The example project is a Google rank tracker that we'll build together piece by piece, but you can apply these lessons to any kind of SaaS app.

In the last post, we built out the Puppeteer script that will do the actual scraping. In this post, we're going to focus on infrastructure – namely, how to set up and deploy the application.

For this project, I'm using NGINX, Flask, and Postgres on the back-end. We'll be using React for the front-end. Docker and Docker Compose will make it easier to deploy anywhere.

You can find the complete code on GitHub.

Table of Contents

Setting up Docker and Docker Compose
Deploying the development version
Understanding how NGINX and Flask work together
Testing the NGINX and Flask configuration
Postgres Configuration
Setting up SSL with Let's Encrypt
Deploying the production version
What's next?

Setting up Docker and Docker Compose

A real SaaS app will be deployed to many environments: developer laptops, a staging environment, and a production server, to name just a few. Docker makes this both an easier and more consistent process.

Docker Compose orchestrates multiple containers, so that we can manage the entire application reliably. That orchestration is limited, however, to one host. Many apps will never need to scale beyond one host, but options like Kubernetes exist should your app become that successful!

To get started, we'll need to have Docker and Docker Compose installed on the host.

curl -fsSL https://get.docker.com -o get-docker.sh # Download install script.
sudo chmod u+x ./get-docker.sh # Make script executable.
sudo ./get-docker.sh # Run the install script.
sudo usermod -aG docker $USER # Add current user to the docker group.
newgrp docker # Reload groups so that changes take effect.

Docker should now be installed. Run docker ps to verify that the installation worked. You should see something like this.

ubuntu@ip-172-31-38-160:~$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Installing Compose is fairly straightforward as well.

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
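
If you want to confirm that Compose installed correctly, you can check its version from the command line.

docker-compose --version # Prints the installed version, e.g. docker-compose version 1.29.2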

Deploying the development version

Now that Docker is installed, we can jump straight to starting the application. Use Git to clone the repository if you haven't already.

Once the repository is cloned, you can start up the application simply by running docker-compose up -d and waiting for the images to download and build. Docker will pull the NGINX and Postgres images, as well as build the image for the app container.
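
If you'd like those commands in one place, the sequence looks roughly like this. The repository URL comes from the GitHub link above, and the project directory name is an assumption based on the container names you'll see in a moment.

git clone <repository-url> # Use the URL from the GitHub link above.
cd openranktracker # Assumed directory name; adjust to wherever you cloned the repo.
docker-compose up -d # Build and start all containers in the background.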

You can run docker ps after the images have finished downloading and building. The output should look similar to this.

CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS          PORTS                                       NAMES
0cc1d1798b49   nginx                 "/docker-entrypoint.…"   4 seconds ago    Up 3 seconds    0.0.0.0:80->80/tcp, :::80->80/tcp           openranktracker_nginx_1
eb3679729398   open-rank-tracker     "python tasks.py wor…"   51 seconds ago   Up 49 seconds                                               openranktracker_app-background_1
ab811719630a   open-rank-tracker     "gunicorn --preload …"   51 seconds ago   Up 49 seconds                                               openranktracker_app_1
df8e554d7b12   postgres              "docker-entrypoint.s…"   52 seconds ago   Up 50 seconds   0.0.0.0:5432->5432/tcp, :::5432->5432/tcp   openranktracker_database_1
68abe4d03f62   redis:5.0.4-stretch   "docker-entrypoint.s…"   52 seconds ago   Up 50 seconds   6379/tcp                                    openranktracker_redis_1

If you've never used Docker before, then this might seem like magic, but the Dockerfile and docker-compose.yml files contain all of the relevant details. The first contains instructions for building the Flask API container, and the second specifies all of the images that make up the application.

You may notice that we have docker-compose.yml as well as docker-compose.prod.yml. This is how we'll manage the differences in deployment between development and production versions. There are typically several important differences between environments, such as how SSL certificates are handled.

Understanding how NGINX and Flask work together

Although Flask has its own built-in web server, we'll use NGINX to process requests from the user. The Flask web server is meant only for development purposes and serves requests using a single thread, making it unsuitable for our API, and especially unsuitable for serving static files.

NGINX acts as a proxy, forwarding API requests over to Flask. We'll use Gunicorn to overcome our single-threaded Flask issue. Gunicorn manages a pool of processes, each running its own instance of Flask and load balancing between them. This may sound complicated, but the setup is managed within just a few small files.
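
In fact, the entire Gunicorn side boils down to a single command, which you'll see again in docker-compose.yml below.

# --preload imports the app once before forking; --workers sets the size of the process pool.
gunicorn --preload --bind=unix:/sock/app.sock --workers=6 wsgi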

Let's take a look at how nginx.conf is configured first.

worker_processes 4;

events { worker_connections 1024; }

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        listen [::]:80;

        location / {
            root /static;
            try_files $uri $uri/ /index.html;

            add_header Cache-Control "no-cache, public, must-revalidate, proxy-revalidate";
        }

        location /api {
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $host;
            proxy_pass http://unix:/sock/app.sock:/api;
        }
    }
}

The server block tells NGINX to listen on port 80, while the location blocks define what should happen when a request URL matches a certain pattern. The first block can match any request, but NGINX picks the most specific match, so requests whose path starts with /api are handled by the second block instead.

The second location block forwards the request to Flask by using the proxy_pass directive. The http://unix:/sock/ means that the network traffic will be over a Unix domain socket. The app.sock is a file that is shared between NGINX and Flask – both read and write from this domain socket file to communicate. Lastly, :/api means that the receiving side, Flask, should get requests prefixed with /api.
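
If you want to see the socket for yourself once the app is running, you can list the contents of the shared volume from inside the NGINX container. The container name below matches the docker ps output from earlier; adjust it if yours differs.

docker exec openranktracker_nginx_1 ls -l /sock # The app.sock file created by Gunicorn should be listed.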

The X-Forwarded-Proto header will become important later when we introduce SSL in our production configuration. It tells Flask which protocol the original request used, so if a request came in over HTTPS, Flask treats it as an HTTPS request even though the proxied traffic travels over the local socket. This is important when implementing features like signing in with Google, because OAuth libraries require that every request be made over SSL.

Now let's take a look at the section of the docker-compose.yml file that defines how NGINX and Flask are deployed.

version: '3'

volumes:
    sock:

services:
    nginx:
        image: nginx
        restart: always
        volumes:
            - ./nginx.conf:/etc/nginx/nginx.conf
            - sock:/sock
        ports:
            - "80:80"

    app:
        command: gunicorn --preload --bind=unix:/sock/app.sock --workers=6 wsgi
        restart: always
        image: open-rank-tracker
        build: .
        volumes:
            - sock:/sock

The most relevant part here is the sock volume definition. By declaring sock as a top-level volume, we can mount it into both the NGINX and Flask containers, so the two share the Unix domain socket file that Gunicorn creates there.
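
Named volumes are managed by Docker itself. If you're curious where the socket actually lives on the host, you can inspect the volume; the volume name here is an assumption based on the Compose project name.

docker volume inspect openranktracker_sock # Volume name assumed from the project name; run docker volume ls to confirm.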

Testing the NGINX and Flask configuration

We don't have to wait until we're building the UI to test whether this configuration is working or not. You can test this deployment using a browser, or even a simple command-line program like curl.

Because we haven't touched on the UI yet, we'll need to create a basic index.html file before we can really do any testing. Create an index.html file under the static directory within the project root.

sudo touch static/index.html
sudo bash -c 'echo "Hi, world" > static/index.html'
curl http://localhost

Using curl or going to http://localhost (or to the IP of your server if deployed elsewhere) in your browser should show Hi, world in response. This means the request matched the first location block in nginx.conf – in fact, any request you send that doesn't start with /api should return Hi, world at this point.

If you try going to http://localhost/api in your browser, you'll see the Flask 404 page instead. We haven't defined any routes in Flask yet, so the 404 is expected, but we know that NGINX and Flask are configured properly at this point.

(Screenshot: the default Flask 404 page)
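
You can see the same thing from the command line. The -i flag tells curl to print the status line and headers, where you should see the 404 coming back from Flask.

curl -i http://localhost/api # Expect a 404 response served by the Flask app.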

Postgres Configuration

Setting up Postgres with Docker is fairly simple. I'll show you the docker-compose.yml configuration below, and walk through a few of the most important sections.

database:
    image: postgres
    restart: always
    volumes:
       - /var/lib/postgresql/data:/var/lib/postgresql/data # Postgres stores its data under /var/lib/postgresql/data by default.
    expose:
       - 5432
    env_file:
       - variables.env

We name the service database, which is important, because that's the host name other containers can use to connect with Postgres. The volumes directive maps a directory on the host to a matching directory within the container, so that if the container is stopped or killed, we haven't lost the data.

The expose directive allows other containers access on port 5432, but does not allow access outside of the Docker network. This is an important distinction for security purposes. We could also use the ports directive, which would allow access to 5432 from the Internet. This can be helpful if you want to connect remotely, but at that point your Postgres password is the only thing preventing the entire world from gaining access.

Finally, the env_file entry tells Compose where to look for environment variables, which are then passed into the container. The Postgres image has only one required environment variable, POSTGRES_PASSWORD, but we'll define a few others as well.

POSTGRES_USER
POSTGRES_PASSWORD
POSTGRES_HOST
POSTGRES_DB

Because they're listed without values in variables.env, each variable takes its value from the host environment. You can also hard code values inside the config file, but it's better to keep them out of source control, especially with values such as passwords or API keys.
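
For example, you could export the values in your shell (or in a file you source during deployment) before bringing the stack up. The user and database names below match the psql example later in this post; the password is a placeholder.

export POSTGRES_USER=pguser # Matches the psql example below.
export POSTGRES_PASSWORD=change-me # Placeholder; use a strong password.
export POSTGRES_HOST=database # Matches the service name in docker-compose.yml.
export POSTGRES_DB=openranktracker # Matches the psql example below.
docker-compose up -d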

Let's test out connecting to the Postgres instance using the psql command-line program. First, find the ID of the Postgres container using docker ps, and then we'll connect locally using docker exec.

docker exec -it ba52 psql -U pguser -d openranktracker

If all goes well, you'll be greeted with the Postgres interactive shell prompt.
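
If you'd rather not look up the container ID, Compose can also address the container by its service name, which gives an equivalent way in (assuming the same user and database names as above).

docker-compose exec database psql -U pguser -d openranktracker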

Setting up SSL with Let's Encrypt

We'll need to set up SSL certificates via Let's Encrypt before we can deploy the production version of the app. This is a quick process that involves proving to Let's Encrypt that you are the owner of the server, after which they will issue certificate files.

You'll need a domain name before obtaining a certificate. I'm using Google Domains, but any domain registrar should work.

Installing the certbot agent is the first step in the process.

sudo apt-get install -y certbot

Now we can request a certificate, but first make sure port 80 is available – if the app is running, stop it so that certbot can bind to port 80.

sudo certbot certonly --standalone --preferred-challenges http -d openranktracker.com

Of course, you should replace openranktracker.com with your own domain name. Certificates are valid for 90 days, after which a simple renewal process is required. We'll go through setting up an automated renewal process a bit later.
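
The automation comes later, but you can already confirm that renewal will work by using certbot's dry-run mode. As with the initial request, make sure nothing else is listening on port 80 first.

sudo certbot renew --dry-run # Simulates a renewal without issuing a real certificate.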

Deploying the production version

What we've set up so far is great for local development on a laptop. In the real world, however, our app should at least have SSL enabled. Luckily, it's not hard to go that extra step for our production configuration.

We'll take advantage of a Compose technique known as stacking to make the configuration change as simple as possible. Instead of having to redefine everything in the separate docker-compose.prod.yml file, we only need to specify what is different, and those sections will take precedence.

version: '3'

services:
    nginx:
        image: nginx
        restart: always
        volumes:
            - /etc/letsencrypt:/etc/letsencrypt
            - ./nginx.prod.conf:/etc/nginx/nginx.conf
            - ./static:/static
            - sock:/sock
        ports:
            - "443:443"
            - "80:80"

This file contains only the NGINX service, because the configuration for the app and database remain the same. The volumes section exposes the Let's Encrypt certificate to the NGINX container, and the modified nginx.prod.conf makes use of the certificate to serve the application over HTTPS.

Let's take a look at the nginx.prod.conf file to see how SSL is handled.

worker_processes 4;

events { worker_connections 1024; }

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        listen [::]:80;
        server_name _;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl default_server;

        ssl_certificate /etc/letsencrypt/live/openranktracker.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/openranktracker.com/privkey.pem;

        location / {
            root /static;
            try_files $uri $uri/ /index.html;

            add_header Cache-Control "no-cache, public, must-revalidate, proxy-revalidate";
        }

        location /api {
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $host;
            proxy_pass http://unix:/sock/app.sock:/api;
        }
    }
}

This should look mostly familiar, except we now have two server blocks: one listens on port 80 and redirects traffic to port 443, while the other listens on 443 and serves the app as well as static files. If you try going to the HTTP version, your browser should be immediately redirected to the HTTPS version.

We'll use a stacked command with Compose to bring up the app with this configuration.

docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d

And presto! You should now have NGINX serving requests with SSL enabled.
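
You can verify both the certificate and the redirect from the command line. The -I flag tells curl to print only the response headers; replace the domain with your own.

curl -I https://openranktracker.com # Should return 200, served over your new certificate.
curl -I http://openranktracker.com # Should return a 301 redirect pointing at the HTTPS URL.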

What's next?

I hope you liked the second part of the SaaS app series! Up next, we'll start building the data model for the application, and set up the first route handler, so that the scraper we built in part one has a place to report its results.
