Introduction
This article provides a step-by-step guide to deploying the Geolocation API to your own Google Cloud project. It focuses on using automation tools and scripts to deploy a FastAPI Python web application to Google Cloud's Cloud Run, a serverless container platform.
For more information on the design and tech stack used, please refer to Building Geolocation API. In this article, we will focus on deployment.
As an overview, the project has five distinct components:
- A FastAPI application that handles web requests and queries IP addresses to retrieve geolocation data from MaxMind GeoLite2 databases.
- A `Dockerfile` that automates the process of downloading the GeoLite2 databases and building a lightweight Docker image.
- A `cloudbuild.yaml` file that defines the steps of the build pipeline, which include building the Docker image and deploying a new revision to Cloud Run.
- A CDKTF (Cloud Development Kit for Terraform) application that deploys the required cloud infrastructure for the Geolocation API.
- A `deploy.sh` shell script that sets up the Cloud Shell VM environment for deployment.
For the purpose of deployment, we will only execute the `deploy.sh` shell script, which takes care of setting up our environment and deploying our CDKTF application. However, before we run the script, we need to do a few things manually first. Please continue reading to find out more.
Accompanying YouTube Video
If you'd rather watch a video instead of reading a long article, here is a YouTube video that accompanies this content.
Live Demo
Explore a live demo of the Geolocation API hosted on my Google Cloud platform, running as a Cloud Run revision, on this page. Enter a valid public IP address, whether IPv4 or IPv6, to retrieve its details. If no input is provided, the API will return details of your own IP address.
Overview of deployment steps
- We will start by creating a MaxMind developer account to obtain a license key. This key will be used in our `Dockerfile` to download the latest GeoLite2 databases.
- We will log in to Google Cloud Platform to create a project and use Google Cloud Shell to deploy the required components.
- We will fork the GitHub repository so that it can be connected to a Cloud Build trigger.
- We will connect our GitHub repository to the Cloud Build trigger to use it as a source.
- We will use a `.env` file to customise and configure our environment.
- We will use the `deploy.sh` shell script to prepare the Google Cloud Shell VM and deploy our FastAPI application.
Let's begin by going through each item on this list, one at a time.
Create a MaxMind Developer Account
To set up automatic database updates within our container, we need to create a MaxMind account and obtain a license key. To do this:
- Go to the MaxMind website and sign up for an account to access GeoLite2 databases.
- Check your email for a verification link. Click on the link to set a password for your account.
- Log in to your MaxMind account. You will receive a verification code via email. Copy and paste the verification code to complete the authentication process.
- On your account dashboard, click on "Manage License Keys" in the left sidebar.
- Click on "Create New License Key" and enter a name for your license key.
- Click on "Create" to generate your license key.
Keep your license key window open as we will need it shortly.
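For context, the account ID and license key pair is what MaxMind's `geoipupdate` tool uses to authenticate database downloads; the `GEOIPUPDATE_*` variables configured later in this guide suggest this project uses that tool inside the container. As an illustration only, with hypothetical values and an assumed set of editions, a minimal `GeoIP.conf` for `geoipupdate` looks like this:

```
# Hypothetical values; substitute your own account ID and license key.
AccountID 123456
LicenseKey abcdefgh12345678
EditionIDs GeoLite2-City GeoLite2-ASN
```

In our deployment, you will not write this file by hand; you will pass the same two values through environment variables in the `.env` file instead.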
Create a Google Cloud Project
Before we begin, please make sure that you are logged in to your Google Cloud account and have a valid billing account. You may be charged for using Google Cloud services, but Google Cloud's free tier is more than enough to test our deployment without incurring any cost.
The free tier includes a set of Google Cloud services that you can use for free, up to certain usage limits. If you exceed those limits, you will be charged for the additional usage.
If you do not want to continue using the service, our CDKTF implementation makes it very easy to delete the deployed resource in just one command.
- Go to the Google Cloud Console.
- Click the Sign in button.
- Enter your Google Account email address and password, then click Sign in.
Once you are logged in, you will be taken to the Google Cloud Platform console. From here, you can start using Google Cloud services.
To create a new Google Cloud project for deploying our geolocation service, follow these steps:
- Click on the Select Project button located in the top-left corner of the GCP Console.
- Click on "New Project" to create a new project.
- In the "Project Name" field, enter "geolocation."
- Make a note of the project ID that Google Cloud assigns. Project IDs must be globally unique, so Google Cloud appends a random number to the name.
- Click "Create."
- Click the Select Project dropdown in the top-left corner of the GCP console and select the newly created project to make it the active project.
To learn more about creating a project on Google Cloud Platform, follow this guide.
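If you prefer the command line, the same steps can be sketched with the gcloud CLI from Cloud Shell. The project ID below is hypothetical; yours must be globally unique:

```shell
# Hypothetical, globally unique project ID.
PROJECT_ID="geolocation-123456"

# Create the project and make it the active one for subsequent commands.
gcloud projects create "$PROJECT_ID" --name="geolocation"
gcloud config set project "$PROJECT_ID"
```

Either route, console or CLI, leaves you with an active project ready for the next steps.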
Fork and Clone the Git Repository
We need to fork the Git repository because Cloud Build only allows connecting to repositories from our own GitHub account, even if the repository is public. Forking a repository creates a copy of it in our own account, which we can then clone to our Cloud Shell environment. Once the repository is cloned, we can connect it to our Cloud Build trigger.
Here is a more detailed explanation of the steps involved:
- Go to the GitHub repository.
- Click the Fork button in the top right corner of the page.
- This will create a copy of the repository in our own GitHub account.
- Once the repository has been forked, click the Code button.
- In the Clone with HTTPS section, copy the URL of the repository.
- Open a terminal window in our Cloud Shell environment.
- Type the following command to clone the repository:
```shell
git clone <URL>
```
Replace `<URL>` with the URL that we copied in step 5.
The repository will be cloned to our Cloud Shell environment.
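As a concrete sketch, assuming your fork keeps the repository name `geolocation` (which the later `cd ~/geolocation` step suggests), the clone command would look like this, with the username placeholder left for you to fill in:

```shell
# Hypothetical URL; substitute your own GitHub username.
# Cloning into ~/geolocation matches the path used later by deploy.sh.
git clone https://github.com/<your-username>/geolocation.git ~/geolocation
```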
Connect the Git Repository
In this step, we authenticate and connect our GitHub repository to the Cloud Build trigger.
To connect our GitHub repository to Cloud Build, follow these steps:
- Go to the Repositories page in the Google Cloud Console.
- Scroll to the bottom of the page and click Connect Repository.
- Leave the Region as global and make sure GitHub is selected.
- Click Continue.
- If you are not already logged in to GitHub, you will be prompted to do so.
- Once you are logged in, click Authorize Google Cloud Build by GoogleCloudBuild.
- Click Connect to connect your repository.
Do not create a trigger yet. We will create it using CDKTF.
Create the `.env` File
Our CDKTF stacks depend on values from the `.env` file to set up our infrastructure.
To set up the `.env` file, follow these steps:
- Click the Activate Cloud Shell button at the top of the page.
- Wait for the Cloud Shell session to open.
- Click "Open Editor" to open the file editor.
- Copy the contents of the `example_env.txt` file.
- Create a new file and paste the copied content.
- Save the file as `.env` in the root directory of your cloned repository.
- Find the random numeric ID that Google assigned to your project ID and set it as the `RANDOM_ID` variable.
- Set your preferred Google Cloud region in the `REGION_PREFERRED` variable.
- Set the URL of your forked Git repository in the `GIT_SOURCE_REPOSITORY` variable.
- If you will be using this API from a front end, set the `FASTAPI_CORS_ORIGINS` variable accordingly.
- Find the account ID and license key of your MaxMind account and assign these values to the `GEOIPUPDATE_ACCOUNT_ID` and `GEOIPUPDATE_LICENSE_KEY` environment variables respectively.
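Putting it together, a filled-in `.env` might look like the following. Every value here is a hypothetical placeholder, and the authoritative list of variables is whatever `example_env.txt` contains:

```
RANDOM_ID=123456
REGION_PREFERRED=us-central1
GIT_SOURCE_REPOSITORY=https://github.com/<your-username>/geolocation
FASTAPI_CORS_ORIGINS=https://example.com
GEOIPUPDATE_ACCOUNT_ID=999999
GEOIPUPDATE_LICENSE_KEY=your_license_key
```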
Be sure to remove any unwanted spaces after the equal sign or after the variable value.
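A quick way to catch such stray whitespace is to grep the file. This is a minimal sketch, run from the repository root:

```shell
# Flag .env lines that have whitespace around '=' or trailing whitespace,
# which can silently break variable values during deployment.
if [ ! -f .env ]; then
  echo "No .env file in the current directory." >&2
elif grep -nE '[[:space:]]=|=[[:space:]]|[[:space:]]$' .env; then
  echo "Fix the flagged lines above."
else
  echo ".env looks clean."
fi
```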
Our `.env` file is now configured and we are ready to deploy our Cloud Run service.
Run the `deploy.sh` Script
In this final step, we execute our `deploy.sh` script. To learn in detail what the script does, please refer to the Building Geolocation API video. From a deployment standpoint, in a fresh and clean Cloud Shell environment, the script will first prepare our Cloud Shell VM and then deploy our CDKTF stacks one by one.
When I initially started working on the project, I began documenting the steps to write a detailed guide for setting up the Cloud Shell VM to deploy our microservice. However, I realized that most of this work can be easily automated and a simple shell script would be more beneficial and user-friendly for everyone involved.
Consequently, I went ahead and created a shell script that handles the setup and deployment of our foundational CDKTF stacks. We then had to trigger the build manually once before we could deploy the third and final CDKTF stack, which deploys our Cloud Run service.
In that version of the script, I was using `pyenv` to set the Python version globally, and there were a couple of manual steps that had to be taken in an exact order at a particular point in time. This made the script a little difficult to understand and troubleshoot. So, I spent some more time refactoring, and the result is a fully automated, end-to-end deployment of our Geolocation API microservice with just one command.
To execute our script, make sure you are in the project root directory and run the `./deploy.sh` script.
```shell
# make sure we are in the right directory
cd ~/geolocation
./deploy.sh
```
If prompted, select Authorize to continue.
Now, sit back and watch as the Cloud Shell VM is set up and our service is built, tested, and deployed on Cloud Run.
This script does a lot of work. To understand more, please watch my video about Building Geolocation API, where I walk through the code and delve into how the various tools work together and how this shell script brings everything together. It takes approximately 12 minutes to set up the VM, build the container image, and deploy a Cloud Run revision.
The Finish Line: Completing the Deployment Journey
At this stage, we should have a running Cloud Run revision for our geolocation service. You can check the status of the deployed Cloud Run service here. Click on the service link to open the Cloud Run page and access the URL the service is hosted on.
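The same check can be done from the command line with gcloud. This is a sketch: the service name `geolocation` and the region below are assumptions, so adjust them to match your deployment:

```shell
# List Cloud Run services in the project, then print the URL of one service.
gcloud run services list --region us-central1
gcloud run services describe geolocation --region us-central1 \
  --format="value(status.url)"
```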
Also, check the weekly schedule configured to trigger our Cloud Build here. Every week, Cloud Build will rebuild the image with updated MaxMind GeoLite2 databases and deploy a new Cloud Run revision.
It took quite a bit of work to get here, but from now on, automation takes over. We can use this API in any number of applications that need geolocation information, without worrying about keeping the databases up to date or about the scalability of the service. Cloud Run will, by design, spin up as many containers as demand requires, and when there is no demand, it will terminate all containers and scale the service down to zero. We are billed only for the time our containers are serving requests.
Thank you for reading. I hope you find this useful. I know there is a lot that can be improved. Your feedback and suggestions are very important to me. Please take a moment to leave a comment.