
Mohammed Anjum

Posted on • Originally published at anjum-py.github.io

Code Walkthrough: Building Geolocation API and Automating Deployment to Cloud Run

Introduction

This article serves as an introduction to my new YouTube video. In the video, I take you through my journey of creating an API that returns geolocation information for IP addresses. But what makes this project even more exciting is the focus on automating deployment to serverless, cloud-native infrastructure, specifically Google Cloud's Cloud Run, using the Cloud Development Kit for Terraform (CDKTF) and a bash script.

Before we dive into the details, I want to let you know that the code for this project is open-source and available in the accompanying GitHub repository. I highly encourage you to explore the code and provide your valuable feedback. Your suggestions and recommendations will be instrumental in improving the quality of my code.

In this video, my main focus will be on walking you through the source code, explaining how different components are structured, and showcasing their interactions. If you're interested in learning how to deploy the Geolocation API in your own Google Cloud Project, I recommend checking out "Deploying Geolocation API". The current video is more about understanding the source code and how the different pieces fit together.

Project Overview

Let's start with an overview of the project.

The geolocation-api project comprises five distinct components:

  1. FastAPI Application: The FastAPI application handles web requests and retrieves geolocation data from the MaxMind GeoLite2 databases. Along with the FastAPI Python web framework, it has a few dependencies, including geoip2. The application includes middleware components for TrustedHost and CORS handling, and it defines endpoints for health checks and IP geolocation lookup.
  2. Dockerfile: The Dockerfile employs a multi-stage build. The first stage creates the configuration file required by the geoipupdate program and downloads the MaxMind GeoLite2 databases. The second stage uses the Python slim-buster image, installs the dependencies, and sets up a Python virtual environment. The third and final stage combines the outputs of the previous stages into the final image.
  3. cloudbuild.yaml: This file defines the build pipeline steps for Google Cloud's Cloud Build service: copying the .env file, running tests, building and pushing the Docker image, and updating the Cloud Run service with a new revision.
  4. CDKTF Application: CDKTF is used to define and provision the cloud infrastructure required for our API. Our CDKTF application has three stacks. The base stack enables the required Google Cloud APIs and creates a bucket to store our .env file. The pre-cloudrun stack creates a dedicated service account for the Cloud Build pipeline, creates an Artifact Registry repository to store Docker container images, sets up a Cloud Build trigger for manual invocation, and creates a Cloud Scheduler job that automatically triggers a weekly rebuild of our image. The cloudrun stack creates a dedicated service account for the Cloud Run service, creates the Cloud Run service using the image built by Cloud Build, and configures the service to be publicly accessible.
  5. deploy.sh Shell Script: The deploy.sh script automates the setup of the Cloud Shell environment by installing the required components, such as the required Python version, Poetry for managing the virtual environment, the CDKTF Python bindings, and the cdktf-cli npm package, and then runs cdktf deploy to provision the cloud resources.
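A multi-stage Dockerfile along the lines described above might look like the following sketch. The base image tags, build arguments, file paths, and the use of a requirements.txt and uvicorn are assumptions for illustration:

```dockerfile
# Stage 1: write the GeoIP.conf that geoipupdate needs, then download
# the GeoLite2 databases (account ID and license key come in as build args).
FROM debian:bookworm-slim AS databases
RUN apt-get update && apt-get install -y --no-install-recommends geoipupdate
ARG MAXMIND_ACCOUNT_ID
ARG MAXMIND_LICENSE_KEY
RUN printf 'AccountID %s\nLicenseKey %s\nEditionIDs GeoLite2-City GeoLite2-Country\n' \
      "$MAXMIND_ACCOUNT_ID" "$MAXMIND_LICENSE_KEY" > /etc/GeoIP.conf \
 && mkdir -p /data && geoipupdate -d /data

# Stage 2: install the Python dependencies into a virtual environment.
FROM python:3.11-slim AS builder
ENV VIRTUAL_ENV=/opt/venv
RUN python -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Stage 3: combine the databases, the virtual environment, and the
# application code into the final runtime image.
FROM python:3.11-slim
COPY --from=databases /data /data
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY ./app /app
WORKDIR /app
# Cloud Run sends traffic to port 8080 by default.
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
```

The payoff of the multi-stage approach is that neither the MaxMind credentials nor the build tooling end up in the final image, only the databases and the installed dependencies.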
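The build pipeline can be sketched as a cloudbuild.yaml of roughly this shape; the substitution variables, image name, and service name are assumptions, not the repository's actual values:

```yaml
steps:
  # Copy the .env file from the bucket created by the base stack.
  - name: gcr.io/cloud-builders/gsutil
    args: ["cp", "gs://${_ENV_BUCKET}/.env", ".env"]

  # Run the test suite in a throwaway Python container.
  - name: python:3.11-slim
    entrypoint: bash
    args: ["-c", "pip install -r requirements.txt && pytest"]

  # Build the image and push it to Artifact Registry.
  - name: gcr.io/cloud-builders/docker
    args: ["build", "-t", "${_IMAGE}", "."]
  - name: gcr.io/cloud-builders/docker
    args: ["push", "${_IMAGE}"]

  # Roll the Cloud Run service onto the new image (a new revision).
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args: ["run", "deploy", "geolocation-api",
           "--image", "${_IMAGE}", "--region", "${_REGION}"]

images: ["${_IMAGE}"]
```

Because the trigger is invoked manually and on a weekly Cloud Scheduler job, each run rebuilds the image with freshly downloaded GeoLite2 databases and deploys it as a new Cloud Run revision.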

I hope this overview has given you a good understanding of the project's structure and components. In the video, I will provide a more detailed walkthrough of each piece, explaining the code and highlighting essential concepts.

Make sure to watch the video until the end. Thank you for joining me on this journey. Please don't hesitate to leave comments or reach out to me directly. Let's get started!
