Kene Ojiteli

Building and Deploying a Cloud-Native FastAPI Student Tracker App with MongoDB, Kubernetes, and GitOps

I once carried out a skills gap analysis on myself as a DevOps engineer, looking for my next challenging opportunity. I realised that although I had built smaller projects, I hadn’t yet executed a production-grade, full-blown cloud-native project that combined all the essential DevOps practices.

That became my mission: to design, deploy, and manage a FastAPI-based Student Tracker application with a MongoDB backend, but with a strong focus on DevOps functionalities rather than frontend appearance.

This project allowed me to bring together containerization, Kubernetes, Helm, CI/CD, GitOps, monitoring, and observability into one workflow.

Outline.

  • Prerequisites.
  • Key Terms & Components.
  • Step-by-Step Process.
  • Challenges and Fixes.

Prerequisites.

  • A code editor (I used VS Code).
  • A terminal.
  • Optionally, a cloud provider to provision an instance.
  • Knowledge of Docker and Kubernetes.

Key Terms & Components.

  • Docker: a tool for packaging applications and their dependencies into containers so they can be built, tested, and deployed consistently.
  • Kubernetes: used for automating the deployment, scaling, and management of containerised applications.
  • Ingress: a Kubernetes resource that routes HTTP and HTTPS traffic entering the cluster through a single entry point to different services inside the cluster.
  • Helm & Helm charts: Helm is the package manager for Kubernetes, used to manage Kubernetes applications. Helm charts are used to define, install, and upgrade even the most complex Kubernetes applications.
  • GitOps: a framework for managing cloud-native infrastructure and applications by using Git as the single source of truth for the desired state of your system.
  • ArgoCD (a GitOps deployment tool for Kubernetes): a Kubernetes controller that continuously monitors running applications and compares their current, live state against the desired target state specified in the Git repo.
  • Monitoring & Observability: Monitoring involves tracking known system metrics to detect when something is wrong, while observability is a deeper, more advanced capability that allows you to understand the internal state of a system by correlating logs, metrics, and traces to diagnose the why and how behind an issue.
  • Vault server: a tool that allows you to manage secrets safely. Secrets are sensitive pieces of information such as digital certificates, database credentials, passwords, API keys, and encryption keys.

Step-by-Step Process.
This is a full-blown project in which I update my progress at each stage, and the process includes the following phases:

  • Testing the application locally: This stage can be carried out on your local machine, or you can leverage a VM on any cloud provider (I explored both methods, but I use an AWS EC2 instance throughout the project).

    • While provisioning the instance, I added a script to install every tool I need for the project (git, docker, kubectl, helm, kind, etc.), and I verified the installations by checking their versions.
    • I cloned the application's repository and navigated into the project folder.
    • To test locally, I installed Python and created a virtual environment (an isolated environment on my computer for running and testing my Python app). After creating the virtual environment, I activated it and installed the application's dependencies from the requirements.txt file.
    • I exported my Vault credentials via the CLI and ran the app with uvicorn.
    • I accessed the application in my browser at http://<EC2_public_ip>:8000 and registered. The commands for this stage are sketched below.
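
Here is a minimal sketch of this stage, assuming the app reads its Vault credentials from environment variables and that the FastAPI entry point lives in app/main.py; the repository URL, variable names, and module path are illustrative placeholders, not necessarily the project's exact values.

```bash
# Minimal local test run (placeholders: repo URL, Vault variable names, module path)
git clone https://github.com/<your-username>/student-tracker.git
cd student-tracker

python3 -m venv venv              # create an isolated virtual environment
source venv/bin/activate          # activate it
pip install -r requirements.txt   # install the app's dependencies

# Export the Vault credentials the app expects (names assumed for illustration)
export VAULT_ADDR="https://<your-vault-server>:8200"
export VAULT_TOKEN="<your-vault-token>"

# Run the app, then browse to http://<EC2_public_ip>:8000
uvicorn app.main:app --host 0.0.0.0 --port 8000
```
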
  • Containerising the application and pushing to Docker Hub: This stage involves building a Docker image from a Dockerfile and pushing it to a registry (Docker Hub).

    • I created a Dockerfile, which serves as a set of instructions for building a Docker image.
    • I built an image from the Dockerfile and created a container (a running instance of the built image), passing in the required credentials.
    • I accessed my app and successfully updated my progress.
    • Pushing to Docker Hub requires creating a repository in my Docker Hub account and a successful login from my CLI before pushing the image to the repository, as sketched below.
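
The Docker commands for this stage look roughly like the following; the image name, credentials, and Docker Hub repository are placeholders.

```bash
# Build an image from the Dockerfile in the current directory
docker build -t student-tracker:v1 .

# Create a container from the image, passing the required credentials at runtime
docker run -d -p 8000:8000 \
  -e VAULT_ADDR="https://<your-vault-server>:8200" \
  -e VAULT_TOKEN="<your-vault-token>" \
  student-tracker:v1

# Log in to Docker Hub, tag the image with the repository name, and push it
docker login -u <dockerhub-username>
docker tag student-tracker:v1 <dockerhub-username>/student-tracker:v1
docker push <dockerhub-username>/student-tracker:v1
```
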
  • Setting up a Kubernetes cluster (using Kind): Here, I used Kind (short for Kubernetes in Docker, a tool that runs local Kubernetes clusters using Docker containers as nodes) to create a cluster. Working with Kind requires Docker to be installed and a Kind configuration file.

    • I created a cluster named kene-demo-cluster that has a control-plane node and a worker node, as sketched below.
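
A Kind configuration along these lines produces the two-node cluster described above; the exact config file used in the project may contain additional settings (for example, port mappings).

```bash
# Kind config with one control-plane node and one worker node
cat > kind-config.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
EOF

kind create cluster --name kene-demo-cluster --config kind-config.yaml
kubectl get nodes                  # shows the control-plane and worker nodes
kubectl config current-context     # prints kind-kene-demo-cluster
```
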
  • Deploying the application to the Kubernetes cluster: I exposed my application via an Ingress and deployed an ingress controller in my Kind cluster.

    • First, I created my manifest files (namespace, secret, deployment, service, and ingress).
    • Then, I applied the manifests to the cluster and deployed an NGINX ingress controller.
    • I listed all the resources created in each namespace using the kubectl get all -n <namespace> command.
    • I accessed the application with the ingress host and updated my progress as usual. A sketch of these commands follows this list.
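
Assuming the manifests sit in a k8s/ folder and the app runs in a namespace called student-tracker (both names are illustrative), the commands look roughly like this; the ingress-nginx manifest URL is the one the Kind documentation points to.

```bash
# Apply the application manifests
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/secret.yaml -f k8s/deployment.yaml -f k8s/service.yaml -f k8s/ingress.yaml

# Deploy the NGINX ingress controller variant built for Kind clusters
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

# List everything created in each namespace
kubectl get all -n student-tracker
kubectl get all -n ingress-nginx
```
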
  • Deploying the application with Helm: This stage focuses on deploying the student tracker application with a Helm chart, which I created from scratch.

    • I installed Helm and verified the installation by checking Helm's version with the helm version command.
    • I created my Helm chart and navigated to the chart's directory. Note that the chart starts with the default structure of a typical Helm chart.
    • I deleted all the default template files and created new files to customise my chart.
    • I added the NGINX ingress controller repository, updated the Helm repositories, and installed the NGINX ingress controller with Helm.
    • I then updated the Chart.yaml file with my app's details and my template files with the values I specified in the my-values.yaml file.
    • Then, I installed the Helm chart from outside the chart's directory, specifying the path to the chart, the namespace to install it into, and the values file to use.
    • I accessed the app and updated my progress. The Helm commands for this stage are sketched below.
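
Roughly, the commands look like this; the chart, release, and namespace names are placeholders.

```bash
# Scaffold a chart with the default structure, then customise its templates
helm create student-tracker-chart

# Install the NGINX ingress controller from its official Helm repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --create-namespace

# Install the custom chart from outside its directory, with the custom values file
helm install student-tracker ./student-tracker-chart \
  -n student-tracker --create-namespace \
  -f ./student-tracker-chart/my-values.yaml
```
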
  • Implement CI/CD with GitHub Actions: In this stage, I role-play as a DevOps engineer (implementing CI/CD with GitHub Actions to deploy my application to an EC2 instance) and as a developer (adding an admin feature to the application to view the progress of all registered students).

    • To implement a CI/CD pipeline on GitHub Actions, I created my workflow (a YAML file) in the .github/workflows folder and added an event to trigger the pipeline, a deploy job, and the steps to deploy the application.
    • I also added the required credentials as secrets and referenced them where necessary in my workflow. The credentials are the details of the instance to which I deploy my application.
    • Based on my workflow, a push event to the main branch triggers the workflow to deploy the application to an EC2 instance (in my case, it is deployed to an EC2 instance on a different AWS account).
    • I logged into the account my app was deployed to and verified that the application was successfully deployed.
    • I used the public IP and port of the instance my app was deployed to in order to access the app and update my progress. I ensured that the port was allowed as an inbound rule in the instance's security group.
    • As a developer, I added an admin.html file and updated the app/crud.py and app/main.py files.
    • I committed and pushed the changes, which triggered the workflow, and I accessed the application at http://<instance-ip>:<port>/admin. A sketch of such a workflow follows this list.
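
A hypothetical version of such a workflow is shown below. It uses the community appleboy/ssh-action to run deployment commands on the EC2 instance; the secret names (EC2_HOST, EC2_USER, EC2_SSH_KEY) and the deployment script are placeholders, not the project's exact workflow.

```bash
# Write a hypothetical deploy workflow (secret names and deploy script are illustrative)
mkdir -p .github/workflows
cat > .github/workflows/deploy.yml <<'EOF'
name: deploy-to-ec2
on:
  push:
    branches: [main]          # a push to main triggers the pipeline
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy over SSH to the EC2 instance
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USER }}
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            cd ~/student-tracker && git pull
            docker build -t student-tracker:latest .
            docker rm -f student-tracker || true
            docker run -d --name student-tracker -p 8000:8000 student-tracker:latest
EOF
```
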
  • Implement GitOps with ArgoCD: I implemented GitOps with ArgoCD using Git as the single source of truth. Following best practices, this will be done in an entirely new repository (at this point, I will have an application repository and a GitOps repository).

    • I installed ArgoCD with Helm. First, I added the ArgoCD Helm repository, updated the repositories, and configured the server to run in insecure mode (disabling TLS on the ArgoCD API server) before installing it in a namespace called argocd.
    • I verified that all resources in the argocd namespace were up and running.
    • I will be using the ArgoCD UI. To log in, I need to obtain the initial admin password and port-forward the ArgoCD server (specifying --address 0.0.0.0, which listens on all network interfaces of the machine). You can change your password from the UI.
    • Currently, ArgoCD has no record or knowledge of my app; to add it, I use an Application YAML file (I could also add it from the UI or use the CLI).
    • In addition to using ArgoCD as a Kubernetes controller that monitors my Git repository for changes to application and infrastructure configuration, I also created a workflow to build and push a Docker image to Docker Hub, such that the push updates the Helm values.yaml file with the new image repository and tag, and ArgoCD auto-syncs the commit.
    • I created a repository and a personal access token in my Docker Hub account, then added the PAT and my username as secrets in my project's repo.
    • I triggered the workflow by pushing changes to my main branch. The workflow checks out my repo, logs in to Docker Hub, builds the image, scans it with Trivy (a vulnerability scanner), pushes the scanned image to my Docker Hub account, updates the image tag, and pushes the new update to GitHub.
    • The commit step in the pipeline above causes ArgoCD to auto-sync with the Git repo. The ArgoCD setup is sketched below.
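
The ArgoCD setup can be sketched as follows; the GitOps repository URL, chart path, and namespaces are placeholders, and the --set flag shown is the argo-cd chart's way of enabling insecure (non-TLS) mode.

```bash
# Install ArgoCD with Helm, with the API server in insecure (HTTP) mode
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argocd argo/argo-cd -n argocd --create-namespace \
  --set configs.params."server\.insecure"=true

# Fetch the initial admin password and expose the UI on all interfaces
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath='{.data.password}' | base64 -d; echo
kubectl -n argocd port-forward svc/argocd-server 8080:80 --address 0.0.0.0

# Register the app declaratively with an Application manifest
cat <<'EOF' | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: student-tracker
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-username>/student-tracker-gitops.git
    targetRevision: main
    path: student-tracker-chart
    helm:
      valueFiles:
        - my-values.yaml          # point Helm at the custom values file (see Error 2 below)
  destination:
    server: https://kubernetes.default.svc
    namespace: student-tracker
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
EOF
```
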
  • Implement Monitoring with the LGTP (Loki, Grafana, Tempo and Prometheus) stack: In this stage, I modified my source code to expose metrics for Prometheus to scrape, deployed the monitoring tools so that they are managed by ArgoCD, and created the monitoring workloads in a separate namespace. I defined ArgoCD Applications that point to the Helm charts.

    • I used the app-of-apps pattern, where a single parent ArgoCD Application resource manages other child Application resources (instead of manually deploying each application), which in turn manage the actual Kubernetes workloads.
    • I port-forwarded to ArgoCD to view the applications created by the single parent application with kubectl -n <namespace> port-forward <argocd-service-name> 8000:80 --address 0.0.0.0.
    • Here are my complete applications, which consist of kube-prometheus-stack (which includes Prometheus, node exporter, and Grafana), Tempo, Loki, and my student-tracker application.
    • I port-forwarded to access Grafana using the kubectl -n <namespace> port-forward <grafana-service-name> 3000:80 --address 0.0.0.0 command.
    • I added extra configuration to register Tempo, Loki, and Prometheus as Grafana data sources automatically.
    • I went ahead to test the data sources and create dashboards. A sketch of the parent application follows this list.
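
A hypothetical parent Application for the app-of-apps pattern might look like this; the GitOps repository URL, the apps/ path, and the monitoring namespace are placeholders, and the child manifests in that path would define kube-prometheus-stack, Loki, Tempo, and the student-tracker app.

```bash
# Parent "app of apps": one Application whose source folder contains the child Applications
cat <<'EOF' | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: monitoring-stack
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-username>/student-tracker-gitops.git
    targetRevision: main
    path: apps                    # child Application manifests live here
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd             # the child Applications themselves are ArgoCD resources
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
EOF

# Port-forward Grafana (installed by kube-prometheus-stack) on all interfaces
kubectl -n monitoring port-forward svc/<grafana-service-name> 3000:80 --address 0.0.0.0
```
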
  • NB: I made sure to allow inbound traffic in my EC2 instance's security group for all the ports used (one way to do this from the AWS CLI is sketched below).
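
For reference, opening the ports from the AWS CLI looks roughly like this; the security group ID, port list, and 0.0.0.0/0 source range are examples (in practice, restrict the source CIDR where possible).

```bash
# Allow inbound TCP traffic on the ports used in the project (example values)
for port in 8000 3000 8080; do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port "$port" \
    --cidr 0.0.0.0/0
done
```
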

  • Challenges and Fixes: Working on this project exposed me to a whole lot of errors, most of which I was able to resolve after much research.

    • Error 1: I had issues accessing my application via the browser, so I debugged by running an nmap scan to see my open and closed ports; it turned out all my ports were closed except port 22.
    • I patched my deployment with kubectl patch deployment ingress-nginx-controller -n ingress-nginx -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'. This is because the Kind cluster runs in a container, so exposing ports to the EC2 host and beyond won't work unless the pod is attached directly to the host's network.
    • Error 2: I had issues accessing my application on the ArgoCD UI (a nil-pointer error while templating). This happened because, by default, Helm fetches values from the values.yaml file, but I had a custom file named my-values.yaml.
    • I fixed this error by adding default values to the default values.yaml file and specifying the exact values file for ArgoCD to use when deploying the application.
    • Error 3: I had a "module not found" error in my student-tracker logs on ArgoCD. This was caused by a wrong file path in my Dockerfile: I had rearranged my application's directory and failed to update the path.
    • I fixed this error by modifying the Dockerfile so that Python could find the module, along the lines sketched below.
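
The exact change depends on the project's layout, but the idea is to keep the Dockerfile's WORKDIR, COPY paths, and the uvicorn import path consistent. The sketch below is a hypothetical layout (module path app.main:app assumed), not the project's actual Dockerfile.

```bash
# Hypothetical Dockerfile after the directory re-arrangement: COPY, WORKDIR, and the
# "app.main:app" import path must agree so Python can resolve the module
cat > Dockerfile <<'EOF'
FROM python:3.11-slim
WORKDIR /code
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app/ ./app/
EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
EOF
```
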
  • Conclusion: This project was a turning point in my DevOps journey. By building a cloud-native FastAPI Student Tracker and deploying it with Docker, Kubernetes, Helm, CI/CD, GitOps, and monitoring, I gained hands-on experience with the full DevOps lifecycle. It taught me:

    • How to design end-to-end workflows from local testing to production-ready deployments.
    • How GitOps principles simplify cluster management with ArgoCD.
    • How monitoring (logs, metrics, and traces) ties DevOps and observability together.
    • That errors are part of the process; debugging taught me more than smooth deployments ever could.

GitHub Repositories: Application repo, GitOps repo
