I once carried out a skills gap analysis on myself as a DevOps engineer, looking for my next challenging opportunity. I realised that although I had built smaller projects, I hadn’t yet executed a production-grade, full-blown cloud-native project that combined all the essential DevOps practices.
That became my mission: to design, deploy, and manage a FastAPI-based Student Tracker application with a MongoDB backend, but with a strong focus on DevOps functionalities rather than frontend appearance.
This project allowed me to bring together containerization, Kubernetes, Helm, CI/CD, GitOps, monitoring, and observability into one workflow.
Outline.
- Prerequisites.
- Key Terms & Components.
- Step-by-Step Process.
- Challenges and Fixes.
Prerequisites.
- A code editor (I used VS Code).
- A terminal.
- Optionally, a cloud provider to provision an instance.
- Knowledge of Docker and Kubernetes.
Key Terms & Components.
- Docker: a tool used to build, test, and deploy applications in containers.
- Kubernetes: used for automating the deployment, scaling, and management of containerised applications.
- Ingress: a Kubernetes resource, similar in spirit to a Service, that routes HTTP and HTTPS traffic entering the cluster through a single entry point to different services inside the cluster.
- Helm & Helm charts: Helm acts as a package manager for Kubernetes; it is used to manage Kubernetes applications. Helm Charts are used to define, install, and upgrade even the most complex Kubernetes applications.
- GitOps: a framework for managing cloud-native infrastructure and applications by using Git as the single source of truth for the desired state of your system.
- ArgoCD (a deployment platform for Kubernetes): a Kubernetes controller that continuously monitors running applications and compares their current, live state against the desired target state (as specified in the Git repo).
- Monitoring & Observability: Monitoring involves tracking known system metrics to detect when something is wrong, while observability is a deeper, more advanced capability that allows you to understand the internal state of a system by correlating logs, metrics, and traces to diagnose the why and how behind an issue.
- Vault server: a tool that allows you to manage secrets safely. Secrets are sensitive information such as digital certificates, database credentials, passwords, API keys, and encryption keys.
Step-by-Step Process.
This is a full-blown project where I update my progress in each stage, and the process includes the following phases:
- Testing the application locally: This stage can be carried out locally, or you can leverage a VM on any cloud provider (I explored both methods, but I will use an AWS EC2 instance throughout the project).
  - While provisioning the instance, I added a script to install every tool I need for the project, including git, docker, kubectl, helm, kind, etc. I verified the installations by checking their versions.
  - I cloned the application's repository and navigated to the folder.
  - To test locally, I installed Python and created a virtual environment (an isolated environment on my computer to run and test my Python app). After creating the virtual environment, I activated it and installed the necessary dependencies for my application from the `requirements.txt` file.
  - I exported my Vault credentials via the CLI and ran the app with `uvicorn`.
  - I accessed the application in my browser at `http://<EC2_public_ip>:8000` and registered. A sketch of these commands follows below.
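The commands for this stage look roughly like the sketch below. It assumes a `requirements.txt` at the repository root and an app entry point at `app.main:app`; the Vault environment variable names are placeholders, not necessarily the ones the app actually reads.

```bash
# Create and activate a virtual environment, then install the dependencies
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# Export Vault credentials via the CLI (variable names are placeholders)
export VAULT_ADDR="https://<vault-server>:8200"
export VAULT_TOKEN="<your-vault-token>"

# Run the app on all interfaces so it is reachable at http://<EC2_public_ip>:8000
uvicorn app.main:app --host 0.0.0.0 --port 8000
```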
- Containerising the application and pushing to Docker Hub: This stage involves building a Docker image through a Dockerfile and pushing it to a repository (Docker Hub).
  - I created a Dockerfile, which serves as a set of instructions to build a Docker image.
  - I built an image from the Dockerfile and created a container (a running instance of the built image) from the image with the required credentials.
  - I accessed my app and successfully updated my progress.
  - Pushing to Docker Hub requires creating a repository in my Docker Hub account and logging in to it from my CLI before pushing to the repository. The main commands are sketched below.
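A rough sketch of the build, run, and push commands, assuming the image is named `<dockerhub-username>/student-tracker`; the environment variable names are placeholders for whatever credentials the container needs.

```bash
# Build the image from the Dockerfile in the repository root
docker build -t <dockerhub-username>/student-tracker:v1 .

# Run a container from the image with the required credentials (placeholder variable names)
docker run -d -p 8000:8000 \
  -e VAULT_ADDR="https://<vault-server>:8200" \
  -e VAULT_TOKEN="<your-vault-token>" \
  <dockerhub-username>/student-tracker:v1

# Log in to Docker Hub and push the image to the repository created for it
docker login -u <dockerhub-username>
docker push <dockerhub-username>/student-tracker:v1
```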
- Setting up a Kubernetes cluster (using Kind): Here, I used Kind (short for Kubernetes in Docker, a tool that runs local Kubernetes clusters using Docker containers as nodes) to create a cluster. Working with Kind requires Docker to be installed and a Kind configuration file.
  - I created a cluster named `kene-demo-cluster` that has a control-plane node and a worker node. A sketch of the configuration follows below.
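A minimal sketch of the Kind setup. The exact configuration file isn't shown in this post, so this is just the two-node shape described above (a real config for ingress on Kind typically also adds `extraPortMappings`).

```bash
# Kind configuration with one control-plane node and one worker node
cat > kind-config.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
EOF

# Create the cluster and confirm that both nodes are Ready
kind create cluster --name kene-demo-cluster --config kind-config.yaml
kubectl get nodes
```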
- Deploying the application to the Kubernetes cluster: I exposed my application via an Ingress and created an ingress controller for my Kind cluster.
  - First, I created my manifest files (namespace, secret, deployment, service, and ingress files).
  - Then, I applied the manifests to the cluster and created an `nginx ingress controller`.
  - I retrieved all the resources I created in each namespace using the `kubectl get all -n <namespace>` command.
  - I accessed the application with the ingress host and updated my progress as usual. The main commands are sketched below.
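Roughly, the commands for this stage look like the sketch below. The manifest file names and the `student-tracker` namespace are assumptions, and the ingress-nginx manifest URL is one common way to install the controller on Kind rather than necessarily the one used here.

```bash
# Apply the manifests (file names and namespace are illustrative)
kubectl apply -f namespace.yaml
kubectl apply -n student-tracker -f secret.yaml -f deployment.yaml -f service.yaml -f ingress.yaml

# Install the NGINX ingress controller variant published for Kind clusters
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

# Check everything that was created in each namespace
kubectl get all -n student-tracker
kubectl get all -n ingress-nginx
```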
- Deploying the application with Helm: This stage focuses on deploying the student tracker application with Helm charts. I created the Helm chart from scratch.
  - I installed Helm and verified the installation by checking Helm's version with the `helm version` command.
  - I created my Helm chart and navigated to the chart's directory. Note that the chart has the default structure of a typical Helm chart.
  - I deleted all the default template files and created new files to customise my chart.
  - I added the Nginx ingress controller repository, updated the Helm repositories, and installed the Nginx ingress controller with Helm.
  - I went ahead to update the `Chart.yaml` file with my app details and my template files with the values I specified in the `my-values.yaml` file.
  - Then, I installed the Helm chart (from outside the chart's directory, specifying the path to the chart's directory, the namespace the chart will be created in, and the values file to use). The commands are sketched below.
  - I accessed the app and updated my progress.
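A sketch of the Helm commands for this stage. The chart directory name, release names, and namespaces are assumptions; `my-values.yaml` is the custom values file mentioned above.

```bash
# Scaffold the chart (the default templates were then replaced with custom ones)
helm create student-tracker-chart

# Install the NGINX ingress controller from its official chart repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# Install the application chart from outside the chart directory, with the custom values file
helm install student-tracker ./student-tracker-chart \
  --namespace student-tracker --create-namespace \
  -f ./student-tracker-chart/my-values.yaml
```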
- Implementing CI/CD with GitHub Actions: In this stage, I role-play as a DevOps engineer (implementing CI/CD with GitHub Actions to deploy my application to an EC2 instance) and as a developer (adding an admin feature to the application to view the progress of all registered students).
  - To implement a CI/CD pipeline on GitHub Actions, I created my workflow (a YAML file) in the `.github/workflows` folder and added an event to trigger the pipeline, a deploy job, and steps to deploy the application (a sketch follows after this list).
  - I also added the required credentials as secrets and referenced them where necessary in my workflow. The credentials are the details of the instance to which I will deploy my application.
  - Based on my workflow, a push event to the main branch triggers the workflow to deploy the application to an EC2 instance (in my case, it is deployed to an EC2 instance in a different AWS account).
  - I logged into the account my app was deployed to and verified that the application was successfully deployed.
  - I used the instance's public IP and port to access the app and update my progress. I ensured that the port was allowed as an inbound rule in the instance's security group.
  - As a developer, I added an `admin.html` file and updated the `app/crud.py` and `app/main.py` files.
  - I committed and pushed the changes, which triggered the workflow, and I accessed the application at `http://<instance-ip>:<port>/admin`.
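A minimal sketch of what such a workflow file might look like, written out here with a heredoc. The secret names, the SSH deployment action, and the deployment script are all assumptions for illustration, not the exact workflow used in this project.

```bash
# One possible shape of the deploy workflow (all names below are placeholders)
mkdir -p .github/workflows
cat > .github/workflows/deploy.yml <<'EOF'
name: Deploy to EC2
on:
  push:
    branches: [ main ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy over SSH
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USER }}
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            cd ~/student-tracker && git pull origin main
            docker build -t student-tracker:latest .
            docker rm -f student-tracker || true
            # credentials are supplied via an env file already on the instance
            docker run -d --name student-tracker -p 8000:8000 \
              --env-file ~/student-tracker.env student-tracker:latest
EOF
```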
- Implementing GitOps with ArgoCD: I implemented GitOps with ArgoCD, using Git as the single source of truth. Following best practices, this is done in an entirely new repository (at this point, I have an application repository and a GitOps repository).
  - I installed ArgoCD with Helm. First, I added the ArgoCD Helm repository, updated the repositories, and configured the server to run in insecure mode (disabling TLS/SSL and potentially other security measures) before installing it in a namespace called argocd.
  - I verified that all resources in the argocd namespace were up and running.
  - I will be using the ArgoCD UI. To log in, I need to obtain the initial password and port-forward (specifying `--address 0.0.0.0`, which listens on all network interfaces of the machine). You can change your password from the UI.
  - Currently, ArgoCD has no record or knowledge of my app; to add my app, I will use an Application YAML file (I could also add it from the UI or use the CLI). A sketch of these steps follows after this list.
  - In addition to using ArgoCD as a Kubernetes controller that monitors my Git repository for changes to application and infrastructure configurations, I also created a workflow to build and push a Docker image to Docker Hub, such that the push updates the Helm `values.yaml` file with a new image repository and tag, and ArgoCD auto-syncs the commit.
  - I created a repository and a personal access token in my Docker Hub account, then added the PAT and my username as secrets in my project's repo.
  - I triggered my workflow by pushing changes to my main branch. The workflow checks out my repo, logs in to Docker Hub, builds the image, scans the image with Trivy (a vulnerability scanning tool), pushes the scanned image to my Docker Hub account, updates the image tag, and pushes the new update to GitHub.
  - The commit step from the pipeline above causes ArgoCD to auto-sync with the Git repo.
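A sketch of the ArgoCD setup under some assumptions: the chart repository URL and initial-admin-secret name are ArgoCD defaults, the insecure-mode flag applies to recent versions of the chart, and the Application's repo URL and path are placeholders.

```bash
# Install ArgoCD with Helm in the argocd namespace, with the server in insecure mode
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argocd argo/argo-cd \
  --namespace argocd --create-namespace \
  --set 'configs.params.server\.insecure=true'

# Verify the resources and fetch the initial admin password
kubectl get all -n argocd
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d; echo

# Expose the UI on all interfaces (run in a separate terminal; this command blocks)
kubectl -n argocd port-forward svc/argocd-server 8080:80 --address 0.0.0.0

# Register the app declaratively with an Application manifest (repo URL and path are placeholders)
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: student-tracker
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<user>/<gitops-repo>.git
    targetRevision: main
    path: charts/student-tracker
  destination:
    server: https://kubernetes.default.svc
    namespace: student-tracker
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
EOF
```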
- Implementing Monitoring with the LGTP (Loki, Grafana, Tempo and Prometheus) stack: In this stage, I modified my source code to expose metrics for Prometheus to scrape, deployed the monitoring tools as ArgoCD-managed applications, and created the monitoring workloads in a different namespace. I defined ArgoCD Applications that point to the Helm charts.
  - I used the `app of apps` pattern, where a single parent ArgoCD Application resource manages other child Application resources (instead of manually deploying each application), and those in turn manage the actual Kubernetes workloads. A sketch of the parent Application follows after this list.
  - I port-forwarded to ArgoCD to view the applications created by the single parent application with `kubectl -n <namespace> port-forward <argocd-service-name> 8000:80 --address 0.0.0.0`.
  - My complete set of applications consists of kube-prometheus-stack (which includes Prometheus, node exporter, and Grafana), Tempo, Loki, and my student-tracker application.
  - I port-forwarded to access Grafana using the `kubectl -n <namespace> port-forward <grafana-service-name> 3000:80 --address 0.0.0.0` command.
  - I added extra configuration to add Tempo, Loki, and Prometheus as data sources automatically.
  - I went ahead to test the data sources and create dashboards.
NB: I made sure to allow inbound traffic for all the ports used in my EC2 instance security group.
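A sketch of what the parent "app of apps" Application might look like. The repo URL, the `apps/` path holding the child Application manifests, and the `monitoring` namespace for Grafana are assumptions.

```bash
# Parent Application: points at a folder of child Application manifests in the GitOps repo
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: parent-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<user>/<gitops-repo>.git
    targetRevision: main
    path: apps    # child Applications: kube-prometheus-stack, Loki, Tempo, student-tracker
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd    # the child Application resources themselves live in the argocd namespace
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
EOF

# Reach the Grafana UI from outside the instance (service name and namespace assumed)
kubectl -n monitoring port-forward svc/<grafana-service-name> 3000:80 --address 0.0.0.0
```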
Challenges and Fixes.
Working on this project exposed me to a whole lot of errors, most of which I was able to resolve after much research.
- Error 1: I had issues accessing my application via the browser. I debugged by running an nmap scan to see my open and closed ports; it turned out all my ports were closed except port 22.
  - I patched my deployment with `kubectl patch deployment ingress-nginx-controller -n ingress-nginx -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'`. This is because the Kind cluster runs in a container, so exposing ports to the EC2 host and beyond won't work unless the pod is directly attached to the host's network.
- Error 2: I had issues accessing my application on the ArgoCD UI. This was because, by default, Helm fetches values from the `values.yaml` file, but I have a custom file named `my-values.yaml`.
  - I fixed this error by adding default values to the default `values.yaml` file and specifying the exact Helm values file to be used when deploying the application with ArgoCD (see the sketch after this list).
- Error 3: I had a "module not found" error in my student-tracker logs on ArgoCD. This was caused by a wrong file path in my Dockerfile: I had rearranged my application's directory and failed to update the path.
  - I fixed this error by modifying the file so that Python could find the module.
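For Error 2, the fix boils down to pointing ArgoCD's Helm source at the custom values file. Below is a sketch of the relevant field, shown as a live patch for brevity; the durable, GitOps-friendly version is to set the same `spec.source.helm.valueFiles` entry in the Application manifest in the GitOps repo.

```bash
# Tell ArgoCD to render the chart with my-values.yaml (path is relative to the chart directory)
kubectl -n argocd patch applications.argoproj.io student-tracker --type merge \
  -p '{"spec":{"source":{"helm":{"valueFiles":["my-values.yaml"]}}}}'
```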
Conclusion.
This project was a turning point in my DevOps journey. By building a cloud-native FastAPI Student Tracker and deploying it with Docker, Kubernetes, Helm, CI/CD, GitOps, and monitoring, I gained hands-on experience with the full DevOps lifecycle. It taught me:
- How to design end-to-end workflows from local testing to production-ready deployments.
- How GitOps principles simplify cluster management with ArgoCD.
- How monitoring (logs, metrics, and traces) ties DevOps and observability together.
- That errors are part of the process; debugging taught me more than smooth deployments ever could.
GitHub Repositories: Application repo, GitOps repo