Tech bros,
We meet again!
I’ve been getting my hands dirty lately with something fun and lowkey production-ish: I built 🔧 a simple Python system monitoring web app, containerized it with Docker, shipped it to AWS ECR (provisioned with Python's Boto3), spun up an EKS cluster with Terraform, deployed it with kubectl, and wired up ArgoCD for full-on GitOps.
Whole thing got me feeling like a software engineer 🧑‍💻👷🏿‍♂️👷🏿‍♂️ now, sike! Because I am one now! 🙂‍↕️🙂‍↕️
Yeah, that's right!! My local laptop is basically my mini DevOps playground right now. 🤣🤣
Let's dive into this with phases.
Here is an architecture diagram I crafted in draw.io; use it as a flow reference.
Phase 1: My Python Flask Monitoring App
So, let’s start at the very beginning.
I wrote a Python web app that uses psutil to grab CPU load, memory usage, disk usage, all that good system info, and displays it with Flask, wrapped in some plain HTML + CSS.
Lightweight, simple, but does the job. ✅✅
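If you're curious what that looks like, here's a minimal sketch of the idea (hypothetical route and inline template, not the exact code in my repo):
# Minimal sketch: Flask + psutil, one route that renders current system stats
from flask import Flask, render_template_string
import psutil

app = Flask(__name__)

TEMPLATE = """
<h1>System Monitor</h1>
<p>CPU: {{ cpu }}%</p>
<p>Memory: {{ mem }}%</p>
<p>Disk: {{ disk }}%</p>
"""

@app.route("/")
def index():
    # psutil pulls live numbers from whatever host (or container) it runs on
    cpu = psutil.cpu_percent(interval=1)
    mem = psutil.virtual_memory().percent
    disk = psutil.disk_usage("/").percent
    return render_template_string(TEMPLATE, cpu=cpu, mem=mem, disk=disk)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)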
Phase 2: Containerize Everything
Next step: spun up Docker Desktop on my local machine and whipped out a Dockerfile with the following config:
# Slim Python base image to keep the final image small
FROM python:3.9-slim-buster
WORKDIR /app
# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the app code
COPY . .
# Make Flask listen on all interfaces, not just localhost
ENV FLASK_RUN_HOST=0.0.0.0
EXPOSE 5000
CMD ["flask", "run"]
Built the image, spun up the container locally aaand boom!! My app was alive on localhost:5000.
That moment had me smiling like a goat 🐐🐐 cuz I'm the GOAT! (See what I did there? 😏)
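For reference, the local build-and-run loop was something like this (the image tag here is just a placeholder I'm using for the write-up):
# Build the image and run it locally, mapping container port 5000 to the host
docker build -t my-cloud-app .
docker run --rm -p 5000:5000 my-cloud-app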
Phase 3: Pushed the image to AWS ECR
Next up, I didn’t want this image sitting only on my laptop, so I had to put it somewhere. On a whim, I Googled other ways to provision resources, stumbled on boto3, and decided to give it a try: I used boto3 in Python to build out an ECR repository in my AWS account.
# You can learn more from the boto3 docs:
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ecr/client/create_repository.html
import boto3

ecr_client = boto3.client('ecr')
repository_name = "my-cloud-app-repo"
response = ecr_client.create_repository(repositoryName=repository_name)
repository_uri = response['repository']['repositoryUri']
print(repository_uri)

# I just wanted to try it out and get a feel for how it works. I still prefer my
# Terraform approach, since this needs a lot of API calls, which could slow me down or throw me off the plan.
Logged in with the CLI, tagged my image, pushed it up to ECR — now my container lives in the cloud where it belongs.
docker push and aws ecr get-login-password were my witnesses.
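For anyone following along, the sequence looks roughly like this (account ID and region match the ECR URI you'll see later in my deployment.yaml; the local tag is a placeholder):
# Authenticate Docker against ECR, then tag and push the image
aws ecr get-login-password --region eu-central-1 | \
  docker login --username AWS --password-stdin 194722436853.dkr.ecr.eu-central-1.amazonaws.com

docker tag my-cloud-app:latest 194722436853.dkr.ecr.eu-central-1.amazonaws.com/my-cloud-app-repo:latest
docker push 194722436853.dkr.ecr.eu-central-1.amazonaws.com/my-cloud-app-repo:latest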
So far: Python ==> Docker ==> ECR. ✅✅
Stay with me now!! Don't go anywhere, keep scrolling!!! 😠
Next up: Part 2! Terraform, EKS and Deployments!!
💡 Why I’m doing this:
I’m not just playing around!! I actually want recruiters, hiring managers, or whoever’s reading to see that I know how to build something end-to-end.
I understand how code moves from my IDE ➡️ to a container ➡️ to a registry ➡️ to Kubernetes ➡️ and gets managed with GitOps.
If you’re gonna call yourself a Cloud/DevOps engineer, you can talk the talk, but you better walk the talk. So I’m walking it, one cluster at a time. (Admit it, I'm sleek with it)😏😏
Part 2: Terraform + EKS: Bringing My Non-Existent Cluster to Life
Alright, once my ECR image was chilling safely in AWS, it was time to spin up the big boys' playground: an EKS cluster, the big guns for container orchestration.
No point clicking around in the AWS console; that's a rookie move.
I went full Infrastructure-as-Code with Terraform, because real engineers automate repeatable pain, not just deployments.
The big boy tools ⚔️⚔️
I wrote my main.tf with two key modules:
The VPC module → pulled from the official terraform-aws-modules/vpc
# VPC MODULE, pulled from the official terraform-aws-modules/vpc module
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.8.1"

  name = "eks-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["eu-central-1a", "eu-central-1b", "eu-central-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
  single_nat_gateway = true

  tags = {
    Name = "eks-vpc"
  }
}
The EKS module → terraform-aws-modules/eks
# EKS MODULE: likewise, this one pulls from the official terraform-aws-modules/eks module
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.13.0"

  cluster_name    = var.cluster_name
  cluster_version = var.cluster_version

  cluster_endpoint_public_access = true

  # Link to VPC module output
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_groups = {
    default = {
      desired_capacity = var.desired_capacity
      max_capacity     = var.max_capacity
      min_capacity     = var.min_capacity
      instance_types   = var.node_instance_types
    }
  }
}
My VPC config gave me 3 AZs, private & public subnets, NAT Gateway; you know, the usual building blocks to keep traffic flowing but tight.
Then the EKS module did the heavy lifting:
It spun up the control plane, worker nodes, IAM roles, the whole shebang.
RBAC Headaches 🥀 & IAM Headbutts 💔
Here’s where the "fun" started: RBAC.
As you can see, I’m an IAM user (Ak_DevOps), and Kubernetes does not care if your IAM user has AdminAccess in AWS. Cold and ruthless, it didn't even care that I hadn't eaten while working on this project. 💔💔
K8s RBAC is its own beast. 🧛🧛
So there I was, cluster up, kubectl get nodes… Access Denied.
Can’t list nodes. Can’t touch aws-auth. Nothing. 😫😥
The Fix? The Right AccessEntry!
First, I tried to wire up system:masters.
AWS EKS: “Nah bro, system: prefixes are off-limits.”
Cool cool cool.
So I fixed the Terraform config: instead of shoehorning system:masters, I used the AWS-provided cluster access policy AmazonEKSClusterAdminPolicy and associated it properly in access_entries.
A couple terraform apply runs later, my user got mapped, RBAC was finally happy, and kubectl get nodes worked. We were good!! 🤝🤝
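For anyone hitting the same wall, the shape of that fix inside the EKS module block looked roughly like this (account ID is a placeholder; attribute names follow the terraform-aws-modules/eks v20 access-entry support):
# Rough shape of the access-entry fix inside my eks module block (placeholder account ID)
access_entries = {
  ak_devops_admin = {
    principal_arn = "arn:aws:iam::<ACCOUNT_ID>:user/Ak_DevOps"

    policy_associations = {
      cluster_admin = {
        policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
        access_scope = {
          type = "cluster"
        }
      }
    }
  }
}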
Sometimes it’s not about who you are, but what policy ARN you carry. 😶🌫️😶🌫️Lesson learned.
Phase 4: Deploy My App!
With the cluster breathing, it was time to launch my container in its new home.
✅ I wrote deployment.yaml → pointed it to my ECR image.
✅ I wrote service.yaml → exposed it as a LoadBalancer on AWS.
✅ Ran kubectl apply -f → watched pods spin up, nodes pull my image, and the service get a public IP.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-monitoring-app
  labels:
    app: python-monitoring-app
spec:
  replicas: 2                  # Running 2 copies so it’s reliable and handles load
  selector:
    matchLabels:
      app: python-monitoring-app
  template:
    metadata:
      labels:
        app: python-monitoring-app
    spec:
      containers:
        - name: monitoring-app
          image: 194722436853.dkr.ecr.eu-central-1.amazonaws.com/my-cloud-app-repo:latest # My ECR image URL
          ports:
            - containerPort: 5000      # The port my Flask app listens on inside the container
          env:
            - name: FLASK_RUN_HOST
              value: "0.0.0.0"         # Make Flask listen on all interfaces, not just localhost
          resources:
            requests:
              memory: "128Mi"          # Minimum resources reserved for this container
              cpu: "100m"
            limits:
              memory: "256Mi"          # Max the container can use so it doesn't hog cluster memory
              cpu: "200m"
And my service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: python-monitoring-service
spec:
  type: LoadBalancer              # AWS will spin up a real load balancer for me
  selector:
    app: python-monitoring-app    # Links the service to pods with this label
  ports:
    - protocol: TCP
      port: 80                    # External port people hit to reach the app
      targetPort: 5000            # Internal port the app listens on
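Applying the manifests and grabbing the public endpoint was roughly:
# Apply both manifests, then wait for AWS to hand the service a public hostname
kubectl apply -f deployment.yaml -f service.yaml
kubectl get pods
kubectl get svc python-monitoring-service   # the EXTERNAL-IP column is the ELB hostname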
Boom! My Flask app, alive on the internet, powered by EKS.
One tiny Python script, now scaling in the cloud.
Upcoming: Part 3!! ArgoCD — GitOps or Go Home!
So far:
✅ Python Flask app
✅ Docker + ECR
✅ Terraform VPC + EKS
✅ Deployed via kubectl
Next up: ArgoCD.
How I turned this into a proper GitOps pipeline!! fully automated, push code ➡️ watch the cluster sync ➡️ self-healing infra.
And oh, the Dex server meltdown on t3.small?
Yeah… the late-night detective story deserves its own spotlight.
Part 3!! GitOps the Smart Way: ArgoCD + the Dex Saga
So my Python Flask app was up on EKS. Cool.
But let’s be real! kubectl apply manually? Nah.
This is 2025, not 2015😑.
I wanted proper GitOps! So I needed ArgoCD to watch my Git repo like a hawk and sync changes automatically.
Push code → Argo picks it → deploys → cluster stays true to Git🤞🤞.
Clean, declarative, bulletproof.
Talk about ArgoCD Loyalty to Git!! Not sure you can relate 🤣🤣.
Installing ArgoCD
I spun up a new argocd namespace:
kubectl create namespace argocd
Then installed ArgoCD with the official manifests:
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
All good, riiiight?
So far so good.
Pods popped up: Application Controller, Repo Server, API Server…
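(I was watching them come up with a plain old:)
# Watch the ArgoCD components start in the argocd namespace
kubectl get pods -n argocd -w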
But then came Dex!! ArgoCD’s SSO engine. I call it "The Humbler" because, boy, was I humbled! 😭😭
Dex: The Tiny Pod that Broke my Night🤬🤬
The argocd-dex-server kept dying. CrashLoopBackOff.
Logs? Useless. Google? Meh!!
AWS forums? All dead ends!!
Reddit? I found exactly one person who'd had the issue a year ago, but he was never answered and never resolved it; I guess he quit halfway. Totally excusable! I started to crash out. I'd come too far to fail 😭😭
I doubled my nodes: desired_capacity = 3. No dice.
kubectl describe → OOMKilled.
Then I did what DevOps folks do when the docs fail: I took my phone and dialed up Claude (👀).
Solution? Bigger Instances.
Claude said: “Your t3.small nodes don’t have enough RAM. Dex needs more headroom.”
Fair.
So I tweaked my Terraform:
instance_types = ["t3.medium"]
Re-applied. Watched the nodes drain and come back bigger.
Dex? Back to life instantly😮💨😮💨.
Sometimes more RAM fixes everything 😮💨😮💨.
Expose ArgoCD the Smart Way
By default, ArgoCD’s argocd-server is a ClusterIP — internal only.
So I patched it to a LoadBalancer:
kubectl edit svc argocd-server -n argocd
Changed type: ClusterIP → type: LoadBalancer.
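(If you'd rather skip the interactive editor, a one-liner patch does the same thing; quick sketch:)
# Same change without opening an editor
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'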
A shiny new ELB spun up — now my ArgoCD UI was live!!!
Admin Login
Got the admin password:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
Logged in as admin. Changed the password.
Safe and sound!!
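If you prefer the argocd CLI for that rotation, it goes roughly like this (hostname and password are placeholders):
# Log in against the new ELB endpoint, then rotate the initial admin password
argocd login <ELB-HOSTNAME> --username admin --password <INITIAL_PASSWORD>
argocd account update-password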
Connect to GitHub! The GitOps Cycle or Loop, or whatever you want to call it.
I didn’t want to click through the UI to add the repo, because that would reduce the little aura I had left after the Dex issue 🥲🥲, so I did it the proper way:
- Wrote an argo-app.yaml config that points to my GitHub repo:
# This defines my ArgoCD app: it pulls manifests from my GitHub repo and syncs them to my EKS cluster. You could also do this through the console or manually.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-cloud-monitoring-app          # The name of my ArgoCD app
  namespace: argocd                      # Must match the ArgoCD namespace we created earlier
spec:
  project: default
  source:
    repoURL: 'https://github.com/AkingbadeOmosebi/my-cloud-monitoring-app' # My GitHub repo
    targetRevision: HEAD                 # Branch to track (HEAD = default branch)
    path: manifests                      # Path to my k8s manifests (deployment.yaml and service.yaml) inside the repo
  destination:
    server: 'https://kubernetes.default.svc' # EKS cluster endpoint from inside ArgoCD
    namespace: default
  syncPolicy:                            # This is where every deployment gets synced
    automated:
      prune: true                        # Remove old resources that are no longer in Git
      selfHeal: true                     # Revert drift automatically
Applied it:
kubectl apply -f argo-app.yaml
Watched ArgoCD pick up my deployment.yaml & service.yaml → deploy my app → match the desired state.
I could feel a different sensation, it was Aura. Aura was rising from everywhere. 😎😎
The Moment of Truth: Scaling!!
I tested it live (rough sketch of the loop after the list):
- Pushed an update to replicas: 2 → Argo synced.
- Changed it to 4 → Argo synced.
- Pushed 6 → Argo synced and spun up 6 pods, just like that.
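The whole loop, roughly (the manifests/ path matches the ArgoCD app config above; the sed edit is just a sketch of changing the replica count):
# Bump the replica count in Git; ArgoCD notices and reconciles the cluster
sed -i 's/replicas: 2/replicas: 4/' manifests/deployment.yaml
git add manifests/deployment.yaml
git commit -m "Scale monitoring app to 4 replicas"
git push

# Watch the new pods appear as ArgoCD syncs
kubectl get pods -w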
This is how CD should feel: Hands off, Git is the source of truth, Argo enforces it.
Such a beautiful relationship🥹🥹. I wish mine was like that too 🥲🥲, couldn't even wait for me to scale up 💔🥀.
Anyways, it is what it is!
Final Thoughts
So yeah, I built and turned:
A Python Flask app
Containerized with Docker
Pushed to ECR
Deployed to EKS (Infra-as-Code with Terraform)
Managed through ArgoCD
Into a real Continuous Deployment pipeline, all driven from my public Git repo.
Big lessons for recruiters, tech leads and tech enthusiasts reading this:
I don’t just build, I debug.
I don’t fear YAML, I automate with it.
I know how infra works, from IAM quirks to cluster IPs.
And when things break at 2AM? I figure it out, document it so the next version is better, and sleep at 3:30AM 😃😃.
If you read this far, just pause and imagine. Imagine what I can do for your team when it’s not just me in the dark with Dex. 🔥
Outro or Perhaps What’s Next?
This wasn’t just another “Hello World on Kubernetes”.
I built, broke, fixed, tuned, automated, scaled, then wrapped it all in GitOps so it runs itself.
And I made mistakes on purpose (well… some😅) so I could really understand what’s happening under the hood 🌚.
Next up?
- Templating my manifests with Helm.
- Adding Ingress with cert-manager and SSL.
- Maybe wiring up Prometheus & Grafana to watch my app’s real CPU + RAM usage.
- And of course, more Terraform modules to make this repeatable for any project.
Why Does This Matter?
I’m not here to memorize commands; I understand why things break.
I’m not scared of IAM, K8s RBAC, or AWS networking.
I automate the boring stuff so I can focus on shipping value.
Let’s Connect 💬
I’m open to DevOps, Platform Engineering, SRE or Cloud Native roles where:
✅ Cloud + K8s + Terraform are the daily bread
✅ GitOps, automation & CI/CD aren’t just buzzwords
✅ And people actually share what they learn
If this sounds like your kind of crew, let's talk.
Or just drop a comment to geek out about EKS, ArgoCD, or your weirdest CrashLoopBackOff story. I love hearing them all.
📌 Check the full project repo: my-cloud-monitoring-app
🔗 Connect with me on Linkedin