⚠️ Disclaimer: The Blog app is not my application; the MERN blog app was cloned from GitHub. My goal here was to practice DevOps with a full CI/CD workflow: code quality checks, security audits, containerization, Kubernetes deployment, and monitoring.
🧱 App Overview
- Frontend: React (served via Nginx)
- Backend: Node.js + Express
- Database: MongoDB
- CI/CD: GitHub Actions
- Container Registry: DockerHub
- Deployment: Kubernetes (AKS/EKS)
- Monitoring: Prometheus + Grafana
- Security/Code Quality: SonarQube + Trivy + ESLint
🧠 Goals
✅ Set up CI/CD pipelines
✅ Apply security scanning tools
✅ Push images to DockerHub
✅ Deploy automatically to Kubernetes
✅ Monitor workloads
✅ Do all this within free-tier constraints (Azure/AWS)
🛠️ CI Workflow – GitHub Actions
✅ What We Did
- Lint both frontend & backend with ESLint
- Code Quality Check with self-hosted SonarQube
- Vulnerability Scanning with Trivy
- Build & Push Docker Images to DockerHub
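Put together, the CI workflow can be sketched roughly as below. The job layout, branch, secret names (DOCKERHUB_USERNAME, DOCKERHUB_TOKEN), and paths are illustrative assumptions; the actual workflow file in the repo is the source of truth:

```yaml
name: CI for blog
on:
  push:
    branches: [main]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18
      # Lint frontend and backend separately
      - run: cd client && npm ci && npx eslint .
      - run: cd server && npm ci && npx eslint .

  build-and-push:
    needs: lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - run: |
          docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/blog-client ./client
          docker push ${{ secrets.DOCKERHUB_USERNAME }}/blog-client
      # Scan the pushed image for vulnerabilities with Trivy
      - uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ secrets.DOCKERHUB_USERNAME }}/blog-client
```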
⚙️ SonarQube Setup (Azure & AWS)
- Launch a Linux VM
- Allow port 9000 in inbound firewall rules
- Add your user to the docker group:
sudo usermod -aG docker $USER
- Start SonarQube container:
docker run -d --name sonarqube -p 9000:9000 sonarqube
- Access:
http://<VM_PUBLIC_IP>:9000
- Generate a token → save it in GitHub Secrets as:
SONAR_TOKEN
SONAR_HOST_URL
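In the CI workflow, the analysis step can then consume those secrets. A minimal sketch using SonarSource's official scan action (the action version is an assumption; project key and sources are usually set in a sonar-project.properties file):

```yaml
- name: SonarQube Scan
  uses: SonarSource/sonarqube-scan-action@v2
  env:
    SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
    SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
```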
⚙️ CD Workflow – GitHub Actions
We created a second GitHub Actions workflow that triggers only after CI succeeds, using workflow_run.
🧑‍💻 AKS Setup
- Log in with az login, then create the AKS cluster via the CLI or portal
- Create a Service Principal
- Store its credentials in a GitHub Secret: AZURE_CREDENTIALS
- In the GitHub Action:
az aks get-credentials --name blog-cluster --resource-group blogdeploy
kubectl apply -f deploy/base/
🧑‍💻 EKS Setup (Alternative)
- Create the EKS cluster with eksctl:
eksctl create cluster --name blog-cluster ...
- Configure AWS access: store AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in GitHub Secrets
- Use:
- uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1
- Add kubeconfig for EKS:
aws eks update-kubeconfig --name blog-cluster
- Then apply:
kubectl apply -f deploy/base/
✅ Full CD YAML Snippet
on:
  workflow_run:
    workflows: ["CI for blog"]
    types: [completed]

jobs:
  deploy:
    # workflow_run fires on any completion, so gate on success explicitly
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Set AKS Context
        run: az aks get-credentials --resource-group blogdeploy --name blog-cluster --overwrite-existing
      - name: Deploy
        run: kubectl apply -f deploy/base/
🚀 Kubernetes Setup
We used manifest-based deployment instead of Helm.
- MongoDB
- blog-server (Node)
- blog-client (React + Nginx)
- Services (ClusterIP + LoadBalancer)
For full YAMLs:
🔗 GitHub Repo: https://github.com/Harivelu0/Blog-App-using-MERN-stack
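As an illustration, the backend's Deployment and Service might look like the sketch below. The container port, image tag, and Mongo connection env var are assumptions; see the repo for the actual manifests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog-server
  template:
    metadata:
      labels:
        app: blog-server
    spec:
      containers:
        - name: blog-server
          image: <username>/blog-server:latest
          ports:
            - containerPort: 5000
          env:
            - name: MONGO_URL
              value: mongodb://mongodb:27017/blog
---
apiVersion: v1
kind: Service
metadata:
  name: blog-server
spec:
  type: ClusterIP
  selector:
    app: blog-server
  ports:
    - port: 5000
      targetPort: 5000
```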
📦 Docker & Image Strategy
Both client and server have custom Dockerfiles.
In CI, we built and pushed them:
docker build -t <username>/blog-client ./client
docker push <username>/blog-client
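A client Dockerfile for this kind of setup is typically a multi-stage build: compile the React bundle with Node, then serve the static files with Nginx. This is a sketch, not the repo's exact file:

```dockerfile
# Stage 1: build the React production bundle
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: serve the static bundle with Nginx
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
```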
🔁 GitOps Deployment with ArgoCD
Once the CI pipeline builds and pushes the Docker images and updates the manifests in the GitHub repo (deploy/base/), ArgoCD handles the automated syncing and deployment into the Kubernetes cluster.
🔹 Flow:
- ArgoCD App Definition
  - Points to your GitHub repo and the deploy/base/ folder.
  - Deployed with:
kubectl apply -f argocd-app.yaml
- ArgoCD Syncs Automatically
  - Detects changes in the GitHub repo (new images, manifest edits).
  - Applies updates to the AKS/EKS cluster without a manual kubectl apply.
- ArgoCD UI
  - Accessed via port-forward:
kubectl port-forward svc/argocd-server -n argocd 8080:443
  - Log in with:
    - Username: admin
    - Password: from the initial setup (pod logs or custom configuration)
- Health Monitoring
  - ArgoCD continuously monitors the health and sync status of the app.
  - If the app is OutOfSync or Degraded, it surfaces logs and status for debugging.
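The argocd-app.yaml referenced above can be sketched as follows; the app name, target branch, and destination namespace are assumptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: blog-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/Harivelu0/Blog-App-using-MERN-stack
    targetRevision: main
    path: deploy/base
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true     # delete resources removed from Git
      selfHeal: true  # revert manual drift in the cluster
```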
📊 Monitoring
Installed Prometheus + Grafana via Helm:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack \
--namespace monitoring --create-namespace
Port-forward locally:
kubectl port-forward svc/grafana -n monitoring 3001:80
kubectl port-forward svc/prometheus-server -n monitoring 9090:80
⚠️ No alerts were configured in this project due to time and scope – alerting will be covered in a future one.
Final Deployed App in K8s
🧪 Common Issues We Solved
| Issue | Fix |
|---|---|
| App blank on root | Removed homepage from package.json and rebuilt |
| ErrImageNeverPull | Set imagePullPolicy: Always |
| SonarQube not reachable | Opened port 9000 on the VM |
| Port limitations on Azure | Used port-forwarding instead of multiple LoadBalancers |
| Kustomization error | Installed required CRDs for Kustomize |
🧹 Cleanup
Azure
az aks delete --name blog-cluster --resource-group blogdeploy --yes
az group delete --name blogdeploy --yes
AWS
eksctl delete cluster --name blog-cluster
Also delete:
- DockerHub images
- SonarQube VM
- GitHub Secrets
✅ Final Summary
✅ CI: ESLint + SonarQube + Trivy + Docker
✅ CD: GitHub Actions to AKS (or EKS)
✅ Monitoring: Prometheus + Grafana
✅ K8s: Manifest-based setup
✅ Port-forwarding to work around public IP limits
❌ Alerting skipped for now (to be covered in a future project)