Introduction
In modern DevOps workflows, deploying applications is no longer just about writing code—it’s about automating, scaling, and managing services efficiently.
In this project, I built a 2-tier microservices application and deployed it using:
- Docker for containerization
- Kubernetes for orchestration
- Minikube for a local cluster
- Helm for production-style deployment
The goal was to simulate a real-world DevOps workflow from scratch.
Architecture Overview
This project follows a simple microservices pattern:
User → Frontend (Nginx) → Backend Service → Backend Pods
- Frontend acts as a reverse proxy
- Backend serves API responses
- Kubernetes handles scaling and communication
Tech Stack
- Docker
- Kubernetes
- Minikube
- Helm
- Python (Flask)
- Nginx
Project Structure
project/
├── backend/
├── frontend/
└── helm/
Steps to Execute the Project
Step 1: Build the Backend (Python API)
We start with a simple Flask API:
app.py:
from flask import Flask
import socket

app = Flask(__name__)

@app.route("/")
def home():
    return f"Hello from Backend! Host: {socket.gethostname()}"

@app.route("/health")
def health():
    return "OK"
requirements.txt:
flask
gunicorn
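Before containerizing anything, the response logic can be sanity-checked with plain Python (no Flask needed). This is just a sketch that mirrors the body returned by the "/" route; `backend_response` is a hypothetical helper, not part of the app itself:

```python
import socket

def backend_response() -> str:
    # Mirrors the string returned by the "/" route in app.py.
    # Inside Kubernetes, gethostname() returns the pod name, which
    # makes load balancing across replicas visible in the browser.
    return f"Hello from Backend! Host: {socket.gethostname()}"

print(backend_response())
```

Seeing the hostname change between requests later is the quickest way to confirm that traffic is actually spread across backend replicas.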
Step 2: Dockerize the Application
Backend Dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["gunicorn", "-b", "0.0.0.0:5000", "app:app"]
Build:
docker build -t backend-app ./backend
Step 3: Frontend with Nginx
Nginx forwards requests to the backend:
events {}

http {
    server {
        listen 80;

        location / {
            proxy_pass http://backend-service:5000;
        }
    }
}
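What proxy_pass does here can be sketched as a simple URL mapping: every request path under "/" is forwarded to the backend Service, whose name (`backend-service`) is resolved by Kubernetes DNS to the Service's ClusterIP. The helper below is purely illustrative, not part of the project:

```python
def upstream_url(request_path: str) -> str:
    # Illustration of nginx's proxy_pass behavior for this config:
    # the incoming path is appended to the upstream address.
    # "backend-service" resolves via Kubernetes cluster DNS.
    upstream = "http://backend-service:5000"
    return upstream + request_path

print(upstream_url("/health"))  # → http://backend-service:5000/health
```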
Frontend Dockerfile:
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
Build frontend:
docker build -t frontend-app ./frontend
Step 4: Run Kubernetes Cluster
Start Minikube:
minikube start
Use Minikube Docker:
Since we're using Minikube, we must build images inside Minikube’s Docker environment.
& minikube -p minikube docker-env --shell powershell | Invoke-Expression
Then rebuild both images (using the docker build commands from Steps 2 and 3) so they land in Minikube's image cache.
Step 5: Create Helm Chart Structure
Instead of applying raw Kubernetes YAML files, I used Helm to manage deployments in a modular and reusable way.
Chart.yaml:
Defines metadata about the Helm chart:
apiVersion: v2
name: myapp
description: Helm chart for frontend + backend microservices
type: application
version: 0.1.0
appVersion: "1.0"
values.yaml:
Central place to configure application values:
backend:
  image: backend-app
  replicas: 2
  port: 5000

frontend:
  image: frontend-app
  replicas: 1
  port: 80
  nodePort: 30007
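To make the templating idea concrete, here is a rough sketch of what Helm does with values.yaml: each `{{ .Values.<path> }}` placeholder is replaced by the matching entry from the values tree. Helm itself uses Go templates with far more features; this regex-based version only mimics the simple lookups used in this chart:

```python
import re

# The same values as values.yaml, as a plain Python dict.
values = {
    "backend": {"image": "backend-app", "replicas": 2, "port": 5000},
    "frontend": {"image": "frontend-app", "replicas": 1, "port": 80,
                 "nodePort": 30007},
}

def render(template: str) -> str:
    # Replace each {{ .Values.a.b }} placeholder by walking the dict.
    def lookup(match: re.Match) -> str:
        node = values
        for key in match.group(1).split("."):
            node = node[key]
        return str(node)
    return re.sub(r"\{\{\s*\.Values\.([\w.]+)\s*\}\}", lookup, template)

print(render("replicas: {{ .Values.backend.replicas }}"))
# → replicas: 2
```

This is why changing a value in values.yaml and running helm upgrade is enough to roll out a new configuration: the manifests are regenerated from the templates on every release.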
templates:
Helm uses templates to dynamically generate Kubernetes manifests.
🔹 Backend Deployment (templates/backend-deployment.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: {{ .Values.backend.replicas }}
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: {{ .Values.backend.image }}
          imagePullPolicy: Never
          ports:
            - containerPort: {{ .Values.backend.port }}
🔹 Backend Service (templates/backend-service.yaml)
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend
  ports:
    - port: {{ .Values.backend.port }}
      targetPort: {{ .Values.backend.port }}
🔹 Frontend Deployment (templates/frontend-deployment.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: {{ .Values.frontend.replicas }}
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: {{ .Values.frontend.image }}
          imagePullPolicy: Never
          ports:
            - containerPort: {{ .Values.frontend.port }}
🔹 Frontend Service (templates/frontend-service.yaml)
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - port: {{ .Values.frontend.port }}
      targetPort: {{ .Values.frontend.port }}
      nodePort: {{ .Values.frontend.nodePort }}
Step 6: Deploy Application Using Helm
Once the application components were ready, I used Helm to package and deploy everything.
1️⃣ Create Helm Chart
helm create myapp
Helm generates a default chart structure with sample templates.
2️⃣ Clean Default Templates
Since my application has a custom structure, I removed unnecessary default files and kept only:
backend-deployment.yaml
backend-service.yaml
frontend-deployment.yaml
frontend-service.yaml
This helped keep the chart minimal and aligned with my architecture.
3️⃣ Deploy Application
helm install myapp ./myapp
This command:
- Packages the chart
- Generates Kubernetes manifests
- Deploys all resources in one step
4️⃣ Upgrade on Changes
Whenever I updated configurations or images:
helm upgrade myapp ./myapp
5️⃣ Verify Deployment
Check that all pods are created and running, then confirm the services are up:
kubectl get pods
kubectl get svc
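This verification step can also be scripted. The sketch below only parses a sample of kubectl's output so the logic is visible on its own; a real check would capture the output via subprocess, and the pod names shown are hypothetical:

```python
# Sample output in the shape produced by "kubectl get pods".
# A real script would obtain this via subprocess.run(["kubectl", ...]).
sample = """\
NAME                        READY   STATUS    RESTARTS   AGE
backend-6d4f9c7b8-abcde     1/1     Running   0          2m
backend-6d4f9c7b8-fghij     1/1     Running   0          2m
frontend-5b6d8f9c4-klmno    1/1     Running   0          2m
"""

def all_running(kubectl_output: str) -> bool:
    # Skip the header row, then check the STATUS column of each pod.
    rows = kubectl_output.strip().splitlines()[1:]
    return all(row.split()[2] == "Running" for row in rows)

print(all_running(sample))  # → True
```

The same check immediately flags states like ImagePullBackOff, which is exactly the failure mode covered in the troubleshooting section at the end.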
6️⃣ Access Application
Open the frontend service in the default browser:
minikube service frontend-service
The browser displays 'Hello from Backend! Host: <pod-name>', confirming that the frontend is successfully proxying requests to a backend pod.
Conclusion
This project walked through the full lifecycle of deploying a 2-tier microservices application — from writing a simple Flask API to packaging and managing everything with Helm on a local Kubernetes cluster.
Here's a quick recap of what was covered:
- Containerization — Dockerized both the backend (Python/Flask) and frontend (Nginx) services independently, keeping them loosely coupled.
- Orchestration — Used Kubernetes to manage pod scheduling, scaling, and inter-service communication.
- Local cluster — Ran everything locally using Minikube, simulating a real cluster environment without needing cloud infrastructure.
- Helm deployment — Replaced raw YAML manifests with a structured Helm chart, making the deployment configurable, repeatable, and upgrade-friendly.
The architecture here is intentionally minimal, but it maps directly to patterns used in production systems. Swapping Minikube for a managed cluster (EKS, GKE, AKS) and a local image build for a proper CI/CD pipeline (GitHub Actions, ArgoCD) would take this setup from local dev to production-ready.
The goal of this project was to connect the dots between containers, orchestration, and deployment tooling — and to show that a well-structured local setup is the best foundation for anything you build at scale.
📌 Source Code
👉 GitHub Repo: k8s_helm_deployment
Common Errors & Fixes:
While working on this project, I encountered some real-world issues that are very common when working with Kubernetes and Helm.
Issue: ImagePullBackOff / ErrImagePull
🔍 Error
kubectl get pods
ImagePullBackOff
ErrImagePull
🧠 Root Cause
Kubernetes was trying to pull images like:
backend-app
frontend-app
But these images did not exist in any container registry. By default, Kubernetes attempts to pull images from external sources like Docker Hub.
✅ Fix
Since I was using Minikube, I needed to build images inside Minikube’s Docker environment.
🔧 Steps to Fix
1️⃣ Point Docker to Minikube
& minikube -p minikube docker-env --shell powershell | Invoke-Expression
2️⃣ Build Images Inside Minikube
docker build -t backend-app ./backend
docker build -t frontend-app ./frontend
3️⃣ Prevent External Pulls
Added in deployment:
imagePullPolicy: Never
4️⃣ Redeploy
helm upgrade myapp ./myapp
✅ Final Result
kubectl get pods
All pods:
Running ✅

