In modern cloud-native environments, deploying applications manually is no longer scalable or reliable. DevOps practices and container orchestration tools help us automate, standardize, and scale applications efficiently.
In this article, I’ll walk you through an end-to-end deployment of a Two-Tier Application using:
GitHub, Docker & DockerHub, Kubernetes, Helm & AWS.
This architecture represents a real-world DevOps workflow, commonly used in production systems.
Before diving deep into the project implementation or practical demonstration, it’s important to first understand the term "two-tier application".
A Two-Tier Application Consists of:
Application Tier (Backend):
Flask-based backend service
Handles business logic
Processes API requests
Communicates with the database
This layer is responsible for how the application behaves and responds to user actions.
Database Tier:
MySQL database
Stores and manages application data
Handles queries from the backend
This tier ensures data persistence and consistency.
Deployment Approach
To keep things simple and structured, initially the deployment is done in two stages:
Dockerfile (Single Container Focus)
First, we containerize the Flask backend using a Dockerfile.
This helps us understand:
How Docker images are built
How application dependencies are managed
How a single service runs inside a container
Docker Compose (Multi-Container Setup)
Next, we use Docker Compose to run both tiers together:
Flask Backend container
MySQL Database container
Docker Compose allows both services to:
Run on the same network
Communicate using service names
Start and stop with a single command
This approach represents a real-world development setup and forms the foundation for moving toward Kubernetes in later stages.
First of all, launch an EC2 instance named “2-tier-App-DEPLOYMENT” with a private key and Ubuntu OS, keeping all other settings at their defaults.
Connect to it using the EC2 Instance Connect option; the following screen will appear.
Installing Docker on Ubuntu
To install Docker on Ubuntu, you can use the following commands in your terminal:
sudo apt update
sudo apt install docker.io
To check running containers, we run docker ps. Initially, this fails with the error "Permission denied while trying to connect to the Docker daemon socket."
This happens because your user does not have permission to access the Docker daemon. Let’s see how to resolve it.
Error troubleshooting in just 2 seconds:
Check the current user with whoami => it returns ubuntu
sudo chown $USER /var/run/docker.sock
Now run the docker ps command again; it will run successfully.
Now the next step is to clone the code from GitHub:
git clone https://github.com/SAFI-ULLAHSAFEER/two-tier-flask-app.git
cd two-tier-flask-app
Now remove the existing Dockerfile with rm Dockerfile and create your own Dockerfile from scratch.
Understanding the Dockerfile:
The first line in a Dockerfile usually starts with FROM. This specifies the base image for your container — essentially, the operating system with pre-installed software. For example:
FROM python:3.9-slim
Here 3.9 is the Python version, and slim means the image is a lightweight variant.
WORKDIR: sets the working directory inside the container where the application runs.
WORKDIR /app
RUN apt-get update -y \ => update the container's package index
&& apt-get upgrade -y \ => upgrade installed packages
&& apt-get install -y gcc default-libmysqlclient-dev pkg-config \ => install the client libraries needed to build MySQL support
&& rm -rf /var/lib/apt/lists/* => remove the cached package lists to keep the image small
COPY requirements.txt . => copy the file that lists all the Python packages the application needs
Install Python packages
RUN pip install mysqlclient
RUN pip install -r requirements.txt
Copy application code into the container
COPY . .
Explanation:
The first dot (.) is the source — your local folder containing the code
The second dot (.) is the destination inside the container (/app)
Define the command to run the application
CMD ["python", "app.py"]
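Putting all of the pieces above together, the complete Dockerfile looks roughly like this (a sketch assembled from the snippets explained above; app.py and requirements.txt come from the cloned repository):

```dockerfile
# Lightweight Python base image (3.9, slim variant)
FROM python:3.9-slim

# All subsequent instructions run inside /app
WORKDIR /app

# Install gcc and the MySQL client libraries needed to build mysqlclient,
# then remove the cached package lists to keep the image small
RUN apt-get update -y \
    && apt-get upgrade -y \
    && apt-get install -y gcc default-libmysqlclient-dev pkg-config \
    && rm -rf /var/lib/apt/lists/*

# Copy the dependency list first so this layer is cached between builds
COPY requirements.txt .

# Install the Python packages
RUN pip install mysqlclient
RUN pip install -r requirements.txt

# Copy the application code into /app
COPY . .

# Start the Flask application
CMD ["python", "app.py"]
```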
The Dockerfile is finally complete. Now press Esc, then type :wq to save the file in the Vim editor.
Now, to build an image from the Dockerfile:
docker build -t flaskapp .
Here . is the build context (the current path), -t is for tagging, and flaskapp is the name of the image.
After that, we have to run the MySQL container and then the flaskapp container.
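The MySQL container can be started first with something like the following (a sketch; the root password and database name are placeholders, not values from the original setup). The flaskapp run command is shown in the next step.

```bash
# Run the MySQL database container (placeholder credentials)
docker run -d --name mysql \
  -e MYSQL_ROOT_PASSWORD=admin \
  -e MYSQL_DATABASE=mydb \
  mysql:5.7
```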
To see the Docker images:
docker images
To run a container from the image:
docker run -d -p 5000:5000 flaskapp:latest
-d runs the container in the background (detached mode), and -p maps port 5000 on the host to port 5000 in the container.
Now you have to access your application on port 5000, so configure the security group to allow inbound traffic on that port.
After that, copy the public IP of your instance and open it in the browser, appending port 5000 at the end.
Docker Networking:
docker network create twotier
Now run the flaskapp image as a container exposed on port 5000, attached to the twotier network and with all the required environment variables (the commands are sketched after the key point below).
Key Point (Memorable)
“In a multi-container setup, Flask is application-level dependent on MySQL: the Flask app requires a live database connection to function properly at startup. Therefore, always start the MySQL container first, then the Flask container — otherwise the Flask app will crash even though it runs in a standalone container.”
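Roughly, the two commands look like this, started in the order described above (a sketch; the container names, credentials, and environment variable names are assumptions, so match them to what your app.py actually expects):

```bash
# 1. Start MySQL first on the twotier network (placeholder credentials)
docker run -d --name mysql --network twotier \
  -e MYSQL_ROOT_PASSWORD=admin \
  -e MYSQL_DATABASE=mydb \
  -e MYSQL_USER=admin \
  -e MYSQL_PASSWORD=admin \
  mysql:5.7

# 2. Then start the Flask backend on the same network,
#    reaching MySQL by its container name
docker run -d --name flaskapp --network twotier \
  -p 5000:5000 \
  -e MYSQL_HOST=mysql \
  -e MYSQL_USER=admin \
  -e MYSQL_PASSWORD=admin \
  -e MYSQL_DB=mydb \
  flaskapp:latest
```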
Now, to check the containers attached to a Docker network:
docker network inspect twotier
Finally, your application is successfully deployed as a two-tier setup, with Flask running on the backend and MySQL managing the database.
Accessing a Docker Container:
docker exec -it <container-id> bash
mysql -u admin -p
show databases;
Messages entered by me: "AWS Cloud Club MUST" and "AWS Student Community Day Mirpur 2025".
Here is the view from inside the MySQL container:
Pushing Docker Image to Docker Hub
docker login
After that, tag the flaskapp image for Docker Hub:
docker tag flaskapp:latest safi221/flaskapp
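Then push the tagged image to Docker Hub (the repository name is the one used in the tag command above):

```bash
docker push safi221/flaskapp
```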
Finally, our image is successfully pushed to Docker Hub. This means it is publicly available, and anyone can pull and run the application from anywhere.
Docker Compose
Next, you might wonder: “How can I run both the backend and database containers simultaneously with a single command?”
This is where Docker Compose comes in — it allows you to define and run multi-container applications with ease.
Installing Docker Compose
First, install Docker Compose on your system.
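One common way on Ubuntu is to install it straight from apt (an assumption; newer Docker releases ship Compose as the docker compose plugin instead):

```bash
sudo apt install docker-compose -y
```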
Once installed, you can create and edit the docker-compose.yml file:
A YAML file ("YAML Ain't Markup Language", originally "Yet Another Markup Language") uses a syntax based on key-value pairs.
Understanding the Docker Compose File (docker-compose.yml)
Docker Compose allows you to define and run multi-container applications. Let’s break down a typical docker-compose.yml for our two-tier Flask + MySQL app.
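Here is a sketch of the file (the service names, credentials, and environment variable names such as MYSQL_HOST are illustrative assumptions; match them to what your app.py reads):

```yaml
version: "3.8"

services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: admin
      MYSQL_DATABASE: mydb
      MYSQL_USER: admin
      MYSQL_PASSWORD: admin
    volumes:
      - mysql-data:/var/lib/mysql                              # persist database files
      - ./message.sql:/docker-entrypoint-initdb.d/message.sql  # seed data on first startup

  flaskapp:
    image: flaskapp:latest
    ports:
      - "5000:5000"
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: admin
      MYSQL_PASSWORD: admin
      MYSQL_DB: mydb
    depends_on:
      - mysql        # start MySQL before the backend

volumes:
  mysql-data:
```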
Why depends_on is important
The depends_on option ensures that the MySQL container starts before the Flask backend. Without this, Docker might start the backend first, causing connection errors because the database is not yet available. Using depends_on manages the startup order automatically.
Explanation:
volumes → Persist data even if the container stops or is removed. Here, mysql-data binds the container’s MySQL data to system storage.
./message.sql:/docker-entrypoint-initdb.d/message.sql → Initializes the database with tables or seed data on container startup. Docker automatically executes scripts in the /docker-entrypoint-initdb.d/ directory.
depends_on → Ensures the database container is ready before starting the backend.
💡 Tip: You can use an online YAML formatter to ensure proper indentation and readability.
Running the Application
Save the file (:wq in Vim).
Stop any previous containers: kill them with docker kill <container-id>.
Start the application with Docker Compose:
docker-compose up -d
Final Deployment Through Docker-Compose:
After this, your two-tier Flask + MySQL application is fully deployed using Docker Compose.
End-to-End Two-Tier Application Deployment on Kubernetes (Flask + MySQL)
Deploying a two-tier application on Kubernetes requires understanding not only containers but also core Kubernetes components: Pods, Deployments, Services, and persistent storage.
In this article, we will deploy a two-tier Flask + MySQL application on a Kubernetes cluster created using kubeadm.
Before diving deep into the project, it’s important to first understand the Kubernetes architecture and how it works, as shown in the diagram below.
Kubernetes architecture defines how the control plane (Master) and worker nodes communicate to deploy, manage, scale, and heal applications automatically.
It separates cluster management (API Server, Scheduler, Controller, etcd) from application execution (Pods, Kubelet, Services), which ensures high availability, scalability, and fault tolerance.
Without understanding this architecture, it’s difficult to design reliable, production-ready Kubernetes deployments.
Kubernetes Architecture (Master & Node)
Before moving into the project implementation, it is important to understand the Kubernetes architecture, as shown in the image above. Kubernetes follows a master–worker (node) architecture, where the responsibilities are clearly divided to manage and run applications efficiently.
In Kubernetes, we mainly have two types of servers:
Master Server (Control Plane)
Node Server (Worker Node)
To understand this better, we can compare Kubernetes to a software company structure.
Master Server (Decision-Making Team)
The Master server acts like the administration or decision-making team in a software company. It does not run application containers directly; instead, it manages and controls the entire cluster.
Key components of the Master server are:
API Server
The API Server acts like a Team Lead. It is the central communication point of Kubernetes. All requests from users, kubectl, or internal components go through the API Server. It communicates with the Scheduler and Node components to decide where and how applications should run.
Scheduler
The Scheduler works like an HR team in an organization. Its job is to decide which node should run which Pod or container, based on available resources and constraints.
etcd
etcd is the database of Kubernetes. It stores all cluster data such as Pod states, node information, configurations, and secrets. Just like a company maintains records of employees and projects, Kubernetes stores all cluster information in etcd.
Controller Manager
The Controller Manager acts like a Project Manager. It continuously monitors the cluster and ensures that the desired state matches the actual state. If a Pod crashes, the Controller Manager makes sure a new one is created automatically.
Node Server (Worker / Execution Team)
The Node server, also called the Worker node, is like the Research and Development (R&D) team in a software company. This is where the actual application runs.
Key components of the Node server are:
Kubelet
Kubelet works like a reporting employee. It runs on every node and continuously reports the status of Pods and containers back to the API Server. It ensures that containers are running as instructed by the Master.
Service Proxy
Service Proxy acts as a network connector. It allows communication between Pods and enables access to the application from the outside world by routing traffic to the correct Pod.
Pods & Containers
Pods contain one or more containers where the actual application code runs.
kubectl & Networking
Kubectl acts like the CEO of the organization. It gives commands to the API Server to deploy, scale, or manage applications in the cluster.
CNI (Container Network Interface)
CNI (such as Calico or Weave Net) acts like the internal communication system of the company, enabling seamless networking between Pods across different nodes.
Understanding this architecture helps us design scalable, fault-tolerant, and production-ready Kubernetes applications.
After that, open the AWS console and create two instances:
one named k8s-master (the Master server) and the other k8s-node (the Worker node).
Add Rules to the Security Group:
Allow SSH Traffic (Port 22):
Type: SSH
Port Range: 22
Source: 0.0.0.0/0 (Anywhere) or your specific IP
Allow Kubernetes API Traffic (Port 6443):
Type: Custom TCP
Port Range: 6443
Source: 0.0.0.0/0 (Anywhere) or specific IP ranges
Save the Rules:
Click on Create Security Group to save the settings.
After that, connect via SSH to both the Master and Node servers:
Install Kubernetes Prerequisites (Run on Both Master & Node)
Before initializing the Kubernetes cluster, we need to install the required Kubernetes components on both the Master and Worker (Node) servers.
Step 1: Update System & Install Required Packages
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
Purpose:
This command prepares the system to securely download Kubernetes packages from external HTTPS repositories. It:
Enables HTTPS-based package downloads
Verifies SSL certificates to prevent compromised sources
Uses curl to fetch remote data
Allows GPG verification to ensure package authenticity
Step 2: Add Kubernetes GPG Signing Key
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | \
sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Purpose:
This command downloads the official Kubernetes GPG key and stores it locally.
It ensures that only trusted and signed Kubernetes packages can be installed on the system.
Step 3: Add Kubernetes APT Repository:
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] \
https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | \
sudo tee /etc/apt/sources.list.d/kubernetes.list
Purpose:
This adds the official Kubernetes repository to the system’s APT sources and links it with the GPG key, ensuring secure and verified package installation.
Step 4: Update Again
sudo apt-get update
This updates the system package index so APT becomes aware of the newly added Kubernetes repository and can download Kubernetes packages.
Step 5: Install Kubernetes Components
sudo apt-get install -y kubelet kubeadm kubectl
Purpose:
This installs the core Kubernetes components:
kubelet – Runs on every node and manages Pods
kubeadm – Used to bootstrap and manage the Kubernetes cluster
kubectl – Command-line tool to interact with the Kubernetes cluster
Together, these components enable cluster initialization, node communication, and workload management.
After executing all the commands, the Master server will look like this:
Then configure the Node server in the same way:
Now verify on the Master by running kubectl get nodes.
Then check on the Node server; it will show that "This node has joined the cluster".
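For reference, the cluster initialization and join steps carried out in those screenshots look roughly like this (a sketch; the token and hash come from your own kubeadm init output, and a CNI plugin still has to be installed on the Master):

```bash
# On the Master: initialize the control plane
sudo kubeadm init

# On the Master: configure kubectl for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On the Master: install a CNI plugin such as Calico or Weave Net
# (apply the manifest provided by the plugin you choose)

# On the Node: join the cluster using the exact command printed by kubeadm init
sudo kubeadm join <MASTER_PRIVATE_IP>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```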
You can check out the complete documentation on how to install and run a **Kubernetes cluster using kubeadm** here:
Concept of Pod in Kubernetes:
Pod
A Pod is where your Docker containers run.
It is the smallest unit of Kubernetes where your application is actually running.
Kubernetes uses the Container Runtime Interface (CRI) in the background, and internally it calls containerd to run containers.
A Pod acts like a house for containers. Inside a Pod, you can define:
Environment variables
Resource limits
Application configuration
All required resources are enclosed inside a Pod.
Containers cannot scale alone. In Kubernetes, we scale Pods, not individual containers.
Multiple containers run inside Pods, and creating multiple Pods is called scaling.
Clone the Application Repository:
git clone https://github.com/SAFI-ULLAHSAFEER/two-tier-flask-app.git
After cloning, move into the Kubernetes directory where all manifest files are present:
Now configure the two-tier-app-pod.yml file; a sketch of its contents is shown below:
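The file contents were shown in the editor; a minimal sketch of the Pod manifest might look like this (the image is the one pushed to Docker Hub earlier, while the environment variable names and values are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-tier-app-pod
spec:
  containers:
    - name: two-tier-app
      image: safi221/flaskapp:latest
      env:
        - name: MYSQL_HOST
          value: "mysql"
        - name: MYSQL_USER
          value: "admin"
        - name: MYSQL_PASSWORD
          value: "admin"
        - name: MYSQL_DB
          value: "mydb"
      ports:
        - containerPort: 5000
```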
Press Esc, type :wq, and press Enter to save the file.
After that, run kubectl apply -f two-tier-app-pod.yml
The pod is successfully configured now.
After that moving towards Deployment.
Deployment
A Deployment is used to manage and configure Pods.
In a Deployment:
We define a Pod template
Kubernetes creates multiple replicas of that Pod based on our requirement
In production, Deployments provide:
Auto-scaling
Auto-healing
High availability
If any Pod or container crashes, Kubernetes automatically creates a new one, ensuring the application runs smoothly.
Configure the Deployment file two-tier-app-deployment.yml; a sketch is shown below:
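A sketch of the Deployment manifest, reusing the same image and environment variables as the Pod above (the replica count of 3 is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: two-tier-app-deployment
  labels:
    app: two-tier-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: two-tier-app
  template:
    metadata:
      labels:
        app: two-tier-app
    spec:
      containers:
        - name: two-tier-app
          image: safi221/flaskapp:latest
          env:
            - name: MYSQL_HOST
              value: "mysql"
            - name: MYSQL_USER
              value: "admin"
            - name: MYSQL_PASSWORD
              value: "admin"
            - name: MYSQL_DB
              value: "mydb"
          ports:
            - containerPort: 5000
```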
After that, press :wq and Enter to save the file.
Then run kubectl apply -f two-tier-app-deployment.yml
and check the Pods by running kubectl get pods.
Service
To give access to a Deployment from the outside world, we need a Service.
A Service provides a single, stable entry point to access the application.
When a user wants to access the application from outside, they cannot directly access Pods, because Pods are dynamic and each Pod has its own IP address.
Instead, the user first accesses the Service, and the Service then forwards the request to the Deployment.
Since a Deployment can have multiple Pods and multiple container IPs, the Service acts as a single logical node and load-balances traffic across all Pods.
These three components Pod, Deployment, and Service are very important to run a single-tier application in Kubernetes.
In a multi-tier application, such as when we also have a database layer, additional components like Persistent Volumes and Persistent Volume Claims are required.
Now configure two-tier-app-svc.yml; a sketch is shown below:
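A sketch of the NodePort Service (port 80 is an assumed service port, targetPort 5000 matches the Flask container, and nodePort 30004 is the port used when accessing the app later in this article):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: two-tier-app-service
spec:
  type: NodePort
  selector:
    app: two-tier-app          # must match the Deployment's Pod labels
  ports:
    - port: 80
      targetPort: 5000
      nodePort: 30004
```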
After that, check it with kubectl get service.
After that, create a directory for the persistent volume at /home/ubuntu/two-tier-flask-app/mysqldata.
Then, inside the two-tier-flask-app/k8s directory, create mysql-pv.yml (vim mysql-pv.yml)
and then mysql-pvc.yml (vim mysql-pvc.yml). Sketches of both files are shown after the explanation below.
Persistent Volume vs Persistent Volume Claim
Persistent Volume (PV) is used to create or allocate storage at the cluster level.
Persistent Volume Claim (PVC) is used to request storage by specifying how much storage an application needs.
In simple terms:
PV provides the actual storage
PVC asks for the required amount of storage
Kubernetes automatically matches a PVC with an appropriate PV and attaches it to the pod.
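For reference, minimal sketches of mysql-pv.yml and mysql-pvc.yml might look like this (the 1Gi size is an assumption; the hostPath is the directory created above):

```yaml
# mysql-pv.yml: a hostPath PersistentVolume backed by the directory created earlier
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /home/ubuntu/two-tier-flask-app/mysqldata
---
# mysql-pvc.yml: the claim that the MySQL Deployment will mount
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```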
Now apply them by running kubectl apply -f mysql-pv.yml
and kubectl apply -f mysql-pvc.yml
Now create mysql-deployment.yml (vim mysql-deployment.yml); a sketch is shown below:
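A sketch of mysql-deployment.yml (the credentials and database name are placeholders; the claim name matches the PVC created above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "admin"
            - name: MYSQL_DATABASE
              value: "mydb"
            - name: MYSQL_USER
              value: "admin"
            - name: MYSQL_PASSWORD
              value: "admin"
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-storage
              mountPath: /var/lib/mysql      # persisted via the PVC
      volumes:
        - name: mysql-storage
          persistentVolumeClaim:
            claimName: mysql-pvc
```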
Then press :wq and Enter to save it, and apply it by running kubectl apply -f mysql-deployment.yml.
After that, in the two-tier-flask-app/k8s directory, create mysql-svc.yml (vim mysql-svc.yml); a sketch follows:
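A sketch of mysql-svc.yml (the Service name should match whatever hostname the Flask manifests use as MYSQL_HOST, here assumed to be mysql):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql          # must match the MySQL Deployment's Pod labels
  ports:
    - port: 3306
      targetPort: 3306
```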
After that, verify all the nodes, Pods, and Services.
Now you can access your Flask app from browser:
Use any node’s IP (control-plane or worker). In my case:
Master node: ip-172-31-33-58
Worker node: ip-172-31-37-23
Combine it with the NodePort 30004 (I later changed the original port 30007 to 30004; you can configure it on any port in the default NodePort range, which in Kubernetes is 30000-32767):
Finally, the application is live and running successfully!
Wrapping Up
We’ve successfully taken a two-tier Flask + MySQL application from containerization with Docker and Docker Compose all the way to production-ready deployment on Kubernetes.
From building images and pushing them to Docker Hub to orchestrating services, scaling pods, and exposing the application via Kubernetes, this journey covered the complete modern DevOps workflow.