SAFI-ULLAH SAFEER

End-to-End Deployment of a Two-Tier Application Using Docker, Kubernetes, Helm, and AWS

In modern cloud-native environments, deploying applications manually is no longer scalable or reliable. DevOps practices and container orchestration tools help us automate, standardize, and scale applications efficiently.

In this article, I’ll walk you through an end-to-end deployment of a Two-Tier Application using:

GitHub, Docker & DockerHub, Kubernetes, Helm & AWS.

This architecture represents a real-world DevOps workflow, commonly used in production systems.

Before diving deep into the project implementation or practical demonstration, it's important to first understand the term "two-tier application".

A Two-Tier Application Consists of:

Application Tier (Backend):

Flask-based backend service

Handles business logic

Processes API requests

Communicates with the database

This layer is responsible for how the application behaves and responds to user actions.

Database Tier:

MySQL database

Stores and manages application data

Handles queries from the backend

This tier ensures data persistence and consistency.

Deployment Approach

To keep things simple and structured, initially the deployment is done in two stages:

Dockerfile (Single Container Focus)

First, we containerize the Flask backend using a Dockerfile.
This helps us understand:

How Docker images are built

How application dependencies are managed

How a single service runs inside a container

Docker Compose (Multi-Container Setup)

Next, we use Docker Compose to run both tiers together:

Flask Backend container

MySQL Database container

Docker Compose allows both services to:

Run on the same network

Communicate using service names

Start and stop with a single command

This approach represents a real-world development setup and forms the foundation for moving toward Kubernetes in later stages.

First of all, launch an EC2 instance named "2-tier-App-DEPLOYMNET" with a private key and Ubuntu OS, keeping all other settings at their defaults.
Then connect to it using the EC2 Instance Connect option.

Installing Docker on Ubuntu

To install Docker on Ubuntu, you can use the following commands in your terminal:

sudo apt update
sudo apt install docker.io

To check running containers, we run docker ps. Initially we get the error "Permission denied while trying to connect to the Docker daemon socket."
This happens because your user does not have permission to access the Docker daemon. Let's see how to resolve it.

Troubleshooting the error takes just a few seconds:

Check the current user with whoami => it returns ubuntu
sudo chown $USER /var/run/docker.sock
Now run docker ps again; it will succeed.
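Note that chown on the socket is a quick fix that may not survive a Docker restart. The more durable, standard approach is to add your user to the docker group:

# add the current user to the docker group
sudo usermod -aG docker $USER
# apply the new group membership without logging out
newgrp docker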

Now the next step is to clone the code from GitHub:

git clone https://github.com/SAFI-ULLAHSAFEER/two-tier-flask-app.git
cd two-tier-flask-app

Now remove the existing Dockerfile with rm Dockerfile and create your own from scratch.

Understanding the Dockerfile:

The first line in a Dockerfile usually starts with FROM. This specifies the base image for your container — essentially, the operating system with pre-installed software. For example:

FROM python:3.9-slim
where 3.9 is the Python version and slim indicates a lightweight variant of the image.

WORKDIR sets the working directory inside the container where the application runs:

WORKDIR /app

Next, update the container, upgrade its packages, install the compiler and MySQL client headers, and remove the cached package lists to keep the image small:

RUN apt-get update -y \
    && apt-get upgrade -y \
    && apt-get install -y gcc default-libmysqlclient-dev pkg-config \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .

requirements.txt is the file in which all the packages your app needs are listed.

Install Python packages

RUN pip install mysqlclient
RUN pip install -r requirements.txt

Copy application code into the container
COPY . .

Explanation:
The first dot (.) is the source — your local folder containing the code
The second dot (.) is the destination inside the container (/app)

Define the command to run the application
CMD ["python", "app.py"]

The Dockerfile is now complete. Press Esc, then type :wq to save the file in the Vim editor.
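Putting the pieces together, the complete Dockerfile assembled from the steps above looks like this (it assumes the repo's app.py and requirements.txt, as used throughout this project):

# lightweight Python base image
FROM python:3.9-slim

# all subsequent commands run from /app
WORKDIR /app

# system dependencies for the MySQL client library
RUN apt-get update -y \
    && apt-get upgrade -y \
    && apt-get install -y gcc default-libmysqlclient-dev pkg-config \
    && rm -rf /var/lib/apt/lists/*

# install Python dependencies first to leverage layer caching
COPY requirements.txt .
RUN pip install mysqlclient
RUN pip install -r requirements.txt

# copy the application code and define the startup command
COPY . .
CMD ["python", "app.py"]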

Now, to build an image from the Dockerfile:

docker build . -t flaskapp

where . is the build context (the current path), -t sets the tag, and flaskapp is the name of the image.

After that we have to run the MySQL container, and then the Flask app container.

To list your Docker images:

docker images

To run a container from the image:

docker run -d -p 5000:5000 flaskapp:latest

Here -d runs the container in the background (detached mode), and -p 5000:5000 publishes container port 5000 on host port 5000.

Now, to access your application on port 5000, configure the instance's security group to allow inbound traffic on that port.

After that, copy the public IP of your instance and open it in a browser, appending port 5000 at the end (http://<public-ip>:5000).
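You can also sanity-check the app from the instance itself before opening the browser (a generic check, nothing project-specific):

# should return the app's HTML if the container is serving requests
curl http://localhost:5000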

Docker Networking:

docker network create twotier

Then run the flaskapp container so that it listens on port 5000, joins the twotier network, and receives all the required environment variables, as sketched below.
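The exact run commands aren't reproduced here, so the following is a sketch of what they typically look like for this app; the credentials, database name, and environment variable names (MYSQL_HOST, MYSQL_USER, MYSQL_PASSWORD, MYSQL_DB) are assumptions you should match to your own code:

# database tier: start MySQL first, attached to the shared network
docker run -d --name mysql --network twotier \
  -e MYSQL_ROOT_PASSWORD=admin \
  -e MYSQL_DATABASE=myDb \
  -e MYSQL_USER=admin \
  -e MYSQL_PASSWORD=admin \
  mysql:5.7

# application tier: Flask reaches MySQL by its container name on the network
docker run -d --name flaskapp --network twotier -p 5000:5000 \
  -e MYSQL_HOST=mysql \
  -e MYSQL_USER=admin \
  -e MYSQL_PASSWORD=admin \
  -e MYSQL_DB=myDb \
  flaskapp:latest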

Key Point (Memorable)
“In a multi-container setup, Flask is application-level dependent on MySQL: the Flask app requires a live database connection to function properly at startup. Therefore, always start the MySQL container first, then the Flask container — otherwise the Flask app will crash even though it runs in a standalone container.”

Now, to check the containers attached to a Docker network:

docker network inspect twotier

Finally, your application is successfully deployed as a two-tier setup, with Flask running on the backend and MySQL managing the database.

Accessing a Docker Container:

docker exec -it <container-id> bash
mysql -u admin -p
show databases;
Inside the MySQL container, the messages I entered through the app ("AWS Cloud Club MUST" and "AWS Student Community Day Mirpur 2025") are visible in the database.

Pushing Docker Image to Docker Hub

docker login

After that, tag the flaskapp image for Docker Hub and push it:

docker tag flaskapp:latest safi221/flaskapp
docker push safi221/flaskapp

Finally, our image is successfully pushed to Docker Hub. This means it is publicly available, and anyone can pull and run the application from anywhere.

Docker Compose

Next, you might wonder: “How can I run both the backend and database containers simultaneously with a single command?”

This is where Docker Compose comes in — it allows you to define and run multi-container applications with ease.

Installing Docker Compose

First, install Docker Compose on your system (see the command below). Once installed, you can create and edit the docker-compose.yml file.
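On Ubuntu, the install is typically a one-liner (assuming the apt package; newer Docker installations instead ship Compose as the docker compose plugin):

sudo apt install docker-compose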

A YAML file (YAML originally stood for "Yet Another Markup Language"; it now stands for "YAML Ain't Markup Language") uses a key-value pair syntax.

Understanding the Docker Compose File (docker-compose.yml)

Docker Compose allows you to define and run multi-container applications. Let’s break down a typical docker-compose.yml for our two-tier Flask + MySQL app.
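The file itself isn't reproduced here, so the following is a sketch of what a typical docker-compose.yml for this project looks like; the image tags, credentials, and environment variable names are assumptions to adapt to your own app:

version: "3.8"

services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: admin
      MYSQL_DATABASE: myDb
      MYSQL_USER: admin
      MYSQL_PASSWORD: admin
    volumes:
      # named volume for persistent data, init script for seeding
      - mysql-data:/var/lib/mysql
      - ./message.sql:/docker-entrypoint-initdb.d/message.sql

  flask-app:
    image: flaskapp:latest
    ports:
      - "5000:5000"
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: admin
      MYSQL_PASSWORD: admin
      MYSQL_DB: myDb
    depends_on:
      - mysql

volumes:
  mysql-data: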

Why depends_on is important

The depends_on option ensures that the MySQL container starts before the Flask backend. Without this, Docker might start the backend first, causing connection errors because the database is not yet available. Using depends_on manages the startup order automatically.
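Note, however, that depends_on in its plain form only controls start order; it does not wait for MySQL to actually accept connections. If you need the backend to wait for a truly ready database, Docker Compose v2 (the Compose Specification) supports a health-gated variant — a sketch, not part of the original setup:

services:
  mysql:
    image: mysql:5.7
    healthcheck:
      # keep pinging the server until it accepts connections
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      retries: 5
  flask-app:
    depends_on:
      mysql:
        condition: service_healthy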

Explanation:

volumes → Persist data even if the container stops or is removed. Here, the mysql-data named volume stores the container's MySQL data on the host.

./message.sql:/docker-entrypoint-initdb.d/message.sql → Initializes the database with tables or seed data on container startup. Docker automatically executes scripts placed in the /docker-entrypoint-initdb.d/ directory (a sketch of such a script follows this list).

depends_on → Ensures the database container is ready before starting the backend.
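For context, an init script along these lines (a guess at the shape of message.sql, not the repo's actual contents) would create the table the app writes to:

-- create the table the Flask app inserts messages into
CREATE TABLE IF NOT EXISTS messages (
    id INT AUTO_INCREMENT PRIMARY KEY,
    message TEXT
);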

💡 Tip: You can use an online YAML formatter to ensure proper indentation and readability.

Running the Application

Save the file (:wq in Vim).

Stop any previously running containers, for example with docker kill.

Start the application with Docker Compose:
docker-compose up -d
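Once it's up, a few standard Docker Compose commands help you verify and manage the stack:

docker-compose ps        # list the running services
docker-compose logs -f   # follow the logs of both containers
docker-compose down      # stop and remove everything when you're done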

Final Deployment with Docker Compose:

After this, your two-tier Flask + MySQL application is fully deployed using Docker Compose.

Wrapping Up

We’ve successfully containerized a two-tier Flask + MySQL application using Docker and Docker Compose, pushed it to Docker Hub, and learned how to run it seamlessly with a single command.

With this workflow, you can share, run, and scale your applications anywhere, from your local machine to the cloud. 🚀

Next, you can explore Kubernetes and Helm to orchestrate larger applications and deploy them on cloud platforms like AWS, taking your containerization skills to the next level!

Keep experimenting, keep sharing, and let’s build smarter, faster, and scalable applications together.
