Introduction
What is a Multi-container Application?
A multi-container application is a software application made up of multiple containers that work together to perform a specific task or set of tasks. In Kubernetes this often takes the form of a multi-container pod or a set of cooperating services, and container orchestration tools like Kubernetes or Docker Compose are typically used to manage and orchestrate these containers.
Each container within a multi-container application is built to handle a particular component of the application's functionality. Containers that run within the same pod or service can share data and resources and communicate with one another over a common network namespace.
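To make this concrete, here is a minimal sketch of a Kubernetes pod that runs two containers side by side. The names and images (web, content-writer, nginx, busybox) are hypothetical and not taken from the project's actual manifests. Both containers share the pod's network namespace, so the sidecar could also reach the web server at localhost:80, and they exchange files through a shared emptyDir volume.

apiVersion: v1
kind: Pod
metadata:
  name: demo-multi-container
spec:
  volumes:
    - name: shared-data          # scratch volume that both containers mount
      emptyDir: {}
  containers:
    - name: web                  # main application container serving the shared content
      image: nginx:1.25
      ports:
        - containerPort: 80
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: content-writer       # sidecar container that produces content for the web container
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date > /html/index.html; sleep 10; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /html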
Advantages of using Multi-container Applications
Let's discuss some of the advantages of using multi-container applications to host our workloads:
Modularity - Multi-container apps encourage modularity by letting you divide your application into smaller, more manageable components. When each container represents a distinct part of the application, it is simpler to design, test, and maintain.
Isolation - Containers offer a high level of isolation between the various components of your application. Every container runs in its own isolated environment, complete with its own file system and dependencies. This isolation prevents conflicts and ensures that changes to one container don't affect the others.
Scalability - Individual containers can be scaled independently according to their resource needs. This flexibility lets you allocate additional resources to the parts of your application that require them, improving its overall performance (see the scaling sketch after this list).
Rapid Deployment - Containers are designed to be deployed quickly. They can be created, started, stopped, and updated with little overhead. This agility is essential for modern application development and continuous integration/continuous deployment (CI/CD) pipelines.
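As an illustration of independent scaling, the commands below assume the application's components are deployed as separate Kubernetes Deployments named frontend and backend (hypothetical names, not from this project's manifests); each one can be scaled without touching the other.

kubectl scale deployment frontend --replicas=5    # scale only the frontend component
kubectl scale deployment backend --replicas=2     # the backend keeps its own replica count
kubectl get deployments                           # confirm the new replica counts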
Prerequisites for the project
There are a few prerequisites needed before we can deploy the application (verification commands follow the list):
A GitHub account
An AWS account
AWS CLI configured
eksctl and kubectl installed
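Assuming the AWS CLI, eksctl, and kubectl are already on your machine, you can quickly confirm the tooling is ready with standard checks like these:

aws --version                 # AWS CLI is installed
aws sts get-caller-identity   # AWS CLI is configured with valid credentials
eksctl version                # eksctl is installed
kubectl version --client      # kubectl is installed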
Configuration
The GitHub repository containing the code must first be cloned. You can find the starter files here. Once the prerequisites have been satisfied and you have cloned the repository to your local machine, navigate to the project folder and use the cluster.yaml file with eksctl to create your cluster.
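A minimal sketch of those steps, where the repository URL and folder name are placeholders you replace with the actual starter-files repo:

git clone https://github.com/<your-username>/<starter-repo>.git   # replace with the actual repository URL
cd <starter-repo>                                                  # move into the cloned project folder
eksctl create cluster -f cluster.yaml                              # create the EKS cluster defined in cluster.yaml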
After your Kubernetes cluster has been successfully created, you can verify from the AWS console (under EKS) that it was built successfully.
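You can also verify from the command line; the commands below are standard eksctl and kubectl checks rather than steps from the original walkthrough:

eksctl get cluster    # list the EKS clusters in your account and region
kubectl get nodes     # confirm the worker nodes have joined and are Ready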
Run this command to apply the manifests in the config folder and create the services and pods:
kubectl apply -f ./config
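After applying the manifests, it's worth checking that everything started up; these are general-purpose kubectl checks, not commands from the original article:

kubectl get pods          # all pods should reach the Running status
kubectl get deployments   # confirm the desired number of replicas are available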
You can view your running services and also get the external IP address for your application from the service listing.
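A quick way to do this from the terminal, assuming the application is exposed through a LoadBalancer-type service:

kubectl get services    # the EXTERNAL-IP column shows the load balancer address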
You can navigate to the external IP address to view your running application.
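For example, once the EXTERNAL-IP value appears in the service listing, you can hit it with curl (the placeholder below stands in for that value):

curl http://<EXTERNAL-IP>    # should return the application's home page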
A word on cleanup
It is always best practice to delete your resources whenever they are not in use. Even if you will only be away from them for a couple of hours, you should still delete them. You can always start them up again when you're ready. This will help you avoid being billed for resources that you do not need.
You can simply run these commands to delete your cluster and all accompanying resources:
kubectl delete -f ./config
eksctl delete cluster -f cluster.yaml
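To confirm the teardown finished, a standard eksctl check (not part of the original walkthrough) is:

eksctl get cluster    # the deleted cluster should no longer appear in the list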