For a production deployment of a microservice web app written in different languages, with several Docker images and EC2 instances, you do not necessarily need Kubernetes (k8s).
I've always had these questions in my head: after looking at some K8s use cases, I was confused about why there are so many ways to get the same result. Here is a summary of what I've found; you are welcome to chime in if you have real-world experience with this topic.
The cheapest way:
It's like you build your own car from scratch, sourcing all the parts and assembling them yourself. That might seem cheaper in terms of initial component costs. However, buying a manufactured car gives you a reliable, tested vehicle with integrated features, safety standards, and a warranty, saving you significant time and effort.
You need to manually manage deployment (configuring port mappings and environment variables), scaling (putting a load balancer such as an ALB in front), service discovery for frontend-to-backend communication (a DNS server), updates (avoiding downtime), health checks, and so on.
The downsides can be heavy: it is time-consuming, error-prone, slow to scale, and difficult to monitor, as the sketch below illustrates.
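A minimal sketch of a hand-rolled update on a single host, using plain Docker commands from Python; the image name, container name, port, and environment variable are placeholders, and the gap between stopping the old container and starting the new one is exactly the downtime an orchestrator would avoid:

```python
import subprocess

IMAGE = "registry.example.com/backend:1.2.0"  # hypothetical image tag
CONTAINER = "backend"

def run(*cmd: str) -> None:
    # Fail loudly on any Docker error.
    subprocess.run(cmd, check=True)

# Assumes the old container is already running under the same name.
run("docker", "pull", IMAGE)
run("docker", "stop", CONTAINER)
run("docker", "rm", CONTAINER)
run("docker", "run", "-d",
    "--name", CONTAINER,
    "-p", "8080:8080",                               # manual port mapping
    "-e", "DATABASE_URL=postgres://db.example/app",  # manual environment variables
    "--restart", "unless-stopped",                   # crude self-healing
    IMAGE)
```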
- With AWS
You can just rent an EC2 instance; however,
you need Docker Compose and probably your own bash scripts to manage the deployment, scaling, and updates of your Docker containers (something like the sketch below).
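A minimal Python version of such an update script, assuming the services are defined in a standard docker-compose.yml on the instance:

```python
import subprocess

def redeploy(compose_file: str = "docker-compose.yml") -> None:
    # Pull the latest images referenced in the compose file, then
    # recreate only the containers whose image or config changed.
    subprocess.run(["docker", "compose", "-f", compose_file, "pull"], check=True)
    subprocess.run(["docker", "compose", "-f", compose_file, "up", "-d", "--remove-orphans"], check=True)

if __name__ == "__main__":
    redeploy()
```

Scaling beyond one machine and zero-downtime updates remain your problem: this simply recreates containers in place on a single host.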
If it's a simple application that doesn't evolve frequently, AWS Elastic Beanstalk (with or without Docker) can be enough.
- With GCP
App Engine (Standard or Flexible) or Cloud Run can play the same role.
The middle way on AWS:
- With AWS ECS
1. ECS Cluster: You create an ECS cluster, which is a logical grouping of your EC2 instances (or you can use Fargate and not manage instances directly).
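As a sketch with boto3 (the cluster name and region are assumptions), creating the cluster is a single API call:

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")  # region is an assumption

# A cluster is only a logical grouping; with Fargate there are no
# EC2 instances to provision or patch.
ecs.create_cluster(clusterName="shop-cluster")  # hypothetical cluster name
```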
2. Task Definitions: You define how each of your services (frontend, backend, etc.) should run in a "Task Definition." This includes the Docker image to use, resource requirements (CPU, memory), port mappings, environment variables, and more.
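A hedged sketch of a Fargate task definition for the backend, registered with boto3; the family name, ECR image URI, port, and environment variable are all placeholders:

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="backend",                    # hypothetical family name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",                # required for Fargate
    cpu="256",                           # 0.25 vCPU
    memory="512",                        # MiB
    containerDefinitions=[{
        "name": "backend",
        "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/backend:1.2.0",  # placeholder ECR image
        "essential": True,
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "environment": [{"name": "DATABASE_URL", "value": "postgres://db.example/app"}],
    }],
)
```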
3. Services: You create "Services" within your ECS cluster for your backend and frontend. A service ensures that a specified number of tasks (containers) are running and healthy at all times.
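Creating the backend service might look like this; the subnets, security group, and task count are placeholders for your own VPC setup:

```python
import boto3

ecs = boto3.client("ecs")

# Keep two copies of the latest "backend" task definition running;
# ECS replaces any task that stops or becomes unhealthy.
ecs.create_service(
    cluster="shop-cluster",
    serviceName="backend",
    taskDefinition="backend",   # latest registered revision of the family
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholder subnets
            "securityGroups": ["sg-0123456789abcdef0"],         # placeholder security group
            "assignPublicIp": "DISABLED",
        }
    },
)
```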
4. Deployment:
- You push your Docker images to ECR.
- You update the Task Definition for your backend service to use the new image.
- ECS handles the rolling update of your backend containers, ensuring minimal downtime.
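Assuming the new image has been pushed to ECR and a new task definition revision points at it, the rollout itself is one call (names and revision number are placeholders):

```python
import boto3

ecs = boto3.client("ecs")

# Switch the service to the new revision; ECS starts new tasks, waits
# for them to become healthy, then drains and stops the old ones.
ecs.update_service(
    cluster="shop-cluster",
    service="backend",
    taskDefinition="backend:42",  # hypothetical new revision
)

# Optionally block until the rolling update has finished.
ecs.get_waiter("services_stable").wait(cluster="shop-cluster", services=["backend"])
```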
5. Scaling:
You can configure auto-scaling for your ECS services based on metrics like CPU utilization or memory usage. ECS automatically adjusts the number of running tasks to handle the load.
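A sketch of target-tracking auto scaling on CPU for the backend service; the resource id, bounds, and target value are assumptions:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

resource_id = "service/shop-cluster/backend"  # placeholder cluster/service names

# Allow the service to run between 2 and 10 tasks.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

# Target tracking: keep average CPU utilization around 60%.
autoscaling.put_scaling_policy(
    PolicyName="backend-cpu-60",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```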
6. Load Balancing:
You can easily integrate your ECS services with an Application Load Balancer (ALB). ECS automatically registers and deregisters containers with the ALB as they are launched and terminated.
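On the ALB side this is just a target group of type `ip`; when the ECS service is created with a `loadBalancers` entry pointing at this target group's ARN, ECS keeps the targets in sync. VPC id, port, and health check path below are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Target group that an ALB listener forwards to; ECS registers each
# running task's IP here and removes it when the task stops.
elbv2.create_target_group(
    Name="frontend-tg",
    Protocol="HTTP",
    Port=3000,                      # container port of the frontend
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC
    TargetType="ip",                # required for Fargate (awsvpc) tasks
    HealthCheckPath="/",
    HealthCheckIntervalSeconds=30,
)
```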
7. Service Discovery:
ECS provides built-in service discovery through AWS Cloud Map, allowing your frontend to easily find the backend service using a DNS name.
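A sketch of the Cloud Map side, assuming a private DNS namespace called internal.local already exists in the VPC; the namespace id is a placeholder:

```python
import boto3

sd = boto3.client("servicediscovery")

# Register a DNS-discoverable service inside the existing namespace;
# ECS keeps A records pointing at the healthy tasks.
registry = sd.create_service(
    Name="backend",
    NamespaceId="ns-abc123example",  # placeholder namespace id
    DnsConfig={"DnsRecords": [{"Type": "A", "TTL": 10}]},
)

# The returned ARN goes into serviceRegistries=[{"registryArn": ...}]
# when the ECS service is created; the frontend can then simply call
# http://backend.internal.local.
print(registry["Service"]["Arn"])
```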
8. Health Checks:
ECS performs health checks on your containers and automatically restarts unhealthy ones. You can also configure application-level health checks.
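For example, a container-level health check can be added to the task definition shown earlier; the /healthz endpoint, interval, and thresholds are assumptions about the backend:

```python
# Passed inside containerDefinitions=[...] to register_task_definition.
backend_container = {
    "name": "backend",
    "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/backend:1.2.0",  # placeholder image
    "essential": True,
    "healthCheck": {
        "command": ["CMD-SHELL", "curl -f http://localhost:8080/healthz || exit 1"],
        "interval": 30,     # seconds between checks
        "timeout": 5,
        "retries": 3,       # failures before the container is marked unhealthy
        "startPeriod": 15,  # grace period while the app boots
    },
}
```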
9. Monitoring:
ECS integrates with CloudWatch for logging and metrics, providing a centralized view of your application's health and performance.
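As a sketch, shipping container logs to CloudWatch is a matter of adding a log configuration to the container definition; the log group, region, and stream prefix are placeholders:

```python
# Added under "logConfiguration" in the container definition; service-level
# CPU and memory metrics appear in CloudWatch without extra configuration.
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/backend",   # placeholder log group
        "awslogs-region": "eu-west-1",     # placeholder region
        "awslogs-stream-prefix": "backend",
    },
}
```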
**The K8s way:**
For some hands-on explanations, you can try:
https://www.youtube.com/watch?v=6wlj-x58lPM
(35 free credits for labs per month)
https://www.cloudskillsboost.google/course_templates/663?catalog_rank=%7B%22rank%22%3A1%2C%22num_filters%22%3A0%2C%22has_search%22%3Atrue%7D&search_id=45428223
For a microservice application like the one described here, GKE would be a powerful and flexible platform to run all of its components, providing the scalability and resilience needed for a production environment. GKE's control-plane pricing (the first zonal or Autopilot cluster per billing account is free, unlike EKS's flat control-plane fee) can also make it the more cost-effective choice for the Kubernetes route on a major cloud provider.
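For comparison with the ECS examples above, here is a minimal sketch of the same backend expressed as a Kubernetes Deployment on GKE, using the official Python client. It assumes `gcloud container clusters get-credentials` has already written your kubeconfig, and the names, image, and port are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()  # credentials obtained beforehand via gcloud

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="backend"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "backend"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "backend"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="backend",
                    image="europe-west1-docker.pkg.dev/my-project/app/backend:1.2.0",  # placeholder image
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)

# Rolling updates, scaling, and self-healing are then handled by the
# Deployment controller, much as an ECS service does for its tasks.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```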