So I had just finished building the backend for my fitness/progress tracker app: basically a Node app with PostgreSQL. It was already deployed on an EC2 machine that I would SSH into, pull the latest code, build a new image and run it. But that setup felt boring, so I decided to complicate it and make it production level, like how the big apps do it. I took reference from a production app I'm working on, read a few blogs, asked GPT, and landed on what I understand to be one of the most common/stable setups in the industry. idk if it's even right, please do correct me if it isn't.
So after the research, my data flow was ready and I planned the entire infra setup for the application. Below is a diagram I created in Eraser:
So I'll give a walkthrough of the flow of a request. When a user makes a request, it hits our ALB, a layer 7 load balancer sitting in the public subnet with its own security group that allows only HTTP/HTTPS.
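To make the security group part concrete, here's roughly what those rules look like in AWS CLI terms. This is just a sketch: the VPC ID, group name and group ID are all placeholders, not my actual values.

```bash
# Create the ALB's security group (VPC ID is a placeholder).
aws ec2 create-security-group \
  --group-name alb-sg \
  --description "ALB: allow HTTP/HTTPS only" \
  --vpc-id vpc-0123456789abcdef0

# Open only ports 80 and 443 to the internet; everything else is denied.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0alb000000000000a \
  --protocol tcp --port 80 --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress \
  --group-id sg-0alb000000000000a \
  --protocol tcp --port 443 --cidr 0.0.0.0/0
```

Since security groups deny by default, only the two ingress rules you add here are what gets through to the ALB.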
The ALB then forwards the request to a target group, which maintains a list of healthy EC2 instances. The target group constantly health-checks our instances by hitting the /health endpoint every 30 seconds. If an instance fails two consecutive checks, it's marked unhealthy and traffic stops flowing to it.
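The health-check behaviour described above maps directly onto target group settings. A hedged sketch of creating it with the AWS CLI (the name and VPC ID are placeholders):

```bash
# Target group with the /health check every 30s; an instance is pulled
# out of rotation after 2 consecutive failures.
aws elbv2 create-target-group \
  --name app-tg \
  --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0 \
  --health-check-path /health \
  --health-check-interval-seconds 30 \
  --unhealthy-threshold-count 2 \
  --healthy-threshold-count 2
```

The healthy threshold of 2 here is an assumption on my part; it controls how many consecutive passing checks a recovered instance needs before traffic flows to it again.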
These EC2 instances are managed by an auto scaling group, which ensures we always have a minimum of 2 instances running and can scale up to 10 based on CPU load. The instances sit in private subnets with no public IPs — they're protected by a security group which only accepts traffic from the ALB.
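For the auto scaling piece, this is roughly what the min 2 / max 10 setup plus CPU-based scaling looks like via the CLI. The launch template name, subnet IDs, target group ARN and the 60% CPU target are all assumptions/placeholders:

```bash
# ASG spanning two private subnets, registered with the ALB's target group.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name app-asg \
  --launch-template LaunchTemplateName=app-lt \
  --min-size 2 --max-size 10 \
  --vpc-zone-identifier "subnet-0priv0000000000aa,subnet-0priv0000000000bb" \
  --target-group-arns "arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/app-tg/PLACEHOLDER"

# Target-tracking policy: add/remove instances to keep average CPU near 60%.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name app-asg \
  --policy-name cpu-target \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
    "TargetValue": 60.0
  }'
```

Target tracking is nice here because AWS handles both scale-out and scale-in for you; you just pick the metric and the value.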
ok so at last it reaches our EC2 machine. There I've set up nginx. There wasn't any real need for it, but I just wanted to experiment, so nginx, acting as a reverse proxy inside the EC2 machine, proxies our request to our dockerized container. And if we ever run multiple containers, nginx can spread requests across them, even though I haven't set that up yet. Will do it some upcoming day, I guess.
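The multi-container proxying I mention above is basically just an nginx `upstream` block. A minimal sketch, assuming the container publishes port 3000 on the host (the paths and ports are examples, not my actual config):

```nginx
# /etc/nginx/conf.d/app.conf — illustrative only.
upstream app_backend {
    # One entry per running container; nginx round-robins between them.
    server 127.0.0.1:3000;
    # server 127.0.0.1:3001;  # a second container, once it exists
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;
        # Pass through the original host and client IP so the app's
        # logs don't just show nginx's address.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

So "setting it up some day" would mostly mean running a second container on another port and uncommenting one line.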
The app processes the request, and if it needs data, it talks to our PostgreSQL database on RDS. The database sits in its own private subnet with a security group that only lets our EC2 instances connect to it.
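The "only our EC2 instances can connect" part is a single ingress rule that references the app instances' security group instead of an IP range. Sketch below, with placeholder group IDs:

```bash
# Allow Postgres (5432) into the DB security group only from members
# of the app instances' security group (both IDs are placeholders).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0db0000000000000a \
  --protocol tcp --port 5432 \
  --source-group sg-0app000000000000a
```

Referencing a security group rather than CIDR blocks means the rule keeps working as the auto scaling group adds and removes instances.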
Once the app finishes processing, the response travels back through the same path: docker container → nginx → ALB → user. It might look like a long journey, and honestly it feels like one now that I'm writing it down. So that's the request flow.
And behind the scenes we have Promtail, an agent that ships logs to our monitoring server, where we have a Grafana + Loki setup. This helps us monitor the load and logs of our zero-user app.
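For anyone curious what the Promtail side looks like, here's a minimal config sketch. The Loki address, job label and log path are assumptions (the Docker log path shown is the default JSON-file driver location), so adjust to your own setup:

```yaml
# promtail-config.yml — minimal illustrative example.
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml  # where promtail remembers read offsets

clients:
  - url: http://MONITORING_SERVER:3100/loki/api/v1/push  # Loki's push endpoint

scrape_configs:
  - job_name: app-logs
    static_configs:
      - targets: [localhost]
        labels:
          job: fitness-tracker
          __path__: /var/lib/docker/containers/*/*-json.log
```

Promtail tails the matched files and pushes lines to Loki with those labels, which is what you then filter on in Grafana.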
And let me just wrap this up with the CI/CD part. Whenever I push or merge to the main branch, a GitHub Actions workflow kicks off. It builds a docker image of the latest code, pushes it to ECR (Amazon's container registry), then connects to each EC2 instance via SSM and does a rolling deployment: pulls the new image, stops the old container, starts the new one, runs a health check. It waits 45 seconds between instances so there's always at least one server handling traffic. Zero-downtime deployments.
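The rolling-deploy step from the workflow can be sketched as a loop over instances using `aws ssm send-command`. This mirrors the flow I described (pull, stop, start, health check, 45s pause), but the instance IDs, image URI and ports are placeholders, not my real pipeline:

```bash
# Rolling deploy sketch — one instance at a time, pause between them.
for instance in i-0aaa111122223333a i-0bbb444455556666b; do
  aws ssm send-command \
    --instance-ids "$instance" \
    --document-name "AWS-RunShellScript" \
    --parameters '{"commands":[
      "docker pull ACCOUNT.dkr.ecr.REGION.amazonaws.com/app:latest",
      "docker stop app || true",
      "docker rm app || true",
      "docker run -d --name app -p 3000:3000 ACCOUNT.dkr.ecr.REGION.amazonaws.com/app:latest",
      "sleep 5 && curl -fsS http://localhost/health"
    ]}'
  sleep 45   # let this instance prove healthy before touching the next
done
```

One caveat worth noting: `send-command` is asynchronous, so a stricter version would poll `aws ssm list-command-invocations` for success before moving to the next instance instead of a fixed sleep.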
So guys, any thoughts on how this could be better, or any mistakes I made? Any feedback would be appreciated.
