Good case study! I was just curious about the auto-scaling and failover you mentioned in the blog.
Are you not using multi-AZ configuration? Making the database and load balancer available in multiple availability zones would be a good solution for failover protection.
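For what it's worth, a minimal sketch of what that suggestion looks like with the AWS CLI — the instance identifier, load balancer name, and subnet IDs here are placeholders, not anything from the blog:

```shell
# Enable Multi-AZ on an existing RDS instance (synchronous standby in
# another availability zone; RDS fails over to it automatically).
aws rds modify-db-instance \
    --db-instance-identifier my-db-instance \
    --multi-az \
    --apply-immediately

# An Application Load Balancer becomes multi-AZ simply by being attached
# to subnets in at least two different availability zones.
aws elbv2 create-load-balancer \
    --name my-alb \
    --subnets subnet-aaaa1111 subnet-bbbb2222
```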
Also, for microservices running non-API jobs, you could use AWS Lambda — it would be a cost-effective solution.
The diagram in the blog doesn't show it, but we do, of course, use multi-AZ for our databases and load balancers.
Although we love Lambda, keep in mind that we came to AWS already having a nicely scalable and cheap worker solution. True, it's a custom one, but for now there is no business value in migrating that process to Lambda.
Regarding auto-scaling: based on our observations and predictions of platform utilization, we handle peaks quite well with k8s pod auto-scaling alone (under normal conditions, that free resource buffer is used for fast, smooth deployments, which happen quite often). For now, there is no reason to focus on improving that part either.
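For readers unfamiliar with k8s pod auto-scaling, this is the kind of setup being described — a Horizontal Pod Autoscaler that keeps a floor of spare replicas (the deployment buffer mentioned above) and scales out on CPU pressure. The deployment name and numbers are purely illustrative, not from the post:

```shell
# Create an HPA for a deployment: never fewer than 4 pods (spare capacity
# for peaks and rolling deployments), scale up to 20 when average CPU
# utilization across pods exceeds 70%.
kubectl autoscale deployment api-workers \
    --min=4 --max=20 --cpu-percent=70

# Inspect current scaling state and targets.
kubectl get hpa api-workers
```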