The diagram in the blog post doesn't show it, but we do, of course, use multi-AZ for our databases and load balancers.
Although we love Lambda, keep in mind that we came to AWS already having a nicely scalable and cheap worker solution. True, it's a custom solution, but for now there is no business value in migrating that process to Lambda.
Regarding auto-scaling: based on our observations and predictions of platform utilization, we handle peaks quite well with k8s pod auto-scaling alone (under normal conditions, that free resource buffer is used for fast, smooth deployments, which happen quite often). For now, there is no reason to focus on improving that part either.
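For readers unfamiliar with k8s pod auto-scaling, this is typically done with a HorizontalPodAutoscaler. A minimal sketch of the general pattern; the deployment name, replica bounds, and CPU target here are illustrative assumptions, not the actual configuration described in the comment:

```yaml
# Hypothetical HPA: scales the "worker" Deployment between 3 and 10
# replicas, targeting ~70% average CPU utilization across pods.
# Keeping minReplicas above strict demand leaves the free resource
# buffer mentioned above for fast rolling deployments.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```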