Ok thanks for your reply.
I understand your points, but I can't see the context or use case for putting multiple endpoints behind a single Lambda.
Maybe to launch a PoC quickly… ok.
But for the long term, cold starts, per-Lambda caching, per-endpoint monitoring, clarity in the AWS console, and Lambda-specific tuning (dedicated RAM, dedicated lifetime, timeout, dead-letter behaviour, etc.) all seem really useful in any case.
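For concreteness, here is a rough sketch of that per-endpoint tuning in a Serverless Framework `serverless.yml`. The function names, paths, memory values and the SNS dead-letter ARN are illustrative placeholders, not taken from this thread:

```yaml
# serverless.yml sketch: one function per endpoint, so memory,
# timeout and dead-letter behaviour can be tuned per route.
functions:
  getUser:
    handler: src/getUser.handler
    memorySize: 256   # lightweight read path
    timeout: 5
    events:
      - httpApi:
          method: GET
          path: /users/{id}
  importUsers:
    handler: src/importUsers.handler
    memorySize: 1024  # heavier batch workload
    timeout: 30
    # async-invocation failures go to an SNS topic (placeholder ARN)
    onError: arn:aws:sns:us-east-1:123456789012:import-dlq
    events:
      - httpApi:
          method: POST
          path: /users/import
```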
In fact, I don't see this kind of pattern as production-ready. Do you have some examples?
Indeed, as Rodolf said, combine Middy with the Serverless Framework and you'll have everything you need (flexibility for optimisation).
Anyway, it's interesting to discuss and see different points of view.
I also understand your points, even if I don't agree with them.
Both solutions have drawbacks and advantages… there is an in-between.
Thank you too for your reply and for taking the time.
Ok, but you didn't answer my questions.
What are the drawbacks of our solution compared to the one in this article?
I really want to understand in which specific cases and contexts you would use that (some examples?).
If you don't agree, there must be specific reasons you can detail.
I really don't want to annoy you. I'd just like to understand through concrete answers, so I can project myself and see whether I should use this approach instead.
I have already answered in my previous comments with details. I have already shared three examples where one Lambda per endpoint isn't the best solution.
If those drawbacks don't apply to you, that's good news, and the pattern you have followed is the best solution for you.
But that doesn't mean we all need to follow the same pattern. There isn't only one way to do it.
I really don't know what more I can add.
Ok, I didn't catch your resource-limit argument! Sorry. :-)
And I think what Rob said is that he agrees with you, but that we shouldn't use this pattern when latency matters for the UX, meaning we need to keep cold starts low so we can respond as quickly as possible. :-)