If your get_warm handler returns immediately, it's likely that you aren't warming as many instances as you think because one instance could handle several get_warm events in rapid succession.
Hi Erik, that's definitely an issue we need to address. Essentially, the last "get_warm" request must go out before the target Lambda finishes handling the first one.
What we can do is benchmark the interval between "get_warm" requests and multiply it by the number of requests being sent concurrently. We can then pass a parameter asking the target Lambdas to sleep for a few milliseconds, so that no container is freed up and reused for a later warm-up request.
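A rough sketch of the warmer side of that idea (the `get_warm`/`sleep_ms` event shape and the function names here are assumptions for illustration, not an established API):

```python
import json

def warm(lambda_client, function_name, concurrency, sleep_ms=75):
    """Fire `concurrency` overlapping async "get_warm" invocations.

    Asking each target to sleep for `sleep_ms` keeps every container
    busy until the last request has gone out, so none of them is
    freed up and reused to serve a later warm-up request.
    """
    payload = json.dumps({"get_warm": True, "sleep_ms": sleep_ms}).encode()
    for _ in range(concurrency):
        lambda_client.invoke(
            FunctionName=function_name,
            InvocationType="Event",  # async, so the requests overlap
            Payload=payload,
        )
```

In practice `lambda_client` would be a boto3 Lambda client, and `sleep_ms` would come from benchmarking the request interval as described above.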
What do you think about this solution?
That's more or less what my team does, although we don't do anything as sophisticated as automatically tuning the warmer with a feedback loop.
I think it would make more sense for AWS to let customers pay a fee to keep some idle capacity running, or at least to expose metrics on it. That would be simpler and more direct.
Nice. Agreed about AWS as well. It could be something like DynamoDB reserved capacity. I read from Jeremy Daly some time ago that the Lambda team has no plans to release something like this, but they're looking into ways to tackle cold starts in the future.
And that's why attempting to keep lambdas warm is a waste of effort and money.
If your service gets steady traffic and your code is optimized for restarts, cold starts should be a non-issue.
You could have the warm-up handler sleep for a few seconds so that Lambda is more likely to spin up new capacity. This is no different from keeping some idle capacity on hand to absorb spikes.
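The target-side half of that idea might look like the sketch below (the `get_warm` and `sleep_ms` event keys are hypothetical and should match whatever payload your warmer actually sends):

```python
import time

def handler(event, context=None):
    # Warm-up request: hold the container busy so that concurrent
    # warm-up invocations can't all be served by this same container.
    if event.get("get_warm"):
        time.sleep(event.get("sleep_ms", 0) / 1000.0)
        return {"warmed": True}
    return do_real_work(event)

def do_real_work(event):
    # Placeholder for the function's actual business logic.
    return {"result": "ok"}
```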
Not every service gets steady traffic, even ones where latency is important.
Also, it's not always a matter of optimal start-up time, because even applications with instant start time have to be deployed by the Lambda service before they can respond. That can take up to a full minute, if I recall the documentation correctly.
How do you guarantee the accuracy of the prediction? E.g., false positives or false negatives.
Hi Tony, in terms of time-series modeling, we would have a standard deviation and a confidence interval.
Let's say the predicted value is 10 containers and the standard deviation (SD) is 1. If the data follows a normal distribution, we can assume with roughly 99% confidence that the real number of containers needed will fall within 2.5 SDs of the predicted value.
Thus, raising the prediction to 13 (10 + 2.5 = 12.5, rounded up to 13) should give us at least 99% confidence that we've provisioned enough.
Of course, we can't expect invocation histories to follow a normal distribution, so we need to test which distribution it more closely matches in order to adjust the confidence interval appropriately.
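That padding calculation can be sketched with a one-sided normal quantile; the normality assumption here is the same caveat noted above, and the function name is just for illustration:

```python
from math import ceil
from statistics import NormalDist

def provision_target(predicted, sd, confidence=0.99):
    """Containers to provision so real demand stays at or below the
    target with the given one-sided confidence, assuming normally
    distributed prediction error."""
    z = NormalDist().inv_cdf(confidence)  # ~2.33 for 99% one-sided
    return ceil(predicted + z * sd)

print(provision_target(10, 1))  # ceil(10 + 2.33) = 13
```

For a non-normal invocation history, `z` would instead come from the quantile of whichever distribution the data better matches.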