So when the target server approaches capacity, you've just tripled the demand on the server, so it will crash faster.
Can you help explain the problem being solved with this?
It's not clear in this post what the backoff strategy is, but hopefully there's a story or configuration for that.
Definitely! Configuration will let you handle how many times you should retry (which depends mainly on your usage of the API). Once retry is released with our Node.js and Ruby agents, we will update our documentation with all the retry options.
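To make that concrete, here's a minimal sketch of what configurable retry with exponential backoff and jitter typically looks like (the option names are illustrative, not Bearer's actual API). The jitter also speaks to the retry-storm concern above: clients back off at randomized intervals instead of hammering a struggling server in lockstep.

```typescript
// Illustrative retry helper; option names are hypothetical, not Bearer's API.
interface RetryOptions {
  retries: number;     // retries allowed after the initial attempt
  baseDelayMs: number; // delay budget for the first retry
  maxDelayMs: number;  // cap on any single delay
}

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function withRetry<T>(fn: () => Promise<T>, opts: RetryOptions): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= opts.retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === opts.retries) break; // out of retries
      // Exponential backoff with full jitter: random delay in [0, cap).
      const cap = Math.min(opts.maxDelayMs, opts.baseDelayMs * 2 ** attempt);
      await sleep(Math.random() * cap);
    }
  }
  throw lastError;
}

// Usage (callApi is a placeholder for your own request function):
// withRetry(() => callApi(), { retries: 3, baseDelayMs: 200, maxDelayMs: 5000 });
```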
Sounds like an infrastructure problem, which can be solved with monitoring that gives appropriate alerts and triggers adding more resources to help with peak loads.
Is that really a monitoring problem?
It's more of a supply/demand problem, and when there's an issue on the supply side (the target server), the demand will go up 3x.
A better solution is to cache the call to the target, and revalidate it in a separate thread.
One easy way to do this is to use a reverse proxy with stale-while-revalidate.
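To illustrate the idea in application code (a sketch, not a drop-in implementation): in Node the "separate thread" becomes a background promise, while a reverse proxy or CDN can apply the same stale-while-revalidate pattern at the edge.

```typescript
// Sketch of stale-while-revalidate in application code. In Node the
// revalidation runs as a background promise rather than a literal thread.
type Fetcher<T> = (key: string) => Promise<T>;

interface Entry<T> {
  value: T;
  staleAt: number;     // past this time the value is stale but still servable
  refreshing: boolean; // ensure only one background revalidation at a time
}

class SwrCache<T> {
  private entries = new Map<string, Entry<T>>();

  constructor(private ttlMs: number, private fetcher: Fetcher<T>) {}

  async get(key: string): Promise<T> {
    const entry = this.entries.get(key);
    if (!entry) {
      // Cold cache: only the very first caller waits on the origin.
      const value = await this.fetcher(key);
      this.entries.set(key, { value, staleAt: Date.now() + this.ttlMs, refreshing: false });
      return value;
    }
    if (Date.now() > entry.staleAt && !entry.refreshing) {
      entry.refreshing = true;
      // Serve the stale value now; refresh in the background.
      this.fetcher(key).then(
        (value) => this.entries.set(key, { value, staleAt: Date.now() + this.ttlMs, refreshing: false }),
        () => { entry.refreshing = false; } // origin still down: keep serving stale
      );
    }
    return entry.value;
  }
}
```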
Retry is one simple feature for handling API downtime, but it's certainly not the only thing that will help your app stay up.
There's a great article on Dev.to on the main mechanisms for improving resilience in your app. At Bearer, we're starting with retry, but we plan to add them all to our Agent.
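For example, one mechanism that usually appears alongside retry is the circuit breaker: after repeated failures it stops calling the API for a cool-down period, giving the struggling server room to recover. A simplified sketch (no half-open state), offered as an illustration rather than a description of the Bearer Agent:

```typescript
// Simplified circuit breaker: after maxFailures consecutive errors,
// reject calls immediately for cooldownMs before trying the API again.
class CircuitBreaker {
  private failures = 0;
  private openUntil = 0;

  constructor(private maxFailures: number, private cooldownMs: number) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (Date.now() < this.openUntil) {
      throw new Error("circuit open: skipping call, try again later");
    }
    try {
      const result = await fn();
      this.failures = 0; // a success closes the circuit again
      return result;
    } catch (err) {
      this.failures++;
      if (this.failures >= this.maxFailures) {
        this.openUntil = Date.now() + this.cooldownMs;
        this.failures = 0;
      }
      throw err;
    }
  }
}
```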