If you're familiar with Azure Cosmos DB, you may have heard of (or run into) the 16500 error - "Request rate is large". Well, I knew about it, but I don't remember fixing it in the past, at least not with NestJS. What is it about? This error occurs when the request rate to the database exceeds the provisioned throughput. In my case, it happened while populating a collection with some seed data (around 200 new records).
What's the solution? There are a couple of ways to solve it:
Increase the provisioned Request Units (RUs) in Azure Cosmos DB - which, of course, implies additional costs.
Batch requests - instead of making individual requests, batch them together. This reduces the overall request rate and helps stay under the provisioned throughput.
Implement some sort of retry mechanism - configure the application to automatically retry requests that fail with the "Request rate is large" error.
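The second option can be as simple as splitting the seed data into smaller batches and inserting them one at a time. A minimal sketch - the chunk helper and the batch size are illustrative, not something prescribed by Cosmos DB:

```typescript
// Hypothetical helper: split an array into fixed-size batches, so a single
// insert never asks for more RUs than the collection can serve. The batch
// size is a guess you would tune per workload.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Example: 5 records in batches of 2 -> [[1, 2], [3, 4], [5]]
const batches = chunk([1, 2, 3, 4, 5], 2);
```

Each batch would then be inserted sequentially (for example with insertMany), possibly with a small pause between batches.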
Let's see how we can implement the third solution. In our case, we're going to insert some data into a collection and retry until (a) all records are inserted or (b) the retry limit is reached.
private async insertManyWithRetry(
  collection: Model<TestDocument>,
  data: any[],
  retries: number = 10,
  delay: number = 300
) {
  try {
    return await collection.insertMany(data);
  } catch (error) {
    Logger.error(error);
    if (error.code !== 16500) {
      throw new InternalServerErrorException(error);
    }
    if (retries <= 0) {
      Logger.log('Done retrying');
      return;
    }
    // Wait before retrying, so the RU budget can recover. A promisified
    // setTimeout lets the caller actually await the whole retry chain.
    await new Promise((resolve) => setTimeout(resolve, delay));
    // Filter out the documents that were already inserted before the error
    const processedRecordsIds = error.insertedDocs.map((elem) => elem.id);
    const unprocessedRecords = data.filter(
      (record) => !processedRecordsIds.includes(record.id)
    );
    return this.insertManyWithRetry(collection, unprocessedRecords, retries - 1, delay);
  }
}
Let's have a look at the function insertManyWithRetry. It receives the following parameters:
- collection - where we want to store data
- data - array of JSON documents
- retries - limit of retries; if none provided, the default value is 10
- delay - time to wait until the next retry; if none provided, the default value is 300 ms
First, we're going to insert data into the collection. This request can either be successful or generate an error, which will be caught by the catch block.
According to the MongoDB documentation, the insertMany method will return this sort of document in case of a write error:
BulkWriteError({
  "writeErrors" : [
    {
      "index" : 24,
      "code" : 16500,
      "errmsg" : "Response status code does not indicate success: TooManyRequests (429)",
      "op" : {
        "_id" : 13
      }
    }
  ],
  "writeConcernErrors" : [ ],
  "insertedDocs" : [ ],
  "nInserted" : 1,
  "nUpserted" : 0,
  "nMatched" : 0,
  "nModified" : 0,
  "nRemoved" : 0,
  "upserted" : [ ]
})
This tells us all we need to know about our request: the error message, the error code, and which operation / document generated the error. It also provides some stats about the documents (how many were inserted, modified, upserted, etc.) and, in insertedDocs, the array of documents that were successfully inserted.
In case of an error:
1) if the error code is not 16500 => log it and throw an InternalServerErrorException.
2) if the error code is 16500 => make another attempt at inserting the data, once the documents that were already inserted have been filtered out.
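The filtering in step 2 only needs the insertedDocs array from the error document. A standalone sketch - the record shape and the function name are made up for illustration:

```typescript
// Illustrative record shape; in the article, the records come from seed data.
interface SeedRecord {
  id: number;
  name: string;
}

// Keep only the records whose ids do not appear among the documents that
// the database managed to insert before throttling kicked in.
function filterUnprocessed(
  data: SeedRecord[],
  insertedDocs: SeedRecord[]
): SeedRecord[] {
  const insertedIds = new Set(insertedDocs.map((doc) => doc.id));
  return data.filter((record) => !insertedIds.has(record.id));
}
```

Using a Set instead of Array.includes keeps the lookup constant-time, which matters once the seed data grows beyond a few hundred records.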
In our case, the unprocessed records are stored in the unprocessedRecords variable. We can then invoke the function again with the new values - await this.insertManyWithRetry(collection, unprocessedRecords, retries - 1); - which:
- can be successful and insert all the remaining documents
- can generate a new 16500 error => re-apply the same mechanism (check the error, filter out inserted documents and retry with a delay of 300 ms)
That's it. The implementation can be enhanced by logging more information about errors and retry attempts, or by increasing the delay between attempts (exponential backoff), but the logic remains the same.
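One such enhancement, sketched here as a generic standalone helper (the names withRetry and isRetryable are mine, not part of the NestJS service above): retry an async operation while a predicate classifies the error as retryable, doubling the delay after every failed attempt.

```typescript
// Generic retry sketch with exponential backoff. The operation is retried
// while the predicate says the error is retryable and attempts remain;
// the delay doubles after every failed attempt.
async function withRetry<T>(
  operation: () => Promise<T>,
  isRetryable: (error: unknown) => boolean,
  retries: number = 10,
  delay: number = 300
): Promise<T> {
  try {
    return await operation();
  } catch (error) {
    if (!isRetryable(error) || retries <= 0) {
      throw error; // non-retryable error, or retry budget exhausted
    }
    // Promisified sleep, so the whole retry chain can be awaited
    await new Promise((resolve) => setTimeout(resolve, delay));
    return withRetry(operation, isRetryable, retries - 1, delay * 2);
  }
}
```

For our 16500 case, the predicate would be something like (error: any) => error?.code === 16500.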
Any thoughts?
Additional resources on the topic:
- MongoDB documentation
- Prevent rate-limiting errors for Azure Cosmos DB for MongoDB operations
- Common issues in Azure Cosmos DB
Thank you for reading! 🐾