OSD700: Working with BullMQ Worker/Queues (cont'd)

This week, I upgraded the notification system in Starchart by implementing background jobs that repeat every five minutes and query the database for DNS records or certificates that expire in less than a month. If the query returns any data, we check the last time we notified the user. If we have never notified the user, we send an initial reminder encouraging them to log in to the system and renew the expiring records.
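The core of it is a repeatable job plus a worker. Below is a simplified sketch of the wiring rather than Starchart's actual code; the queue name and the helper functions (findRecordsExpiringBefore, notifyUser, markNotified) are placeholders for the real database queries and mailer:

```ts
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 };

// Placeholders standing in for the real database queries and mailer.
declare function findRecordsExpiringBefore(
  cutoff: Date
): Promise<{ id: string; userEmail: string; lastNotified: Date | null }[]>;
declare function notifyUser(email: string, recordId: string): Promise<void>;
declare function markNotified(recordId: string, when: Date): Promise<void>;

// Schedule the expiration check to repeat every five minutes.
const expirationQueue = new Queue("expiration-check", { connection });
await expirationQueue.add(
  "check-expiring-records",
  {},
  { repeat: { every: 5 * 60 * 1000 } }
);

// The worker looks for records expiring within a month and sends an
// initial reminder to any user who has never been notified.
new Worker(
  "expiration-check",
  async () => {
    const cutoff = new Date(Date.now() + 30 * 24 * 60 * 60 * 1000);
    for (const record of await findRecordsExpiringBefore(cutoff)) {
      if (record.lastNotified === null) {
        await notifyUser(record.userEmail, record.id);
        await markNotified(record.id, new Date());
      }
    }
  },
  { connection }
);
```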

I am confident that initial reminders will be rolled out. Still, we need to update the value of lastNotified whenever an expiring DNS record or certificate is renewed so that users can be reminded again, and I still need to update the code to send a second reminder when records are about to expire in less than a week.
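Roughly, that follow-up check could be gated on both the expiry window and lastNotified. Here is a sketch of the idea; the field names and helpers are assumptions, not the actual schema:

```ts
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

// Placeholders for the real mailer and database update.
declare function notifyUser(email: string, recordId: string): Promise<void>;
declare function markNotified(recordId: string, when: Date): Promise<void>;

async function maybeSendSecondReminder(record: {
  id: string;
  userEmail: string;
  expiresAt: Date;
  lastNotified: Date | null;
}) {
  const expiresWithinWeek = record.expiresAt.getTime() - Date.now() < WEEK_MS;
  const notifiedRecently =
    record.lastNotified !== null &&
    Date.now() - record.lastNotified.getTime() < WEEK_MS;

  // Remind again only if the record expires soon and the user has not
  // been reminded within the last week.
  if (expiresWithinWeek && !notifiedRecently) {
    await notifyUser(record.userEmail, record.id);
    await markNotified(record.id, new Date());
  }
}
```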

Implementing background jobs to roll out notifications in this PR has been one of my most meaningful contributions to the project. It still needs improvement, and I intend to polish it so that any code built on top of it does not fail.

Top comments (1)

Olanitori Tobi

Subject: Issue with BullMQ and Redis - Event Triggering for Multiple Jobs

Hi TD,

I hope you're doing well. I'm currently facing an issue with BullMQ and Redis while sending emails with PDF attachments generated using Puppeteer. I would appreciate any insights or suggestions to help me resolve this problem.

Here's a breakdown of the issue:

  1. I'm using BullMQ and Redis to handle the email sending process. I have a queue where I load jobs using the addBulk method. Each bulk can contain up to 1000 jobs.

  2. The queue passes the jobs to a worker, which picks them one at a time to execute the process. The process involves generating a certificate with Puppeteer and then sending it as an email attachment to the recipient.

  3. After the process is completed for each job, the worker is supposed to listen to the on("completed") event. When this event is triggered, the worker updates the database to mark the certificate as "sent". In case of a failure, the on("failed") event is triggered, and the database is updated to mark the certificate as "not sent" (a stripped-down sketch of this setup follows the list).

  4. The problem I'm facing is that all the certificates are being sent successfully, but the on("completed") event is only triggered for the first item in the queue. It ignores the rest of the jobs, leaving their status as "pending" in the database.

  5. It's worth mentioning that everything works as expected on my local server. The issue arises only when I push the code to the production environment.
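For reference, here is a stripped-down sketch of that setup; the queue name and the certificate/email/database helpers are placeholders, not my actual code:

```ts
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 };

// Placeholders for my actual Puppeteer, mailer, and database code.
declare function generateCertificatePdf(recipient: string): Promise<Buffer>;
declare function emailCertificate(recipient: string, pdf: Buffer): Promise<void>;
declare function markCertificate(jobId: string, status: "sent" | "not sent"): Promise<void>;

// 1. Load the jobs in bulk (up to 1000 at a time).
const recipients = ["alice@example.com", "bob@example.com"];
const certQueue = new Queue("certificates", { connection });
await certQueue.addBulk(
  recipients.map((recipient) => ({ name: "send-certificate", data: { recipient } }))
);

// 2. The worker processes jobs one at a time: generate the PDF with
//    Puppeteer, then email it as an attachment.
const worker = new Worker(
  "certificates",
  async (job) => {
    const pdf = await generateCertificatePdf(job.data.recipient);
    await emailCertificate(job.data.recipient, pdf);
  },
  { connection, concurrency: 1 }
);

// 3. Update the database when each job completes or fails.
worker.on("completed", (job) => {
  if (job.id) markCertificate(job.id, "sent");
});
worker.on("failed", (job) => {
  if (job?.id) markCertificate(job.id, "not sent");
});
```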

I have already checked the following:

  • Ensured that the worker is not completing or closing prematurely after processing the first certificate.
  • Verified that the certificates are correctly loaded into the queue using the addBulk method.
  • Confirmed that the Redis server has the necessary configuration (maxmemory-policy=noeviction) to avoid unexpected key removal.

Despite these checks, I haven't been able to identify the cause of the issue. I suspect it may be related to the event listener setup or some configuration difference between my local environment and the production setup.

If anyone has experienced a similar issue or has any suggestions on how to debug and resolve this problem, I would greatly appreciate your help. Thank you in advance for your time and support.

Best regards,
Tobi Olanitori