<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bar-Dov</title>
    <description>The latest articles on DEV Community by Bar-Dov (@lbd).</description>
    <link>https://dev.to/lbd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3247100%2F8ce1f8b6-0943-4490-95b1-040d7e8f734b.png</url>
      <title>DEV Community: Bar-Dov</title>
      <link>https://dev.to/lbd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lbd"/>
    <language>en</language>
    <item>
      <title>Avoiding Redis Crashes with BullMQ: Memory Monitoring Basics</title>
      <dc:creator>Bar-Dov</dc:creator>
      <pubDate>Mon, 14 Jul 2025 09:15:00 +0000</pubDate>
      <link>https://dev.to/lbd/avoiding-redis-crashes-with-bullmq-memory-monitoring-basics-2848</link>
      <guid>https://dev.to/lbd/avoiding-redis-crashes-with-bullmq-memory-monitoring-basics-2848</guid>
      <description>&lt;p&gt;If you’re using BullMQ with Redis in production, you’ve probably dealt with a "wtf happened" moment after Redis hit its memory limit. 🧨😐&lt;/p&gt;

&lt;p&gt;We hit this once (on a Sunday, naturally). Since then, we track Redis memory usage per instance and get notified when we hit ~80% of capacity.&lt;/p&gt;

&lt;p&gt;💡 Turns out some Redis hosts don’t expose their max memory, so we set a manual threshold in MB. Helps us catch issues way before Redis shuts the door on writes.&lt;/p&gt;
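&lt;p&gt;For the curious, here’s a minimal sketch of that check in TypeScript. The function name, the sample &lt;code&gt;INFO&lt;/code&gt; output, and the thresholds are illustrative; in real use the string would come from your Redis client (e.g. &lt;code&gt;redis.info('memory')&lt;/code&gt; with ioredis):&lt;/p&gt;

```typescript
// Parse the text returned by Redis's INFO memory command and compute the
// used-memory percentage. If the server reports maxmemory:0 (no limit set,
// or the host hides it), fall back to a manual cap given in MB.
function memoryUsagePercent(info: string, manualCapMb: number): number {
  const read = (key: string): number => {
    const match = info.match(new RegExp("^" + key + ":(\\d+)", "m"));
    return match ? Number(match[1]) : 0;
  };
  const used = read("used_memory");
  const max = read("maxmemory") || manualCapMb * 1024 * 1024;
  return (used / max) * 100;
}

// Canned sample for illustration: ~858 MB used of a 1 GiB maxmemory.
const sample = "used_memory:900000000\r\nmaxmemory:1073741824\r\n";
const percent = memoryUsagePercent(sample, 1024);
if (percent >= 80) {
  console.log("Redis memory at " + percent.toFixed(1) + "% - time to alert");
}
```

&lt;p&gt;Run it on an interval (a plain &lt;code&gt;setInterval&lt;/code&gt; or a cron job is enough) so you get paged before Redis starts refusing writes.&lt;/p&gt;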

&lt;p&gt;We later baked this into our own job monitoring tool. If you want to keep things simple and focused on BullMQ, we made &lt;a href="https://upqueue.io/" rel="noopener noreferrer"&gt;Upqueue.io&lt;/a&gt; to alert on stuff like memory spikes, stuck queues, and missing workers.&lt;/p&gt;

&lt;p&gt;The Max Memory monitors page looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80tmoj7tdgrg722egtst.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80tmoj7tdgrg722egtst.png" alt=" " width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But tool or not—do monitor your Redis memory. Especially if you’re chaining long-running jobs or dealing with a backlog.&lt;/p&gt;

&lt;p&gt;What are you all using? Custom scripts? Hosted dashboards?&lt;/p&gt;

</description>
      <category>bullmq</category>
      <category>redis</category>
      <category>monitoring</category>
      <category>queues</category>
    </item>
    <item>
      <title>How We Handle BullMQ Queue Backlogs (Without Stressing Over It)</title>
      <dc:creator>Bar-Dov</dc:creator>
      <pubDate>Sun, 06 Jul 2025 12:11:07 +0000</pubDate>
      <link>https://dev.to/lbd/how-we-handle-bullmq-queue-backlogs-without-stressing-over-it-5d7k</link>
      <guid>https://dev.to/lbd/how-we-handle-bullmq-queue-backlogs-without-stressing-over-it-5d7k</guid>
      <description>&lt;p&gt;One of the most overlooked issues when working with BullMQ (or any job queue, really) is quietly building backlogs. It doesn’t crash the system—but it slows things down until something breaks.&lt;/p&gt;

&lt;p&gt;After a few rough incidents (hello weekend alerts 😅), we started tracking how many jobs are waiting, prioritized, or delayed per queue. If the number crosses a certain threshold, we get alerted and scale up.&lt;/p&gt;

&lt;p&gt;🔧 Pro tip: If you’re not monitoring queue backlog yet, set a soft threshold based on your infra. Even just 100 pending jobs on a medium-sized queue might be enough to indicate a bottleneck.&lt;/p&gt;
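&lt;p&gt;As a rough sketch, the check itself is tiny. The names and the default threshold here are illustrative; the counts would come from BullMQ’s &lt;code&gt;queue.getJobCounts('waiting', 'prioritized', 'delayed')&lt;/code&gt;:&lt;/p&gt;

```typescript
// The subset of queue.getJobCounts() we care about for backlog alerts.
interface JobCounts {
  waiting: number;
  prioritized: number;
  delayed: number;
}

// True when the total backlog crosses a soft threshold.
// 100 is an assumption - tune it to your infra and queue size.
function isBacklogged(counts: JobCounts, threshold = 100): boolean {
  return counts.waiting + counts.prioritized + counts.delayed >= threshold;
}

// In real use: const counts = await queue.getJobCounts('waiting', 'prioritized', 'delayed');
console.log(isBacklogged({ waiting: 80, prioritized: 15, delayed: 10 })); // true (105 jobs)
```

&lt;p&gt;Poll this per queue, and only alert when it stays true for a couple of checks in a row so a short burst doesn’t page you.&lt;/p&gt;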

&lt;p&gt;We ended up building a minimal dashboard just for this—eventually turned it into &lt;a href="https://upqueue.io/" rel="noopener noreferrer"&gt;Upqueue.io&lt;/a&gt;. Might help if you’re looking for something more lightweight than full observability platforms. No config hell, just watch your queues.&lt;/p&gt;

&lt;p&gt;Below is the page where you can easily set backlog alerts:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsfog6lxl1mtuc6x9j1ul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsfog6lxl1mtuc6x9j1ul.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Would love to hear how others handle backlog monitoring. Manual scripts? Prometheus? No clue and vibes?&lt;/p&gt;

</description>
      <category>bullmq</category>
      <category>bullboard</category>
      <category>queue</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>BullMQ UI: Why Bull Board May Not Be Enough (And How Upqueue.io Helps)</title>
      <dc:creator>Bar-Dov</dc:creator>
      <pubDate>Wed, 25 Jun 2025 15:56:16 +0000</pubDate>
      <link>https://dev.to/lbd/bullmq-ui-why-bull-board-may-not-be-enough-and-how-upqueueio-helps-1nd</link>
      <guid>https://dev.to/lbd/bullmq-ui-why-bull-board-may-not-be-enough-and-how-upqueueio-helps-1nd</guid>
      <description>&lt;h2&gt;When your queue looks “quiet,” it might actually be failing silently&lt;/h2&gt;

&lt;h2&gt;TL;DR&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bull Board&lt;/strong&gt; is a solid starting point with basic job visibility.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://upqueue.io/" rel="noopener noreferrer"&gt;Upqueue.io&lt;/a&gt;&lt;/strong&gt; is built for production observability: &lt;strong&gt;alerts&lt;/strong&gt;, &lt;strong&gt;metrics&lt;/strong&gt;, &lt;strong&gt;child-job support&lt;/strong&gt;, and &lt;strong&gt;UI polish&lt;/strong&gt; make a difference.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;If you're running &lt;a href="https://docs.bullmq.io/" rel="noopener noreferrer"&gt;BullMQ&lt;/a&gt; in production, you’ve likely got jobs, workers, and retries configured. But all the functionality in the world won’t help if you can’t &lt;strong&gt;see what’s actually happening&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That’s where a true &lt;strong&gt;BullMQ UI&lt;/strong&gt; becomes essential.&lt;/p&gt;

&lt;p&gt;A good BullMQ UI should offer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Live status (active, delayed, completed, failed)&lt;/li&gt;
&lt;li&gt;✅ Job retry/delete controls&lt;/li&gt;
&lt;li&gt;✅ Job context with JSON/logs&lt;/li&gt;
&lt;li&gt;✅ Historical trends for queues&lt;/li&gt;
&lt;li&gt;✅ Alerts for failures, stalled queues, memory/connection issues&lt;/li&gt;
&lt;li&gt;✅ Visibility into child (nested) jobs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s compare &lt;strong&gt;Bull Board&lt;/strong&gt; — the go‑to starter UI — with &lt;strong&gt;&lt;a href="https://upqueue.io/" rel="noopener noreferrer"&gt;Upqueue.io&lt;/a&gt;&lt;/strong&gt;, which is built for observability-first production use.&lt;/p&gt;

&lt;h2&gt;Feature Comparison: Bull Board vs &lt;a href="https://upqueue.io/" rel="noopener noreferrer"&gt;Upqueue.io&lt;/a&gt;&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kgylp9yxg70e482b3oz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kgylp9yxg70e482b3oz.png" alt="Image description" width="720" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Quick Setup Example&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
// queue.ts
import { Queue, Worker } from 'bullmq';

export const reportQueue = new Queue('report', {
  connection: { host: 'localhost', port: 6379 },
});

new Worker('report', async job =&amp;gt; {
  return await generatePDF(job.data);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All good—but to run safely, you need a UI that shows &lt;em&gt;when&lt;/em&gt; something breaks.&lt;/p&gt;

&lt;p&gt;Upqueue.io connects to your Redis and instantly provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Failed Jobs monitor&lt;/li&gt;
&lt;li&gt;✅ Connection &amp;amp; Memory alerts&lt;/li&gt;
&lt;li&gt;✅ Missing Workers tracking&lt;/li&gt;
&lt;li&gt;✅ Backlog alerts&lt;/li&gt;
&lt;li&gt;✅ Child‑job tab and retry controls&lt;/li&gt;
&lt;li&gt;✅ Clean, developer-friendly interface&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Oh, and bonus—&lt;strong&gt;a real team behind it&lt;/strong&gt;, with fast support and new features dropping regularly.&lt;/p&gt;




&lt;p&gt;Learn more and explore the dashboard: &lt;a href="https://upqueue.io/" rel="noopener noreferrer"&gt;upqueue.io&lt;/a&gt;&lt;/p&gt;

</description>
      <category>bullmq</category>
      <category>redis</category>
      <category>queue</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Using BullMQ to Power AI Workflows (with Observability in Mind)</title>
      <dc:creator>Bar-Dov</dc:creator>
      <pubDate>Sun, 08 Jun 2025 10:29:38 +0000</pubDate>
      <link>https://dev.to/lbd/using-bullmq-to-power-ai-workflows-with-observability-in-mind-1ieh</link>
      <guid>https://dev.to/lbd/using-bullmq-to-power-ai-workflows-with-observability-in-mind-1ieh</guid>
      <description>&lt;p&gt;As AI-based applications become more sophisticated, managing their asynchronous tasks becomes increasingly complex. Whether you’re generating content, processing embeddings, or chaining together multiple model calls—queues are essential infrastructure.&lt;/p&gt;

&lt;p&gt;And for many Node.js applications, BullMQ has become the go-to queueing library.&lt;/p&gt;

&lt;p&gt;In this post, we’ll walk through why BullMQ fits well into AI pipelines, and how to handle some of the pitfalls that come with running critical async work at scale.&lt;/p&gt;


&lt;h2&gt;Why BullMQ Makes Sense for AI Workflows&lt;/h2&gt;


&lt;p&gt;AI jobs are often:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CPU/GPU intensive&lt;/strong&gt; (model inference)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Long running&lt;/strong&gt; (fine-tuning, summarizing large chunks)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Chainable&lt;/strong&gt; (one output feeds the next)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Best handled asynchronously&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Queues help break down these processes into manageable, distributed units.&lt;/p&gt;

&lt;h2&gt;Example: A Simple AI Pipeline with BullMQ&lt;/h2&gt;

&lt;p&gt;Let’s say you’re building a summarization service.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A user submits a document.&lt;/li&gt;
&lt;li&gt;The job is queued.&lt;/li&gt;
&lt;li&gt;A worker generates the summary.&lt;/li&gt;
&lt;li&gt;A follow-up task sends it via email.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here’s how you might structure that with BullMQ:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// queues.ts
import { Queue } from 'bullmq';
import { connection } from './redis-conn';

export const summarizationQueue = new Queue('summarize', { connection });
export const emailQueue = new Queue('email', { connection });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// producer.ts
await summarizationQueue.add('summarizeDoc', {
  docId: 'abc123',
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// summarization.worker.ts
import { Worker } from 'bullmq';
import { summarizationQueue, emailQueue } from './queues';

new Worker('summarize', async job =&amp;gt; {
  const summary = await generateSummary(job.data.docId);

  await emailQueue.add('sendEmail', {
    userId: job.data.userId,
    summary,
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can imagine how this might expand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Queue for transcription&lt;/li&gt;
&lt;li&gt;Queue for sentiment analysis&lt;/li&gt;
&lt;li&gt;Queue for search index updates&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What to Watch Out For&lt;/h2&gt;

&lt;p&gt;When you're handling large numbers of AI jobs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Memory usage spikes can crash your Redis instance.&lt;/li&gt;
&lt;li&gt;Worker failures can leave queues silently stuck.&lt;/li&gt;
&lt;li&gt;Job retries without proper limits can pile up fast.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are hard to track without some sort of observability layer.&lt;/p&gt;

&lt;h2&gt;Good Practices for AI Queue Systems&lt;/h2&gt;

&lt;p&gt;✅ Use &lt;code&gt;removeOnComplete: true&lt;/code&gt; on jobs to avoid memory buildup&lt;br&gt;
✅ Set &lt;code&gt;attempts&lt;/code&gt; and &lt;code&gt;backoff&lt;/code&gt; on your long-running jobs&lt;br&gt;
✅ Monitor failed jobs &amp;amp; queue lengths&lt;br&gt;
✅ Alert on missing workers or a high backlog&lt;/p&gt;
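
&lt;p&gt;As a sketch, those practices map onto BullMQ’s job options roughly like this (the specific numbers are illustrative, not recommendations):&lt;/p&gt;

```typescript
// Default job options applying the checklist above. Field names match
// BullMQ's JobsOptions; the specific values are illustrative.
const defaultJobOptions = {
  removeOnComplete: true,                        // don't let finished jobs pile up in Redis
  removeOnFail: 1000,                            // keep only the last 1000 failures for debugging
  attempts: 3,                                   // retry a few times, not forever
  backoff: { type: "exponential", delay: 5000 }, // ~5s, 10s, 20s between attempts
};

// In real use you'd pass this once when creating the queue, e.g.:
// new Queue('summarize', { connection, defaultJobOptions });
console.log(defaultJobOptions.backoff.type); // "exponential"
```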

&lt;p&gt;Even a minimal dashboard that shows which queues are stuck or which workers are down can save hours.&lt;/p&gt;

&lt;p&gt;We had to build one ourselves. If you’re looking for something simple and focused, we put together a tool called &lt;a href="https://upqueue.io/" rel="noopener noreferrer"&gt;Upqueue.io&lt;/a&gt; that visualizes BullMQ jobs and alerts you when things go wrong. But whether it’s a custom script, Prometheus, or something else - &lt;strong&gt;just make sure you’re not flying blind.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;BullMQ is a great fit for AI apps. But the more you scale, the more you need to see what’s going on.&lt;/p&gt;

&lt;p&gt;Don’t let your GPT worker crash at 3am without you knowing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitor early. Sleep better.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>bullmq</category>
      <category>redis</category>
      <category>node</category>
      <category>queues</category>
    </item>
  </channel>
</rss>
