BullMQ is a powerful and flexible job queuing system that helps manage background tasks in Node.js applications. As your application grows, it’s crucial to ensure the scalability and resilience of your job processing infrastructure.
In this guide, you’ll learn how to set up BullMQ workers on AWS using ElastiCache for Redis as your data store and ECS (Elastic Container Service) for deploying and managing the queue workers.
Setting Up BullMQ Queues and Workers Locally
Before deploying to AWS, let’s start with setting up a BullMQ worker in your local environment.
1. Start a Redis Container in Docker
To run Redis locally, use Docker:
docker run --name redis-container -p 6379:6379 -d redis
This command pulls the latest Redis image from Docker Hub (if it is not already available locally) and starts a Redis container, exposing port 6379 for communication with BullMQ.
2. Initialize a Node.js Project
Next, let’s set up your Node.js project. Navigate to your desired project directory and initialize the project using:
npm init -y
For this tutorial, we’ll follow this simple project structure:
BullMQ/
├── src/
│ ├── worker.js
├── .env
├── Dockerfile
└── package.json
3. Install Required Packages
Now install the two required packages, bullmq (the queue library) and dotenv (for loading environment variables from a .env file):
npm i bullmq dotenv
4. Configure the Worker
Create a file worker.js inside the src/ directory.
This file will contain the code for setting up the BullMQ queue and processing jobs. Start by importing the necessary modules and configuring Redis:
// worker.js
import dotenv from "dotenv";
import { Queue, Worker } from "bullmq";

dotenv.config();

const connection = {
  host: process.env.REDIS_HOST,
  port: process.env.REDIS_PORT,
};

// Initialize the queue
const myQueue = new Queue('myQueue', { connection });
Next, create a .env file in your root directory with the following configuration:
REDIS_HOST=localhost
REDIS_PORT=6379
This connects BullMQ to the local Redis instance running on localhost:6379.
5. Add a Demo Job to the Queue
To demonstrate how the queue works, we’ll add a demo job if the queue is empty.
Add the following code to worker.js
:
// worker.js
// Add a job to the queue if it's empty (for demo purposes)
myQueue.getJobCounts().then((counts) => {
  if (counts.waiting === 0 && counts.active === 0) {
    console.log('Queue is empty, adding a demo job.');
    myQueue.add('myJob', { foo: 'bar' });
  }
});
This snippet checks the queue for unprocessed jobs. If no jobs are waiting or active, it adds a demo job with some sample data ({ foo: 'bar' }).
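The emptiness check above can be isolated into a small pure helper, which makes the seeding rule easy to test without a Redis connection (a sketch; shouldSeedQueue is a name introduced here, not part of BullMQ):

```javascript
// Returns true when the queue has no waiting or active jobs,
// i.e., when it is safe to add the demo job.
// `counts` is the object resolved by Queue#getJobCounts().
function shouldSeedQueue(counts) {
  return counts.waiting === 0 && counts.active === 0;
}
```

In worker.js you would then write: myQueue.getJobCounts().then((counts) => { if (shouldSeedQueue(counts)) myQueue.add('myJob', { foo: 'bar' }); });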
6. Add a Worker to Process Jobs
Now, let’s create a worker to process jobs from the queue:
// worker.js
// Worker logic to process jobs
const worker = new Worker('myQueue', async (job) => {
  console.log(`Processing job ${job.id} with data:`, job.data);
  await new Promise((resolve) => setTimeout(resolve, 1000)); // Simulate job processing delay
  console.log(`Job ${job.id} processed successfully`);
}, { connection });

// Error handling for failed jobs
worker.on('failed', (job, err) => {
  // `job` may be undefined here (e.g., if it could not be retrieved), so guard the access
  console.error(`Job ${job?.id} failed with error: ${err.message}`);
});

// Event listener for job completion
worker.on('completed', (job) => {
  console.log(`Job ${job.id} completed successfully`);
});
// Event listener for job completion
worker.on('completed', (job) => {
console.log(`Job ${job.id} completed successfully`);
});
The worker processes each job from the queue, logging the job's id and data. It also handles job failures and logs successful completions. For demonstration, each job takes 1 second to process.
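In production you would rarely let a failed job die on the first error. BullMQ supports per-job retry options passed to queue.add(); the snippet below sketches a typical configuration and illustrates the doubling delay schedule it produces (the backoffDelay helper is introduced here purely for illustration):

```javascript
// Options a job can be added with to enable retries with exponential backoff:
// myQueue.add('myJob', { foo: 'bar' }, retryOptions);
const retryOptions = {
  attempts: 3,                                   // retry a failed job up to 3 times
  backoff: { type: "exponential", delay: 1000 }, // roughly 1s, 2s, 4s between attempts
};

// Illustration of the doubling schedule: delay before retry attempt N.
function backoffDelay(baseDelay, attemptsMade) {
  return baseDelay * 2 ** (attemptsMade - 1);
}
```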
7. Continuously Add Jobs for Testing
To simulate a constant stream of jobs, add a new job every second:
// worker.js
let count = 0;
setInterval(() => {
  count++;
  myQueue.add('myJob', { foo: count });
  console.log('adding a demo job.', count);
}, 1000);
This will automatically add a new job to the queue every second, helping you test the worker’s ability to process multiple jobs.
8. Run the Project
Finally, add a start script to your package.json. Since worker.js uses ES module import syntax, also set "type": "module" so Node.js accepts the import statements:
"type": "module",
"scripts": {
  "start": "node src/worker.js"
}
Now, you can run the project using the following command:
npm run start
This will start the worker, connect it to Redis, and continuously add and process jobs.
Creating a Redis Database Using ElastiCache
To ensure that your BullMQ workers can communicate seamlessly with Redis on AWS, we’ll set up a Redis database using Amazon ElastiCache.
ElastiCache is a fully managed in-memory data store that supports Redis, enabling high-performance and scalable job processing.
1. Create a Security Group for Redis and ECS Communication
Before setting up Redis, we need to create a security group that will allow communication between your ElastiCache Redis instance and your ECS workers.
1. Search for Security Groups: In the AWS console, use the search bar to find Security Groups.
2. Create a New Security Group:
Click on the Create Security Group button.
Name the Security Group: For example, you can name it BullMQ-Worker.
Select Your VPC: Ensure you choose the correct VPC where your ECS tasks and ElastiCache will reside.
3. Configure Inbound Rules:
Add inbound rules to allow your ECS tasks to communicate with Redis. For instance, add a rule to allow traffic on port 6379 (the default Redis port).
Keep outbound rules as they are by default.
4. Create the Security Group: After configuring the settings, click on the Create Security Group button.
2. Set Up the Redis Database Using ElastiCache
Now that the security group has been created, let’s proceed to set up the Redis database on ElastiCache.
1. Search for ElastiCache: In the AWS console search bar, type ElastiCache and select it from the results.
2. Select Redis OSS Caches: On the ElastiCache dashboard, choose Redis from the left sidebar under the Redis OSS caches option.
3. Create a New Redis Cache: Click on the Create Redis OSS Cache button.
4. Configuration Settings:
Deployment Option: Select Design your own cache.
Creation Method: Choose Cluster cache.
Cluster Mode: Set this to Disabled (for single-node Redis setups).
5. Cluster Info:
Cluster Name: Provide a name for your Redis cluster, such as redisCluster.
6. Cluster Settings:
Node Type: For demonstration purposes, you can select a cost-effective instance like cache.t2.micro.
Number of Replicas: Choose 1 replica to ensure redundancy.
7. Subnet Group Settings:
Subnet Group: If you have a default subnet group, select it. Otherwise, you may need to create one.
To create a new subnet group, go to the left sidebar under Configuration, select Subnet Groups, and create a group with the default settings.
8. Next Steps: After configuring the above settings, click on the Next button.
3. Configure Security and Access Control
1. Encryption in Transit: Enable Encryption in transit to secure communication between your Redis instance and workers.
2. Access Control (AUTH):
Enable AUTH Default User Access. This requires you to set a password, also known as an auth token.
Set Password: Enter a password in the Auth Token input box. This password will be used to authenticate connections to Redis from your BullMQ worker.
3. Select Security Group:
In the Selected Security Groups section, select the security group you created earlier (BullMQ-Worker).
4. Other Settings: Leave other settings as they are by default.
4. Review and Create the Redis Cache
Review Configurations: Review your configurations on the final page to ensure all the settings are correct.
Create Redis Cache: Once you are satisfied with the settings, click on Create to set up your Redis cache. It will take some time to create your Redis database.
5. Update the Worker's Connection Configuration
Since we have moved Redis from the local instance to ElastiCache, we need to pass a new configuration to the worker.
// worker.js
const connection = {
  host: process.env.REDIS_HOST,
  port: process.env.REDIS_PORT,
  password: process.env.REDIS_PASSWORD,
  tls: { rejectUnauthorized: false },
};
Here we have added password and tls because we enabled AUTH and encryption in transit. Remember to set REDIS_PASSWORD in your environment and point REDIS_HOST at the cluster's Primary Endpoint. Note that rejectUnauthorized: false disables server certificate verification; for production, prefer tls: {} so the ElastiCache certificate is validated.
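Since the connection shape now differs between local development (no password, no TLS) and ElastiCache, it can help to build it from the environment in one place. A minimal sketch, assuming a TLS flag you define yourself (buildRedisConnection is a helper name introduced here, not a BullMQ API):

```javascript
// Build BullMQ connection options from environment variables.
// Pass useTls = true for ElastiCache with encryption in transit enabled.
function buildRedisConnection(env, useTls) {
  const connection = {
    host: env.REDIS_HOST,
    port: Number(env.REDIS_PORT) || 6379, // values from .env are strings
  };
  if (env.REDIS_PASSWORD) connection.password = env.REDIS_PASSWORD;
  if (useTls) connection.tls = {}; // empty object enables TLS with certificate verification
  return connection;
}
```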
Building a Docker Container for Your BullMQ Worker
The first step in deploying your BullMQ worker on AWS is containerising the application using Docker.
A Docker container ensures that your worker will run consistently across different environments, making it ideal for cloud deployments.
Let’s create a Dockerfile to build the container.
1. Add a Dockerfile to Your Project
In the root directory of your project, create a new file called Dockerfile and add the following content:
# Dockerfile
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY src/ ./src/
CMD [ "npm", "start" ]
2. How the Dockerfile Works
1. Base Image: The node:18 image provides Node.js 18 with npm preinstalled. (For a smaller container, you could use a slim or alpine variant of the image.)
2. Working Directory: The WORKDIR /app
command sets the working directory inside the container where your application will reside.
3. Copy Dependencies:
The COPY package*.json ./ command copies your package.json and package-lock.json files into the container.
4. Install Dependencies: The RUN npm install command installs the necessary packages.
5. Copy Source Code: The COPY src/ ./src/ command copies your source files (such as worker.js) into the container.
6. Run the Worker: The CMD ["npm", "start"] command runs the worker using the npm start script, as defined in your package.json.
This setup ensures that your worker runs efficiently in a Docker container, providing consistency whether you’re running locally or in production on AWS.
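One optional refinement before building: a .dockerignore file in the project root keeps local artifacts, and especially your .env secrets, out of the image and out of the build context (a suggested addition, not part of the original tutorial files):

```
# .dockerignore
node_modules
.env
```

With the selective COPY lines above, neither entry would land in the image anyway, but the ignore file speeds up the build context upload and protects you if the Dockerfile later gains a broader COPY.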
Pushing your Docker container to Amazon ECR
To deploy your BullMQ worker on Amazon ECS, you first need to push your Docker container to the Amazon Elastic Container Registry (ECR).
ECR is a fully managed Docker container registry that makes it easy to store, manage, and deploy container images.
1. Create a Repository on Amazon ECR
Login to AWS Console: Start by logging into your AWS Console.
Search for ECR: In the search bar, type “ECR” and select Amazon Elastic Container Registry from the search results.
Create a New Repository:
On the ECR dashboard, click the Create Repository button.
Enter a name for your repository. For example, you can name it bullmq-worker or any other name that reflects your project.
Leave the default settings or customize them as needed, then hit the Create button.
2. Push Your Docker Container to ECR
Once your repository is created, you’ll need to push your Docker image to ECR. Follow these steps:
Select Your Repository: On the ECR dashboard, find and click on the repository you just created.
View Push Commands: Inside your repository, click on the View push commands button.
Execute Push Commands: Follow the instructions provided by AWS ECR to push your Docker image, and make sure to run these commands in the directory that contains your Dockerfile.
Deploying your Queue workers on AWS using ECS
After pushing your Docker container to Amazon ECR and setting up your Redis database using ElastiCache, the final step is deploying your BullMQ workers on Amazon ECS.
ECS (Elastic Container Service) enables you to run and manage your Docker containers efficiently in the cloud.
Let’s go through the process of deploying your workers.
1. Create a Task Definition for Your Worker
1. Search for ECS: In the AWS console, type ECS in the search bar and navigate to the ECS dashboard.
2. Select Task Definitions: From the left sidebar, click on Task Definitions.
3. Create a New Task Definition:
Click the Create new task definition button.
Task Definition Family Name: Enter a name for your task definition, for example, bull-worker-tasks.
4. Infrastructure Requirements:
In the Operating System/Architecture section:
Choose Linux/x86_64 if your Docker container was built on Intel or AMD-based machines.
Select Linux/ARM64 if your container was built on Apple M1/M2 chips or ARM-based processors.
5. Container Details:
Container Name: Give your container a name, such as bull-worker.
Image URI: In the Image URI input field, paste the URI of the repository from Amazon ECR, which you created earlier.
6. Port Mappings:
Add a port mapping for port 6379 (Redis’s default port).
Set the App Protocol to HTTP.
7. Environment Variables:
Add the following environment variables:
REDIS_HOST: This can be found in the details of your ElastiCache Redis database. Go to the ElastiCache dashboard, select your cluster, and copy the Primary Endpoint (without the trailing :6379 port suffix).
REDIS_PORT: Set this to 6379.
REDIS_PASSWORD: This is the password (auth token) you set when creating your Redis database on ElastiCache.
8. Create the Task Definition: After filling in the details, click the Create button.
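A typo in any of these variables would only surface later as an opaque connection timeout in the ECS logs. To fail fast instead, the worker can validate its environment at startup; a minimal sketch (assertEnv is a helper name made up here):

```javascript
// Throw at startup if any required environment variable is missing or empty.
function assertEnv(env, required) {
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
}

// In worker.js, right after dotenv.config():
// assertEnv(process.env, ["REDIS_HOST", "REDIS_PORT", "REDIS_PASSWORD"]);
```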
2. Create an ECS Cluster
1. Navigate to Clusters: In the ECS dashboard, select Clusters from the left sidebar.
2. Create a New Cluster:
Click the Create cluster button.
Enter a name for your cluster, such as bullMQ-worker-cluster.
Hit Create to finalize the cluster creation.
3. Select Your Cluster: Go back to the clusters homepage and select the cluster you just created.
3. Create and Configure an ECS Service
1. Create a Service: In your cluster’s detail page, scroll down to the Services section and click Create Service.
2. Deployment Configuration:
Scroll down to the Deployment Configuration section.
Under Task Definition Family, select the task definition you created earlier, e.g., bull-worker-tasks.
Give your service a unique name, such as bullMQ-worker-service.
3. Networking: In the Networking section, select the security group you created earlier for the ElastiCache cluster (BullMQ-Worker).
4. Create the Service: Once everything is configured, hit the Create button.
4. Monitor Your ECS Cluster and Worker Logs
Wait for Tasks to Start: Return to your ECS cluster’s detail page and wait until your task starts running. ECS will automatically pull the Docker image from ECR and spin up your BullMQ worker.
View Logs:
- After the task is running, go to the Services section, and click on the name of your service.
- On the service details page, click on the Logs tab to monitor the logs from your BullMQ worker. Here, you’ll be able to see logs for the queue jobs being processed.
Scaling Your Queue Workers and Redis Database on AWS
Now that we’ve deployed BullMQ queue workers and a Redis database on AWS, it’s crucial to understand how to scale them to handle growing workloads efficiently.
With Amazon ECS and ElastiCache, you can scale both horizontally (adding more workers/nodes) and vertically (upgrading instance sizes).
1. Scaling Your Redis Database
Amazon ElastiCache allows you to scale Redis horizontally (adding more nodes or replicas) or vertically (upgrading to larger instance types).
Vertical Scaling for Redis
Vertical scaling of Redis involves upgrading the node type to a larger instance to handle more load. Here’s how you can do it:
Go to ElastiCache Dashboard: From your AWS console, search for ElastiCache.
Select Your Redis Cluster: In the Redis section, select the Redis cluster you want to scale.
Modify Cluster:
Click on Modify at the top of the cluster details page.
Under Node Type, select a larger instance size (e.g., from cache.t2.micro to cache.r5.large).
Apply Changes: After making the selection, choose whether to apply the changes during the next maintenance window or immediately.
Vertical scaling is best suited for situations where Redis is CPU-bound or memory-bound due to the large number of jobs being processed.
Horizontal Scaling for Redis
Horizontal scaling involves adding more Redis replicas to increase availability and distribute the load.
Go to ElastiCache Dashboard: Open the Redis section in the ElastiCache dashboard.
Select Your Redis Cluster: Open the cluster's details page and scroll down to the nodes section, where you'll find the Add node button. Click it, and a popup will ask how many replicas you want to create.
Each new replica will serve read requests, reducing the load on the primary node.
Apply Changes: Confirm the change, and the new replicas will be provisioned after a short wait.
2. Scaling Your ECS Queue Workers
Scaling the ECS workers horizontally allows you to add more workers that process jobs concurrently, while vertical scaling involves increasing the power of individual ECS instances.
Horizontal Scaling for ECS Workers
To scale your queue workers horizontally, you can add more tasks to your ECS service, enabling multiple instances of your worker container to run in parallel.
Go to ECS Console: In the AWS console, search for ECS and navigate to the cluster that you’ve set up for BullMQ workers.
Select Your Service: Go to the Services tab and select the BullMQ worker service.
Update Desired Task Count:
Click Update Service.
In the Desired Task field, increase the number of tasks (e.g., from 1 to 3).
Apply Changes: Click Update to scale your service.
ECS will automatically spin up more instances of your worker to handle an increased number of jobs in the queue. Horizontal scaling is particularly useful when the queue length increases, as additional workers can process jobs simultaneously.
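One caveat when scaling in: ECS stops a task by sending SIGTERM and, after a grace period, SIGKILL. To avoid losing the job a worker is processing at that moment, the worker should close gracefully. A minimal sketch using BullMQ's worker.close(), which waits for in-flight jobs before resolving:

```javascript
// Gracefully close a BullMQ worker: stop taking new jobs and
// wait for the currently processing job to finish.
async function shutdown(worker) {
  console.log("SIGTERM received, closing worker...");
  await worker.close();
}

// In worker.js, register for the signal ECS sends on scale-in or redeploy:
// process.on("SIGTERM", () => shutdown(worker).then(() => process.exit(0)));
```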
Vertical Scaling for ECS Workers
Since we selected AWS Fargate (serverless), we don't need to worry about manually scaling the underlying infrastructure (e.g., EC2 instances), because Fargate automatically provisions the required resources for each task.
However, if you want more control over your infrastructure and have chosen Amazon EC2 instances instead, you can scale vertically by upgrading the ECS instance type.
Go to ECS Console and select Clusters.
Select Cluster Capacity Provider: Choose the Capacity Provider associated with your cluster.
Modify EC2 Instance Type:
In the Auto Scaling configuration, you can change the instance type (e.g., from t3.medium to m5.large).
Update Service: Apply changes to your ECS service for the updated instance type to take effect.
Vertical scaling is best used when your worker instances need more CPU or memory resources to process individual jobs faster.
Bonus:
In this article, we’ve explored how to set up a scalable queue worker infrastructure on AWS using BullMQ. Exciting, right? But here’s the thing — if you noticed, we manually scaled the workers and Redis instances. So, you might be wondering: what about auto-scaling? Don’t worry, I’ve got you covered!
In my next article, I’ll dive deep into how to implement auto-scaling for your queue workers on AWS, making your infrastructure truly hands-off and ready to handle spikes in traffic automatically. Whether you’re a beginner eager to learn or a seasoned pro looking to fine-tune your setup, this guide will have something for everyone.
So, if you want to master auto-scaling with BullMQ and AWS, be sure to subscribe and follow me on LinkedIn — you won’t want to miss this!
Stay tuned for the next chapter in building scalable architectures with BullMQ!
Follow Me for More Insights
If you found this guide helpful and want to learn more about web development, Software Engineering, and other exciting tech topics, follow me here on Medium. I regularly share tutorials, insights, and tips to help you become a better developer. Let’s connect and grow together in this journey of continuous learning!
Connect with Me
Twitter: https://twitter.com/Bhaskar_Sawant_
Stay tuned for more articles, and feel free to reach out if you have any questions or suggestions for future topics.
Cheers, thanks for reading! 😊
Happy coding! 🚀
This article was originally published on Medium. It is also available on Hashnode and Dev.to.