I recently encountered an issue with a background command running on cron. While the command functioned as expected in my local environment, it exhibited unexpected behavior in our stage environment, where some records were processed multiple times.
// The command structure is something like this:
const records = await RecordsDAO.find({ status: 'pending' });
for (const record of records) {
  // process it
}
Understanding the Issue
Upon investigation, I discovered that our deployment setup, which includes horizontal auto-scaling via CI, was running multiple instances of the command concurrently. Every instance executed the same query, so they all picked up the same pending records, and the overlap resulted in duplicated processing.
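To make the race concrete: two instances wake up at the same moment, both run the same find query before either has marked anything as done, and both process the full batch. Here's a minimal, self-contained simulation of that scenario (the in-memory store, record shape, and instance names are made up for illustration; run it as an ES module for the top-level await):

// Simulated race: two "instances" query the same store concurrently.
type PendingRecord = { id: number; status: string };

const store: PendingRecord[] = [
  { id: 1, status: 'pending' },
  { id: 2, status: 'pending' },
];

async function find(filter: { status: string }): Promise<PendingRecord[]> {
  // Stands in for RecordsDAO.find: returns every matching record.
  return store.filter((r) => r.status === filter.status);
}

async function runInstance(name: string): Promise<void> {
  const records = await find({ status: 'pending' });
  for (const record of records) {
    console.log(`${name} processing record ${record.id}`);
  }
}

// Both instances read the full pending set before either finishes,
// so every record is logged (i.e., processed) twice.
await Promise.all([runInstance('instance-A'), runInstance('instance-B')]);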
Implementing a Solution
To address this challenge, I drew on my recent MySQL studies: an excellent course by Aaron Francis on PlanetScale. The course's "Examples" section presents a technique for ensuring exclusive ownership of records during processing.
By introducing a new column called owner_id, populated with unique identifiers such as process IDs or random UUIDs, each instance of the command can claim ownership of a distinct set of records. (In my implementation, I named the column batch_id.) Here's a simplified TypeScript snippet demonstrating the implementation:
// randomUUID and the UUID type come from node:crypto
async updateBatchIdOnPendingRecords(): Promise<UUID> {
  // Generate a unique identifier for this run and stamp it on every
  // pending record that no other instance has claimed yet.
  const batch_id: UUID = randomUUID();
  await this.recordsDAO.updateMany(
    {
      status: 'pending',
      batch_id: null,
    },
    {
      $set: {
        batch_id,
      },
    },
  );
  return batch_id;
}
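With this in place, each run first claims a batch and then processes only the records it owns. Here's a rough sketch of how the calling code changes (the find call mirrors the first snippet; the exact DAO API is an assumption):

// Hypothetical caller: claim a batch, then fetch and process only
// the records stamped with this instance's batch_id.
const batch_id = await this.updateBatchIdOnPendingRecords();
const records = await this.recordsDAO.find({ status: 'pending', batch_id });
for (const record of records) {
  // process it -- no other instance holds the same batch_id
}

Because the updateMany filter requires batch_id to be null, each pending record can be claimed by at most one instance, even when several run the update concurrently.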
Testing the Solution
To validate this solution without redeploying to the stage environment, I used the --scale flag in Docker Compose. I had never used this flag before, so I learned a few things along the way:
- The container_name option cannot be used when scaling a service, as each instance must have a unique name.
- When a service exposes a port, a port range must be allocated to ensure that each container receives a unique port assignment.
Here's a snippet from my updated docker-compose.yml file:
version: '3'
services:
  node:
    # container_name: records-node
    build: ./docker
    ports:
      - '3001-3009:3000'
      # - '3003:3000'
    volumes:
      - ./:/var/www/app/
    env_file:
      - .env.example
    working_dir: /var/www/app
    networks:
      - default
    extra_hosts:
      - "host.docker.internal:host-gateway"
Scaling the service to three containers is achieved with the following command:
docker compose up --scale node=3
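To check that each replica really got a unique name and host port, list the running containers:

docker compose ps

Compose generates the container names itself (something like <project>-node-1 through <project>-node-3), which is exactly why the container_name option had to be removed.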
Conclusion
I really love learning new things and applying them. The owner_id column solved the problem, and it was a quick and easy implementation. While there are simpler ways to test solutions, exploring new methods enhances the learning process and deepens understanding. Of course, the test could have been as simple as adding another entry for the command in the crontab. However, as I mentioned earlier, I enjoy applying my new learnings, and I had been eagerly waiting for an opportunity to use the --scale flag in a real scenario.
Thank you for reading! I hope this post has been insightful or enjoyable.