BullMQ 3.0 released

We have just released a new major version of BullMQ. As with all major versions, it includes new features but also some breaking changes that we would like to highlight in this post. If you are using TypeScript (as we strongly recommend), you will get compiler errors if you are affected by these changes.

Better Rate-Limiter

The rate-limiter provided by previous versions of BullMQ was functional and did its job. It was based on delayed jobs: as soon as a queue became rate-limited, all the jobs would be moved to the delay set with a carefully calculated delay, so that they would be picked up again once the rate limit had expired.

This approach worked for many scenarios, and also made possible the implementation of a rudimentary rate-limiter based on group ids. However, we were never fully satisfied with this solution. It seemed unnecessary to move jobs from one set to another, which in some degenerate cases would result in very high CPU usage due to jobs being moved back and forth between these two sets.

Instead, we knew that the optimal solution would be to not perform any work at all as long as a queue was rate-limited. The problem was the groups: since we could not know whether there were jobs belonging to other groups further down the wait list, we could not simply stop processing the queue, as some of those groups might not be rate-limited yet.

In the end, we came to the conclusion that it was better to sacrifice the group-based limiter in favor of a simpler and more robust global rate-limiter. The group rate-limiter will still be available, in a much better form and with more functionality, in the Pro version.
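For reference, the global rate-limiter is configured directly on the worker via the limiter option. The sketch below (assuming a local Redis connection; the queue name and processor are just placeholders) limits processing to at most 10 jobs per second:

```typescript
import { Worker } from "bullmq";

// Assumed local Redis instance; adjust to your environment.
const connection = { host: "localhost", port: 6379 };

// Process at most 10 jobs per 1000 ms across this worker.
const worker = new Worker(
  "paint",
  async (job) => {
    return job.data;
  },
  {
    connection,
    limiter: { max: 10, duration: 1000 },
  },
);
```

Since the limiter now lives entirely on the worker side, no jobs need to be shuffled into the delay set while the queue is rate-limited.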

Dynamic Rate-Limiter

A highly requested feature has been the ability to dynamically activate rate-limiting in a queue based on conditions detected during processing. For example, we can now activate the rate-limiter when an HTTP response comes back with the 429 (Too Many Requests) status code:

import { Worker } from "bullmq";

const rateLimitWorker = new Worker(
  "rate-limit",
  async (job) => {
    const result = await callExternalAPI(job.data);
    if (rateLimited(result)) {
      // Rate-limit the queue for the duration requested by the API...
      await rateLimitWorker.rateLimit(result.expireTime);
      // ...and throw the special error so the job is moved back to the
      // wait list instead of being marked as failed.
      throw Worker.RateLimitError();
    }
  },
  { connection },
);
Example of a dynamic rate-limiter.

You can read this blog post for more details on how to implement rate-limiters with BullMQ.

Better typing for Backoff strategies

We made a small change to the way backoff strategies (how the delay is calculated when a job fails and should be retried) are defined. Instead of passing an object with the strategies keyed by strategy name, we have simplified it so that you pass a single callback function:

const myWorker = new Worker("test", async () => {}, {
  connection,
  settings: {
    // The delay grows linearly with the number of attempts made so far.
    backoffStrategy: (attemptsMade: number) => {
      return attemptsMade * 500;
    },
  },
});
New way to define custom backoff strategies.

Replaced "cron" with the more generic "pattern"

Since the introduction of custom repeat strategies, we have used the option "pattern", which is generic enough to cover any kind of strategy. With this new version of BullMQ we have finally removed the "cron" option, so if you are still using it, please rename it to "pattern" when migrating to version 3.0.

await myQueue.add(
  'submarine',
  { color: 'yellow' },
  {
    repeat: {
      // Six-field cron: runs every second during 03:15 each day.
      pattern: '* 15 3 * * *',
    },
  },
);
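Because "pattern" is handed verbatim to the repeat strategy, it is not limited to cron expressions. As a sketch (the interval-based interpretation below is our own illustration, not a built-in strategy), a custom strategy receives the current timestamp and the job's repeat options, and returns the timestamp of the next run:

```typescript
// Hypothetical custom repeat strategy: interpret "pattern" as a plain
// interval in milliseconds rather than a cron expression.
const everyPattern = (millis: number, opts: { pattern?: string }): number => {
  const interval = Number(opts.pattern);
  // Round the current time up to the next interval boundary.
  return Math.ceil(millis / interval) * interval;
};

// The strategy would be registered through the queue or worker settings,
// e.g. new Queue("paint", { connection, settings: { repeatStrategy: everyPattern } }).
```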


Follow me on Twitter if you want to be the first to know when I publish new tutorials and tips for Bull/BullMQ.

And remember, subscribing to Taskforce.sh is the best way to help support future BullMQ development!