BullMQ Redis™ Module
I am excited to announce that I started the development of a Redis™ Module that implements BullMQ in Zig. You can access it here: https://github.com/taskforcesh/bullmq-redis
I have had this idea since Redis modules were announced back in version 4.0. A module provides the means to resolve several limitations in BullMQ that are either very difficult or outright impossible to solve as a layer on top of Redis, and it opens the door for other languages to use BullMQ, instead of it being exclusively available for NodeJS.
In this post I would like to present some of the improvements the module brings compared to the current BullMQ implementation. They may not be many, but for me they are a big breakthrough, and they are just the tip of the iceberg of what I am planning for future releases.
You may also wonder why Zig instead of plain C, the language Redis itself is written in. The reason is pretty simple: I think Zig has enough improvements over C to make the coding more pleasant, simplify memory management and ultimately help produce more robust software. Plus, I thought it was a good excuse to learn something new after all these years working in the NodeJS ecosystem.
So let's review some of the advantages of using a Redis module instead of Lua scripts as in the current NodeJS implementation.
Blocking Commands
Probably the most important feature, and the one that opens up a whole set of improvements, is that you can write blocking commands as complex as you need. With plain Redis you are limited to a handful of blocking commands, and Lua scripts do not support blocking at all. For reliable queues, BRPOPLPUSH is a useful command and the one we use in BullMQ to implement robust queues. However, this command is too simple for the kind of features BullMQ provides. For example, as soon as we start to process a job we need to lock it so that it is not moved back to the wait list as stalled. Solving this in Bull requires extra mechanisms to avoid race conditions. With a module it becomes trivial: we just take the lock before returning from the blocking command.
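To illustrate the idea, here is a minimal sketch of an atomic "fetch and lock". The names (`SketchQueue`, `fetchNextAndLock`) are illustrative, not the module's actual API; the point is that moving a job to the active list and taking its lock happen in one step:

```typescript
interface Job {
  id: string;
  lockToken?: string;
  lockExpiresAt?: number;
}

class SketchQueue {
  wait: Job[] = [];
  active: Job[] = [];

  add(job: Job): void {
    this.wait.push(job);
  }

  // In the module, moving the job to "active" and taking the lock happen
  // in one atomic step, so a stalled-job checker can never observe an
  // unlocked job sitting in the active list.
  fetchNextAndLock(lockMs: number): Job | undefined {
    const job = this.wait.shift();
    if (!job) return undefined;
    job.lockToken = Date.now().toString(36) + Math.random().toString(36).slice(2);
    job.lockExpiresAt = Date.now() + lockMs;
    this.active.push(job);
    return job;
  }
}
```

With BRPOPLPUSH the move and the lock are two separate round trips, and it is in that gap that the race conditions mentioned above can appear.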
Locking may not be so exciting, but let me tell you about delayed jobs. In BullMQ (as well as in older Bull), we need a separate connection to Redis that listens for a special delay event. Based on these events we schedule a call to a Lua script that moves all delayed jobs that have "expired" back to the wait list so that the workers can process them. This works together with the blocking command: it blocks while the wait list is empty, and as soon as a delayed job is moved back to the wait list, the blocking command completes, moving the job to the active list so that processing can start. This feature is currently accomplished with the special class "QueueScheduler".
With the new Redis module we can get rid of the QueueScheduler class altogether. Our new blocking command "QNEXT" returns the next job to process in the queue, delayed or not, and blocks until such a job is available.
In BullMQ for node, when it is time to process a delayed job, the job is moved to the head of the wait list so that it is executed as soon as possible. But this mechanism is not perfect: if you have many delayed jobs close together, some jobs will jump the queue ahead of others, which makes it practically impossible to give any guarantees about job order. Since QNEXT will always return the next job, order is maintained independently of how many delayed jobs you have or how close together they are.
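The ordering semantics can be sketched like this (a model of the behavior, not the module's internal data structures; `readyAt` is a hypothetical field name): among the jobs whose timestamp has passed, the one with the earliest timestamp always comes out first.

```typescript
interface DelayedJob {
  id: string;
  readyAt: number; // timestamp (ms) at which the job becomes eligible
}

// Return the eligible job with the earliest readyAt, removing it from
// the list; return undefined if no job is eligible yet. QNEXT would
// block in that case instead of returning.
function nextJob(jobs: DelayedJob[], now: number): DelayedJob | undefined {
  let best: DelayedJob | undefined;
  for (const job of jobs) {
    if (job.readyAt <= now && (!best || job.readyAt < best.readyAt)) {
      best = job;
    }
  }
  if (best) jobs.splice(jobs.indexOf(best), 1);
  return best;
}
```

No matter how many delayed jobs expire close together, repeated calls drain them strictly in `readyAt` order, which is exactly the guarantee the head-of-list trick cannot give.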
What about working with batches of jobs? Here we have the same problem: since the blocking commands in Redis can only move one element per call, it is not really possible to implement batch support, at least not optimally. With the new module you can just call "QNEXT" with a maximum batch size and it will return a complete batch of jobs in one call.
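The batch semantics are simple to sketch (again illustrative, not the actual command signature): one call takes up to `max` jobs from the head of the wait list in a single step, where BRPOPLPUSH would need one round trip per job.

```typescript
// Take up to `max` jobs from the head of the wait list in one operation;
// if fewer than `max` jobs are waiting, return whatever is there.
function nextBatch<T>(wait: T[], max: number): T[] {
  return wait.splice(0, max);
}
```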
Finally, another cool improvement is the code for handling stalled jobs. As in BullMQ, if a worker works on a job for longer than the lock duration, it must renew the lock or the job will be moved back to the wait list. This process is implemented in Bull with a polling mechanism, which also lives in the QueueScheduler class. As mentioned before, this class is no longer needed: lock expiration is now detected automatically using Redis's own keyspace notification system, and the jobs are moved back to the wait list when needed. This all happens inside the module, so no external polling is required anymore.
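A minimal sketch of the idea, assuming locks are tracked by an expiry timestamp (the class and method names are hypothetical): instead of a poller scanning the active list, a handler runs when a lock expires and requeues the stalled job. Here the clock is passed in explicitly to keep the logic testable; in the module, the trigger is the expiry notification itself.

```typescript
class StalledSketch {
  wait: string[] = [];
  private locks = new Map<string, number>(); // jobId -> lock expiry (ms)

  lock(jobId: string, now: number, lockMs: number): void {
    this.locks.set(jobId, now + lockMs);
  }

  // A healthy worker renews its lock before it expires.
  renew(jobId: string, now: number, lockMs: number): void {
    if (this.locks.has(jobId)) this.locks.set(jobId, now + lockMs);
  }

  complete(jobId: string): void {
    this.locks.delete(jobId);
  }

  // In the module this runs when Redis reports that a lock key expired;
  // any job whose lock lapsed without renewal is considered stalled.
  onLockExpiry(now: number): void {
    for (const [jobId, expiresAt] of [...this.locks]) {
      if (expiresAt <= now) {
        this.locks.delete(jobId);
        this.wait.push(jobId); // stalled: move back to wait
      }
    }
  }
}
```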
Performance
The great thing about implementing the queue as a module is that we can maximize performance. Since all the code is compiled with the highly efficient Zig compiler, there is no extra overhead for executing Lua scripts, and the code is basically as fast as if the queue were implemented natively in the Redis codebase. In my initial tests I could process 50k jobs/second on my i7 laptop with a single client.
Interoperability
The module is being implemented to be fully compatible with the current BullMQ, so whether you use the NodeJS library or the commands provided by the module, you will get the same results. This of course opens the door to using BullMQ from any language/platform that has a Redis client. In time I expect wrappers in the most popular languages that will make using the queue more pleasant than calling the commands manually.
License
I chose the permissive BSD license for this module, so it is free to load on any Redis instance, independently of where it is hosted (as long as it supports modules).
Future work
Currently my efforts are directed at making the module compatible with all the features that exist in the NodeJS version. The major piece still missing is the rate limiter, which is going to work much more reliably and perform better than the current NodeJS version. After that I aim to release version 1.0 and make the module a drop-in replacement for users of the NodeJS library, making your current code faster and better with minimal effort.
After 1.0 I have a lot of really powerful features planned that are only possible to implement as a module, and that will let you solve really complex use cases, but I will leave those for another post.
You are free to start testing the module: releases are built automatically as new features are added, so just go to the releases page on GitHub and grab your copy.