In this tutorial we are going to demonstrate how to use the new telemetry support that's built into BullMQ by creating a simple newsletter registration application.
To show the potential of telemetry, we want an application composed of several components that interact with each other. In this case we will have a standard ExpressJS server to handle HTTP requests for newsletter subscriptions, a BullMQ queue to process the registrations, and a PostgreSQL database to store the subscribers and their statuses.
The idea is to be able to follow each request all the way from the initial HTTP call to the final database update.
Why should you read this?
If you're using BullMQ and want to improve your application's monitoring and troubleshooting capabilities, this blog post is for you. The new telemetry functionality and this guide will save you time and effort by providing a streamlined approach to instrumenting BullMQ with OpenTelemetry.
What you will learn:
OpenTelemetry basics: Get a quick overview of OpenTelemetry and its core components (tracing, metrics, logs).
Introducing the new package: Explore the features and benefits of the new package that simplifies OpenTelemetry integration with BullMQ.
Step-by-step guide: Follow a practical tutorial to instrument your BullMQ queues and workers with OpenTelemetry.
Impatient? Jump to the solution:
If you prefer to dive straight into the code, check out the GitHub repository. Each chapter of the following tutorial lives in a separate branch and is linked accordingly.
With the project's foundation in place, let's set up the necessary files, including routes, controllers, and services. First, install the required dependencies:
@types/express
express
Now, let's populate the index.ts file with the following code:
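A minimal sketch of what that entry file can look like at this stage (file paths and names are illustrative, following the structure described in this tutorial):

```ts
// src/index.ts -- minimal Express bootstrap
import express from 'express';
import { config } from './config';
import { newsletterRouter } from './routes/newsletter.router';

const app = express();

app.use(express.json());
app.use('/api/newsletter', newsletterRouter);

app.listen(config.port, () => {
  console.log(`Server listening on port ${config.port}`);
});
```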
This is the configuration file where we'll store all the essential values for the project:
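For instance, assuming the dotenv package is used to load the .env values:

```ts
// src/config.ts -- central place for environment-driven settings (dotenv assumed)
import 'dotenv/config';

export const config = {
  port: Number(process.env.PORT) || 3000,
};
```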
Router inside routes folder:
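A sketch of the router, wiring the subscribe/unsubscribe endpoints used later in this tutorial (handler names are assumptions):

```ts
// src/routes/newsletter.router.ts -- maps HTTP endpoints to controller handlers
import { Router } from 'express';
import { subscribe, unsubscribe } from '../controllers/newsletter.controller';

export const newsletterRouter = Router();

newsletterRouter.post('/subscribe', subscribe);
newsletterRouter.post('/unsubscribe', unsubscribe);
```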
Controller inside controllers folder:
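A possible controller, delegating to the service and mapping its outcome to HTTP responses:

```ts
// src/controllers/newsletter.controller.ts -- validates input and calls the service
import { Request, Response } from 'express';
import { subscribeUser, unsubscribeUser } from '../services/newsletter.service';

export const subscribe = async (req: Request, res: Response) => {
  const { email } = req.body;
  if (!email) {
    return res.status(400).json({ error: 'email is required' });
  }
  try {
    await subscribeUser(email);
    return res.status(201).json({ message: 'Subscribed' });
  } catch (err) {
    return res.status(500).json({ error: 'Something went wrong' });
  }
};

export const unsubscribe = async (req: Request, res: Response) => {
  const { email } = req.body;
  if (!email) {
    return res.status(400).json({ error: 'email is required' });
  }
  try {
    await unsubscribeUser(email);
    return res.status(200).json({ message: 'Unsubscribed' });
  } catch (err) {
    return res.status(500).json({ error: 'Something went wrong' });
  }
};
```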
Service for future logic inside services folder:
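At this point the service is only a placeholder that later chapters will fill in:

```ts
// src/services/newsletter.service.ts -- placeholder, implemented in later steps
export const subscribeUser = async (email: string): Promise<void> => {
  // TODO: send a confirmation email, persist the subscriber, queue the work
};

export const unsubscribeUser = async (email: string): Promise<void> => {
  // TODO: send a goodbye email and remove the subscriber
};
```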
Next, create an .env file to store the application's port:
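For example (the variable name just has to match what the config file reads):

```
PORT=3000
```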
The only remaining step is to add a script to run the application. In your package.json file, within the scripts section, add:
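Assuming ts-node is installed as a dev dependency, something like:

```json
"start": "ts-node src/index.ts"
```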
We now have a basic project structure for newsletter subscriptions. This includes a dedicated route, a controller to manage various service outcomes, and an empty service that we'll implement later with the core logic.
To implement the service, we need a way to send emails. We'll use Nodemailer for this purpose, as it's user-friendly and offers a free testing account where you can inspect outgoing emails through their UI. You can find a guide for that here!
Install:
@types/nodemailer
nodemailer
To connect to Nodemailer, add the following values to your .env file, referencing the guide above for specific instructions:
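The variable names below are placeholders; fill them with the SMTP credentials from your Nodemailer test account:

```
SMTP_HOST=smtp.ethereal.email
SMTP_PORT=587
SMTP_USER=<your-test-account-user>
SMTP_PASS=<your-test-account-password>
```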
Import them in the config file:
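Extending the config object could look like this (the exact shape is an assumption):

```ts
// src/config.ts -- extended with the SMTP settings from .env
import 'dotenv/config';

export const config = {
  port: Number(process.env.PORT) || 3000,
  smtp: {
    host: process.env.SMTP_HOST,
    port: Number(process.env.SMTP_PORT) || 587,
    user: process.env.SMTP_USER || '',
    pass: process.env.SMTP_PASS || '',
  },
};
```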
With our Nodemailer credentials in place, let's integrate it into our application. Create a new directory named nodemailer within the src directory, and inside it, create an index.ts file:
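A minimal version of that file, creating a reusable transport and a small helper to send messages (the sender address is illustrative):

```ts
// src/nodemailer/index.ts -- SMTP transport plus a sendEmail helper
import nodemailer from 'nodemailer';
import { config } from '../config';

const transporter = nodemailer.createTransport({
  host: config.smtp.host,
  port: config.smtp.port,
  auth: {
    user: config.smtp.user,
    pass: config.smtp.pass,
  },
});

export const sendEmail = async (to: string, subject: string, text: string) => {
  await transporter.sendMail({
    from: '"Newsletter" <newsletter@example.com>', // illustrative sender address
    to,
    subject,
    text,
  });
};
```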
This code will enable our service to send emails. Let's update the service accordingly:
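For example:

```ts
// src/services/newsletter.service.ts -- now sends emails through Nodemailer
import { sendEmail } from '../nodemailer';

export const subscribeUser = async (email: string): Promise<void> => {
  await sendEmail(email, 'Welcome!', 'You are now subscribed to the newsletter.');
};

export const unsubscribeUser = async (email: string): Promise<void> => {
  await sendEmail(email, 'Goodbye!', 'You have been unsubscribed from the newsletter.');
};
```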
With these changes, sending a POST request to /api/newsletter/subscribe with an email field, or to /api/newsletter/unsubscribe, will trigger a response and generate an email notification visible in the Nodemailer UI.
To persist user data, we'll utilize PostgreSQL. Install the following dependencies:
pg
typeorm
@types/pg
Next, create a docker-compose.yaml file in the project root to run the database:
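A minimal compose file along these lines (the DB_* variable names are assumptions and must match your .env):

```yaml
# docker-compose.yaml -- PostgreSQL for local development
services:
  postgres:
    image: postgres:16
    ports:
      - '5432:5432'
    environment:
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
```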
This configuration exposes the database port for external access and retrieves configuration values from the .env file.
Run docker-compose up to ensure everything is working correctly. If the database starts successfully, proceed to create the model, CRUD operations, and setup file.
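Here is one way to lay that out with TypeORM (file names are illustrative; synchronize: true keeps the tutorial simple but should be avoided in production, the decorators require experimentalDecorators/emitDecoratorMetadata in tsconfig.json, and reflect-metadata may need to be installed as well):

```ts
// src/db/user.entity.ts -- the subscriber model with a single email field
import { Entity, PrimaryGeneratedColumn, Column } from 'typeorm';

@Entity()
export class User {
  @PrimaryGeneratedColumn()
  id!: number;

  @Column({ unique: true })
  email!: string;
}
```

```ts
// src/db/data-source.ts -- TypeORM setup file reading the DB_* values from .env
import 'reflect-metadata';
import 'dotenv/config';
import { DataSource } from 'typeorm';
import { User } from './user.entity';

export const AppDataSource = new DataSource({
  type: 'postgres',
  host: process.env.DB_HOST || 'localhost',
  port: Number(process.env.DB_PORT) || 5432,
  username: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  entities: [User],
  synchronize: true, // convenient for a tutorial, avoid in production
});
```

```ts
// src/db/user.repository.ts -- CRUD helpers used by the service
import { AppDataSource } from './data-source';
import { User } from './user.entity';

const repo = () => AppDataSource.getRepository(User);

export const createUser = (email: string) => repo().save(repo().create({ email }));
export const findUserByEmail = (email: string) => repo().findOneBy({ email });
export const deleteUserByEmail = (email: string) => repo().delete({ email });
```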
And update the project's main entry file, index.ts, to initialize the PostgreSQL connection:
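For instance:

```ts
// src/index.ts -- initialize the database connection before accepting requests
import express from 'express';
import { config } from './config';
import { newsletterRouter } from './routes/newsletter.router';
import { AppDataSource } from './db/data-source';

const app = express();

app.use(express.json());
app.use('/api/newsletter', newsletterRouter);

AppDataSource.initialize()
  .then(() => {
    app.listen(config.port, () => {
      console.log(`Server listening on port ${config.port}`);
    });
  })
  .catch((err) => {
    console.error('Failed to initialize the database', err);
    process.exit(1);
  });
```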
This code defines a simple model with an email field and CRUD operations to create, read, and delete users for newsletter management. Now, let's update the service to utilize these functionalities:
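A possible version of the service at this stage:

```ts
// src/services/newsletter.service.ts -- persists subscribers and notifies them
import { sendEmail } from '../nodemailer';
import { createUser, findUserByEmail, deleteUserByEmail } from '../db/user.repository';

export const subscribeUser = async (email: string): Promise<void> => {
  const existing = await findUserByEmail(email);
  if (!existing) {
    await createUser(email);
  }
  await sendEmail(email, 'Welcome!', 'You are now subscribed to the newsletter.');
};

export const unsubscribeUser = async (email: string): Promise<void> => {
  await deleteUserByEmail(email);
  await sendEmail(email, 'Goodbye!', 'You have been unsubscribed from the newsletter.');
};
```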
Finally, let's integrate BullMQ. Install the necessary dependency:
bullmq
Next, add a Redis service to your docker-compose.yaml file:
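For example, alongside the postgres service:

```yaml
# docker-compose.yaml -- Redis for BullMQ (goes under the same services key)
  redis:
    image: redis:7
    ports:
      - '6379:6379'
```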
To integrate queueing into our application, we need to make a few modifications. First, let's create an interface for Nodemailer. Create a new directory named interfaces:
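For example, a shared shape for the email job payload (the name is an assumption):

```ts
// src/interfaces/email.interface.ts -- payload shared by producer and worker
export interface EmailJobData {
  to: string;
  subject: string;
  text: string;
}
```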
Next, modify the index.ts file within the nodemailer directory to export a job for processing emails:
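A sketch of the producer side, assuming Redis runs on localhost:6379 and the queue is called emailQueue:

```ts
// src/nodemailer/index.ts -- enqueue emails instead of sending them inline
import { Queue } from 'bullmq';
import { EmailJobData } from '../interfaces/email.interface';

export const emailQueue = new Queue<EmailJobData>('emailQueue', {
  connection: { host: 'localhost', port: 6379 },
});

export const queueEmail = async (to: string, subject: string, text: string) => {
  await emailQueue.add('sendEmail', { to, subject, text });
};
```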
In the same file, create a worker to consume the job. We'll add some helpful events and console logs to verify that everything is functioning as expected:
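Continuing in the same file, a worker that performs the actual Nodemailer send; it's wrapped in a factory function here (an assumption) so that a separate process can start it:

```ts
// src/nodemailer/index.ts (continued) -- worker that sends the queued emails
import { Worker } from 'bullmq';
import nodemailer from 'nodemailer';
import { config } from '../config';
import { EmailJobData } from '../interfaces/email.interface';

const transporter = nodemailer.createTransport({
  host: config.smtp.host,
  port: config.smtp.port,
  auth: { user: config.smtp.user, pass: config.smtp.pass },
});

export const createEmailWorker = () => {
  const worker = new Worker<EmailJobData>(
    'emailQueue',
    async (job) => {
      await transporter.sendMail({
        from: '"Newsletter" <newsletter@example.com>',
        to: job.data.to,
        subject: job.data.subject,
        text: job.data.text,
      });
    },
    { connection: { host: 'localhost', port: 6379 } },
  );

  worker.on('completed', (job) => console.log(`Job ${job.id} completed`));
  worker.on('failed', (job, err) => console.error(`Job ${job?.id} failed:`, err));

  return worker;
};
```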
And modify the service to use our queue system:
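For example:

```ts
// src/services/newsletter.service.ts -- emails now go through the queue
import { queueEmail } from '../nodemailer';
import { createUser, findUserByEmail, deleteUserByEmail } from '../db/user.repository';

export const subscribeUser = async (email: string): Promise<void> => {
  const existing = await findUserByEmail(email);
  if (!existing) {
    await createUser(email);
  }
  await queueEmail(email, 'Welcome!', 'You are now subscribed to the newsletter.');
};

export const unsubscribeUser = async (email: string): Promise<void> => {
  await deleteUserByEmail(email);
  await queueEmail(email, 'Goodbye!', 'You have been unsubscribed from the newsletter.');
};
```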
Additionally, create a file to initialize the worker:
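Something as small as:

```ts
// src/worker.ts -- entry point for the worker process
import { createEmailWorker } from './nodemailer';

createEmailWorker();
console.log('Email worker started');
```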
To run the worker, add a new script to your package.json file:
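For example:

```json
"start:worker": "ts-node src/worker.ts"
```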
With the queue system integrated, our application now stores jobs in Redis and processes them using the worker. You can run the application using the scripts we defined. First, start the worker:
npm run start:worker
Once the worker is initialized, start the main application:
Let's move on to the main part: integrating telemetry. First, install the required packages:
@opentelemetry/instrumentation-express
@opentelemetry/instrumentation-http
@opentelemetry/instrumentation-ioredis
@opentelemetry/instrumentation-pg
@opentelemetry/sdk-metrics
@opentelemetry/sdk-node
@opentelemetry/sdk-trace-node
bullmq-otel
bullmq-otel is the official library for seamlessly integrating OpenTelemetry with BullMQ. I'll demonstrate how to use it and create a setup file for OpenTelemetry shortly.
To begin, we need to update all instances where queues and workers are initialized, adding a new option to pass the BullMQOtel class:
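Both the Queue and the Worker accept a telemetry option; with bullmq-otel it looks roughly like this (the instrumentation name passed to BullMQOtel is up to you):

```ts
// src/nodemailer/index.ts -- pass a BullMQOtel instance to queue and worker
import { Queue, Worker } from 'bullmq';
import { BullMQOtel } from 'bullmq-otel';
import { EmailJobData } from '../interfaces/email.interface';

export const emailQueue = new Queue<EmailJobData>('emailQueue', {
  connection: { host: 'localhost', port: 6379 },
  telemetry: new BullMQOtel('newsletter'),
});

// ...and the same option where the worker is created:
export const createEmailWorker = () =>
  new Worker<EmailJobData>(
    'emailQueue',
    async (job) => {
      // send the email with Nodemailer as before (omitted here)
    },
    {
      connection: { host: 'localhost', port: 6379 },
      telemetry: new BullMQOtel('newsletter'),
    },
  );
```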
Next, let's set up OpenTelemetry. Create a new directory named instrumentation inside the src directory.
Within the instrumentation directory, create two files: producer.instrumentation.ts and consumer.instrumentation.ts. These files will handle instrumentation for producers and consumers, respectively:
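A basic producer.instrumentation.ts might look like this; the consumer file is identical apart from the service name:

```ts
// src/instrumentation/producer.instrumentation.ts -- OpenTelemetry SDK setup
import { NodeSDK } from '@opentelemetry/sdk-node';
import { ConsoleSpanExporter } from '@opentelemetry/sdk-trace-node';
import { HttpInstrumentation } from '@opentelemetry/instrumentation-http';
import { ExpressInstrumentation } from '@opentelemetry/instrumentation-express';
import { PgInstrumentation } from '@opentelemetry/instrumentation-pg';
import { IORedisInstrumentation } from '@opentelemetry/instrumentation-ioredis';

const sdk = new NodeSDK({
  serviceName: 'producer', // use 'consumer' in consumer.instrumentation.ts
  traceExporter: new ConsoleSpanExporter(),
  instrumentations: [
    new HttpInstrumentation(),
    new ExpressInstrumentation(),
    new PgInstrumentation(),
    new IORedisInstrumentation(),
  ],
});

sdk.start();
```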
This code configures OpenTelemetry with a basic setup. We've assigned a service name for easy identification of trace origins and enabled console output for all telemetry data. Additionally, we've included automatic instrumentation libraries for various parts of our application: HTTP, Express, PostgreSQL, and Redis.
You might wonder how this differs from the bullmq-otel instrumentation. The key distinction is that these libraries utilize monkey patching to observe the application, while bullmq-otel doesn't. Observability in BullMQ is achieved through direct source code integration, providing greater control, flexibility, and maintainability. Rest assured, these libraries work together seamlessly.
To utilize the instrumentation files, update the starting scripts in your package.json file:
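One way to do this is to preload the instrumentation file with --require, so the SDK starts before the rest of the application loads (the exact script shape is an assumption):

```json
"start": "ts-node -r ./src/instrumentation/producer.instrumentation.ts src/index.ts",
"start:worker": "ts-node -r ./src/instrumentation/consumer.instrumentation.ts src/worker.ts"
```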
Now, run the application as before. You should see OpenTelemetry spans displayed in the console.
While the additional logs provide some insight into BullMQ's internal operations, to fully leverage the power of observability, we need a centralized system for storing and visualizing traces. This will allow us to generate insightful diagrams and understand the precise execution order of different application components. OpenTelemetry's popularity brings a wide array of options, both commercial and open source, for achieving this.
Using the console alone for observability can be overwhelming and difficult to interpret. A more effective approach is to utilize a dedicated tool like Jaeger, specifically designed for trace storage and visualization.
To export spans to Jaeger, we need to install the necessary packages:
@opentelemetry/exporter-metrics-otlp-proto
@opentelemetry/exporter-trace-otlp-proto
Now, update your docker-compose.yaml file with a new Jaeger service:
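For example, using the Jaeger all-in-one image:

```yaml
# docker-compose.yaml -- Jaeger all-in-one with OTLP ingestion enabled
  jaeger:
    image: jaegertracing/all-in-one:latest
    ports:
      - '16686:16686' # Jaeger UI
      - '4318:4318'   # OTLP over HTTP (protobuf)
    environment:
      COLLECTOR_OTLP_ENABLED: 'true'
```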
This configuration exposes two ports:
4318: The endpoint for exporting traces in protobuf format.
16686: The port for accessing the Jaeger UI.
Note that I'm using protobuf as the data format for Jaeger, which requires exposing port 4318. If you're using a different method, such as HTTP, you'll need to expose the appropriate port for that format.
Now, let's update the instrumentation files:
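For example, in producer.instrumentation.ts (the consumer file changes the same way):

```ts
// src/instrumentation/producer.instrumentation.ts -- export spans to Jaeger via OTLP
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-proto';
import { HttpInstrumentation } from '@opentelemetry/instrumentation-http';
import { ExpressInstrumentation } from '@opentelemetry/instrumentation-express';
import { PgInstrumentation } from '@opentelemetry/instrumentation-pg';
import { IORedisInstrumentation } from '@opentelemetry/instrumentation-ioredis';

const sdk = new NodeSDK({
  serviceName: 'producer',
  traceExporter: new OTLPTraceExporter({
    url: 'http://localhost:4318/v1/traces', // Jaeger's OTLP HTTP endpoint
  }),
  instrumentations: [
    new HttpInstrumentation(),
    new ExpressInstrumentation(),
    new PgInstrumentation(),
    new IORedisInstrumentation(),
  ],
});

sdk.start();
```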
Here, we've replaced the console exporter with OTLP, directing the telemetry data to Jaeger.
Run the starting scripts again and navigate to http://localhost:16686 in your browser.
In the left-hand menu, you'll find an option to view traces from various services. You should see at least two services listed: producer and consumer. Select producer and click Find Traces to view the observed BullMQ operations.
This view displays comprehensive information about the observed application components, starting from the initial HTTP call, through Express and PostgreSQL, to BullMQ and Redis.
The left-hand menu allows you to filter traces by their origin, such as by consumer:
...or by other operations specific to your application:
One of the powerful capabilities of telemetry is the ability to gain insights into various parameters passed within your application. This can be invaluable for debugging and further development:
Furthermore, if any errors occur, they will be displayed as special events within the span. You can expand the Logs section of a span to view detailed information about the error:
It's crucial to remember that spans must be explicitly ended before they are sent to the telemetry backend. This means that if a worker process crashes mid-process, preventing the span from being closed, the entire trace will be lost and won't appear in Jaeger (or any other telemetry system you might be using).
For instance, if a worker dies in our example application, the complete trace, from the initial HTTP call to the Redis operation, will not be saved. This is a critical limitation to be aware of to avoid misinterpreting missing traces.
And there you have it! You've successfully built a simple BullMQ application with OpenTelemetry integration.
Follow me on Twitter if you want to be the first to know when I publish new tutorials and tips for Bull/BullMQ.
And remember, subscribing to Taskforce.sh is the greatest way to help support future BullMQ development!