David Antaramian for Timber

Elixir Logger and the Power of Metadata

In this post, we look at the ways in which Elixir's Logger can produce incredibly useful logs by adding contextual metadata.

Logger is one of the OTP applications distributed with the Elixir core. It is a powerful but elegant logging framework that is able to withstand the pressure of a production environment with little overhead to the processes producing logs. It is also incredibly extensible, providing ways to manipulate and enhance logs prior to their final presentation.

When an Elixir process produces a log, Logger handles transporting the log data asynchronously to a list of backends. A backend is responsible for writing out the data. Elixir projects will use the :console backend by default which prints the log messages to STDOUT.
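To make the backend idea concrete, here is a minimal sketch of a custom backend (the module name and output format are illustrative; a real backend like :console also handles configuration, buffering, and device errors). Logger backends are :gen_event handlers:

defmodule MyApp.InspectBackend do
  # Sketch of a custom Logger backend. MyApp.InspectBackend is an
  # illustrative name; Logger backends implement the :gen_event behaviour.
  @behaviour :gen_event

  def init({__MODULE__, _opts}), do: {:ok, %{}}
  def init(__MODULE__), do: {:ok, %{}}

  # Each log event arrives as {level, group_leader, {Logger, message, timestamp, metadata}}
  def handle_event({level, _gl, {Logger, message, _timestamp, metadata}}, state) do
    IO.puts("[#{level}] #{message} #{inspect(metadata)}")
    {:ok, state}
  end

  def handle_event(:flush, state), do: {:ok, state}

  def handle_call(_request, state), do: {:ok, :ok, state}
end

A backend like this could then be attached with Logger.add_backend(MyApp.InspectBackend).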

Logging Metadata

Unlike many default logging packages that ship with languages, Logger treats log metadata as a first class concern. In its most basic usage, metadata is useful for grouping log lines about the same subject together. For example, you can set the request ID of an HTTP request as metadata, and all the log lines about that HTTP request would show that request ID.

Logger metadata is a keyword list stored in the process dictionary, so each process has an isolated metadata store. This means that setting metadata in one process won't affect the metadata in another process. The keyword list is copied whenever you log a message and sent alongside the message to every Logger backend. The backend can choose how it handles the metadata. We'll use the default :console backend in the examples below since it's so easy to get started with.
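Because the metadata lives in the process dictionary, the isolation is easy to see in a quick sketch:

Logger.metadata(request_id: "bcc1fe")
Logger.metadata()
#=> [request_id: "bcc1fe"]

Task.start(fn ->
  # A Task runs in a new process, so the parent's metadata is not inherited
  Logger.metadata()
  #=> []
end)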

Configuring :console

The :console backend makes it easy to get up-and-running with metadata. The default configuration is to print metadata immediately before the log level. You just need to whitelist the metadata keys you will use in your config/config.exs:

config :logger, :console,
  metadata: [:request_id]

In the snippet above, the :console is configured to print out the :request_id metadata; any other metadata will be ignored. In Elixir 1.5, the special setting :all was introduced which lets the :console backend print out all metadata:

config :logger, :console,
  metadata: :all

Metadata whitelisting is unique to the :console backend. Other backends won't require it.

Adding Metadata for Later

There are two ways to add metadata to your logs. The first is to use the Logger.metadata/1 function. This function takes a keyword list and merges it into the existing metadata. This works great when you generate a piece of data you know will be useful later on, like the HTTP request ID described above. In fact, the Plug.RequestId module does this when setting the request ID:

defp set_request_id({conn, request_id}, header) do
  Logger.metadata(request_id: request_id)
  Conn.put_resp_header(conn, header, request_id)
end

When Logger.metadata/1 is called in the function above, every subsequent log statement in the same process will carry that request ID as part of its metadata! Let's assume we're handling an HTTP request with request ID bcc1fe and the plug has already set up the metadata. Now, we log the following message during our business logic:

Logger.info("Updated profile for user ##{user_id}")

The final output on the console will look like this:

22:57:14.199 request_id=bcc1fe [info]  Updated profile for user #32

Adding Metadata for a Single Line

The second option is to add metadata at the point of logging. With this option, the metadata is not persisted, so it applies only to that log line. This can be really useful when you have something that's only pertinent to the line itself, like a payment confirmation code:

defp handle_payment_success(p = %Payment{confirmation_code: c}) do
  Logger.info("Successfully processed payment", confirmation_code: c)
end

Here, the log message indicates that the payment was successful and gives the confirmation code as metadata. Your resulting log line will be similar to the following:

22:58:43.040 confirmation_code=9184210 [info]  Successfully processed payment

Or, if you had set the HTTP request ID in the previous step:

22:58:43.040 confirmation_code=9184210 request_id=bcc1fe [info]  Successfully processed payment

It's brilliantly simple, yet offers an extremely powerful way to enhance your logs.

Metadata in Practice

Metadata is an extremely powerful way to enhance your logs, but using it effectively can be difficult. If you add too much metadata, you'll generate noise that you stop paying attention to, especially at high log volumes. Don't forget: in addition to the logs you are already generating, you also have logs generated by the third-party libraries you rely on. They'll pick up the metadata you've set and possibly add their own. Coalescing all these logs into usable information requires a game plan.

Start by thinking about your data across time. When fulfilling a request, what relevant information is always going to be the same across all log lines? This is the type of metadata you will want to set ahead of time using Logger.metadata/1. As your application progresses, each common data point acts like a string tying log lines together.

Consider the initial example we used when setting up metadata. The HTTP request ID is useful when you receive a request and when you respond to it, but it's also useful for all the steps in between. It links together the log lines that deal with fulfilling this request.

If you deal with a system that is user-focused, you might also introduce the user's ID as metadata. This becomes another point that links log lines together, and instead of linking them across a single request, it will link them together across time. In the same vein, imagine you start a Task to asynchronously update the image for a product in your ecommerce catalog. You can use the product ID as metadata to associate all the log lines related to that update task (as well as any other actions related to that product ID).
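A sketch of that Task scenario might look like the following (the function name and the update logic are illustrative):

# Tag every log line produced by an async update task with the product ID
def update_product_image_async(product_id) do
  Task.start(fn ->
    Logger.metadata(product_id: product_id)
    Logger.info("Starting image update")
    # ... fetch, resize, and store the image; every log line produced in
    # this process now carries product_id in its metadata ...
    Logger.info("Finished image update")
  end)
end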

Not all information is relevant all the time, though. Some information is only relevant to a specific step, and this is the metadata you should set at the same time you produce the log statement.

This is the case with the payment confirmation code example. The confirmation code is only useful for the payment processing step (in most cases), so it makes sense to add it only to the log statement it relates to. Likewise, the time it took to handle a database query could be added as metadata, as could the status code of an HTTP response. These values are most relevant to their individual log statements, and by setting them as metadata, they become easy to surface when browsing your logs.
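For instance, a query timing could be attached as one-off metadata right at the call site (Repo and query here are illustrative; :timer.tc/1 returns the runtime in microseconds alongside the result):

{duration_us, results} = :timer.tc(fn -> Repo.all(query) end)
Logger.info("Fetched products", duration_ms: div(duration_us, 1000))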

How relevant a piece of data is across multiple log lines depends on your business needs. You may find that for your business it's appropriate to set the customer's ID as metadata that associates log lines; another business might only record the customer ID on specific actions, like checkout. If you start thinking about why you look at logs and how you determine that a log statement is relevant, the right data points become evident without much work.

Experimenting with metadata is the easiest way to get started, and there is very little work involved. You can improve and refine your metadata usage gradually over time until it meets your needs.

In a future post, we'll look at how Timber builds on metadata and why Elixir Logger's built-in support allows you (and us) to do amazing things once the metadata hits the backend.
