
Ratlog.js – JavaScript Application Logging for Rats, Humans and Machines

jorin · Originally published at jorin.me

I am unsatisfied whenever I have to look at the logs in a Node.js project.
Ratlog is an attempt to fix this.

The typical way of logging in Node.js is to use Bunyan or Winston.
Both of them are mature libraries and they come with lots of options:

You can configure transports to write logs to different locations, you can specify the output format in flexible ways, and many data points such as timestamps and log levels are included by default.

They use JSON as their output format to stay flexible and compatible with other platforms. They also offer neat CLI tools to pretty-print the JSON output when viewing the logs.

But all this flexibility makes working with logs cumbersome.

These features might be helpful to many people in many scenarios. But if you don't have huge infrastructure around logging to centralize and query them, these features are in your way.

I just want to see what's going on in my application.

  • stdout is the right place for logs. If I want them somewhere else, I can redirect stdout.
  • I don't want the log format to be configurable. I want it to be consistent.
  • I don't want to have to use another tool to read the logs. They should be readable right away. JSON is not readable.
  • I don't even want stuff like log levels and timestamps. When I read logs it's mostly via tools such as docker or journalctl, which already collect meta information such as timestamps, host info, service name, ...

And even though those libraries have many features and include a lot of meta information, they don't help me with tools to structure the actual logs.

So what I came up with is Ratlog.

Ratlog is a specification of a logging format that is focused on being readable, informative, simple and still machine-parsable.

Additionally I created Ratlog.js, a really simple JavaScript logging library which implements the Ratlog semantics and makes it easy to create helpful logs.

Let's have a quick look at some output of a basic example application:

app starting
[counter] starting
[counter] started
app ready
[counter|event] counting | count: 1
[counter|event] counting | count: 2
[counter] stopped
app shutting down
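The lines above follow a simple pattern: optional tags in brackets, then the message, then fields separated by pipes. As a rough illustration of the format (a hand-rolled sketch, not the actual Ratlog.js implementation, and ignoring the escaping of special characters that the real spec defines):

```javascript
// Sketch of a Ratlog-style formatter: [tag1|tag2] message | field: value
// Illustration only, not the official library; escaping rules are ignored.
function format (message, fields = {}, ...tags) {
  const tagPart = tags.length > 0 ? `[${tags.join('|')}] ` : ''
  const fieldPart = Object.entries(fields)
    .map(([key, value]) => ` | ${key}: ${value}`)
    .join('')
  return tagPart + message + fieldPart
}

console.log(format('app starting'))
// => app starting
console.log(format('counting', { count: 1 }, 'counter', 'event'))
// => [counter|event] counting | count: 1
```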

Reading the output of a service might look like this:

$ docker logs -t myapp
2018-03-29T11:10:29.116Z [file-import|warning] file not found | code: 404 | path: /tmp/notfound.txt
$ journalctl -u myapp
Apr 13 22:15:34 myhost myapp[1234]: [http|request|error] File not found | code: 404 | method: GET | route: /admin

You can use all the default Unix tools to filter, query and manipulate the output:

$ journalctl -u myapp | grep '\[|\|warn' | less

Logs consist of message, tags and fields:

  • The message is the most basic element of a log.
  • It can be put into context using tags. Tags can be used in flexible ways to signal log levels, severity, component hierarchy and many other properties.
  • Additionally logs can be augmented with more data using fields.
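Because the structure is so regular, getting the message, tags and fields back out of a line is straightforward. Here is a rough sketch of a parser (a hypothetical helper, not part of Ratlog.js; it ignores the escaping of special characters that the real spec defines):

```javascript
// Rough sketch: split a Ratlog-style line into { message, tags, fields }.
// Assumes no escaped "|", "[" or "]" characters; the real spec defines
// escaping rules that a complete parser would have to handle.
function parse (line) {
  let tags = []
  let rest = line
  if (rest.startsWith('[')) {
    const end = rest.indexOf('] ')
    tags = rest.slice(1, end).split('|')
    rest = rest.slice(end + 2)
  }
  const [message, ...fieldParts] = rest.split(' | ')
  const fields = {}
  for (const part of fieldParts) {
    const sep = part.indexOf(': ')
    fields[part.slice(0, sep)] = part.slice(sep + 2)
  }
  return { message, tags, fields }
}

parse('[counter|event] counting | count: 1')
// => { message: 'counting', tags: ['counter', 'event'], fields: { count: '1' } }
```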

How do you get started logging?

  • Install the ratlog NPM package
npm i ratlog
  • Start logging
const log = require('ratlog')(process.stdout)

log('hello world')
// => hello world

// Add fields
log('counting', { count: 1 })
// => counting | count: 1

// Add fields and a tag
log('counting', { count: -1 }, 'negative')
// => [negative] counting | count: -1

// Create another logger bound to a tag
const warn = log.tag('warning')

warn('disk space low')
// => [warning] disk space low

// Combine and nest tags any way you like
const critical = warn.tag('critical')

critical('shutting down all servers')
// => [warning|critical] shutting down all servers

You can find out more about Ratlog.js and the Ratlog Spec on GitHub.

I would be really glad to hear your thoughts, critique and feedback!

It is also simple to implement a Ratlog-compatible logger or parser in any other programming language.
There is a JSON test suite so you don't have to bother writing tests.


Jilles van Gurp

A few questions:

  • How does this interact with centralized log aggregators? If you do microservices/cloud deployments without this, you are doing it wrong and you are running blind as a bat.
  • Does it have something like a Mapped Diagnostic Context? Look it up, this stuff is awesome. You'll be wondering why you never used this before. E.g. all our application logs could include fields that indicate request method, path, implementation method, user id (if applicable), host, user agent, etc. Meaning we can slice and dice our centralized logs across microservices using these fields. If I get a user complaining about some weird error, I can dig out exactly what he did when from millions of log messages across many micro-services and applications.
  • How does this play with stuff that has its own notion of logging? If you can't control how third-party stuff logs, your logs are going to be a mess.

In the Java world there are many logging frameworks, but they tend to play nice with each other by abstracting and encapsulating each other's APIs. E.g. I can use logback to send logs to logstash/elasticsearch in JSON format (using one of several formatter plugins) with MDC fields from my mapped diagnostic context added. Meanwhile parts of my application logging using log4j, commons-logging, or Java's built-in logging get picked up by slf4j and redirected to logback. Also, their various MDC implementations play nice with each other. I can do something similar with log4j 2.0 and several other frameworks. All this means is I have a few jar files on the classpath and loggers declared using any of the above frameworks will end up being picked up by logback when it writes the logs. We have production and dev configs for this so we can get human-readable output when running tests and output optimized for logstash when running in production.

jorin (Author)

Hi @jillesvangurp, great questions!

  • As you probably can tell, the main use case of Ratlog is to have a format which is directly consumable by humans so it might not be the best fit for a cloud-based setup.
    If you don't look at the logs directly, you can also output JSON and query it through some central service such as Elasticsearch.
    But if you'd like to have both, it is totally possible to forward Ratlog logs to a central system and process them further. The format is not only human-readable, but also easy to parse by machines.
    I already tried to mention this here but I think I need to elaborate on it to be more clear about the goal of Ratlog and about which problems it might not be the best solution for.

  • Ratlog has more structure than plain-text output, but still very little compared to most data formats. Using Ratlog's fields you could store information coming from a Mapped Diagnostic Context, but it doesn't have any opinions of its own about what to collect. You can implement that in libraries built on top of Ratlog. I think this is definitely an important part of logging and I'm thinking about creating some sort of common style guide for naming tags and fields and which ones to collect for which sort of service (see #6).

  • In the systems I get to work with, I am mostly lucky enough that we can control which libraries to use and we can decide where their output goes. As I tried to mention here, libraries should not write logs themselves, but leave all output to the application logic.
    When working with existing systems or with external tools, it gets very difficult to enforce any kind of common log format. I wouldn't expect sshd, Nginx, Postgres and my own application code to output logs in the same format. Personally I don't know about the Java ecosystem; maybe the tooling there already agrees on common logging solutions - which is great. But then as explained in the first point above, this is not the problem scope Ratlog is trying to solve.

  • The way you explain the state of things in the Java ecosystem sounds pretty sophisticated, and it seems like logging is mostly a solved problem there. In other environments like Node.js it is still the wild west and tools don't work together nicely.
    The Ratlog.js library is a very simple logging solution for situations where all you want is simple logs that are easy to understand.
    However the Ratlog format itself is independent from the JavaScript library. Since logback already gives you human-readable output, as you mention, Ratlog might not be all that useful in that scenario, but you can see it as an alternative output format for that functionality, one intended to be read by humans.
    I guess in Java it wouldn't make sense to implement a Ratlog library from scratch, but instead one could configure logback, for example, to output already existing logs in the Ratlog format.

Thanks for all the input!

Jilles van Gurp

Great. I wrote the original reply from the point of view of taking care of centralized logging in a polyglot environment (including node.js) and being quite depressed with how people seem to be stuck reinventing logging the wrong way.

The way you need to think is in terms of many independent components that each generate logs that need to be captured somehow. You use stuff like Elastic Beats, logstash, syslog, etc. to collect logs from different sources and send them to remote systems where they get aggregated, analyzed and stored. Each application typically runs in a docker container. Docker comes with its own ecosystem of logging stuff (some of it quite crappy, beware). The bare minimum is simply capturing stdout and pumping it to some remote aggregator, annotated with some metadata about where it came from. For example logstash comes with a lot of functionality to help you make sense of all the crap you get from legacy systems doing whatever on stdout (e.g. nginx access logs, mysql logs, systemd, syslog) and pick that apart. A lot is lost in translation this way. The best way is to not lose stuff and output in a format that your remote aggregator understands, which is typically JSON or some well-defined line-based format like syslog.

What you built is basically a custom log formatter. What happened in the Java world is that the logging APIs you use in your code were separated from log output generation pretty early on. That's a good idea that needs adopting.

So, it is easy to switch any Java application to whatever output format you need and people do that all the time. Most other languages seem to not do that and so you end up with basically some hardcoded syntactic sugar around puts in ruby or console.log in javascript and log output that cannot be customized and lacks usable context. The best you can hope for is doing some application specific parsing of the output to grab things like timestamps and log levels. Garbage in, garbage out, basically.

So, if you want to facilitate your stuff being used in a cloud based environment, you want some option to output json instead of human readable stuff and be able to switch between both. It's OK to output that to stdout or stderr. In a dockerized environment that is both acceptable and easy to deal with. You get the best of both worlds.

jorin (Author)

I guess I need to be more clear when describing the reasoning behind and the goal of Ratlog.
The setup you are describing is definitely the right choice for cloud-scale infrastructure. There, JSON output is a good choice for further processing and aggregation. That's why winston and bunyan do things the way they do. They are definitely a better choice for logging in a big system with centralized logging.
They also provide support for other formatting options than JSON so one could implement a pretty-printed version - maybe even using Ratlog.

Ratlog is a custom format (mostly) for being consumed directly by humans. It doesn't need processing to be readable. Not all software is cloud-scale. I saw the need for Ratlog because we used these complex logging tools and in the end we were reading JSON manually. We don't want to have to set up a centralized logging system when running a simple system on a single machine.

The other issue Ratlog solves for us is giving us the right context for logs using tags and fields - kinda like what you described with Mapped Diagnostic Context.

You are definitely right that this is still missing in the JS ecosystem. It would be nice if, also in cloud-based scenarios, logging libraries gave you more tools to add context to logs in a standard way.
That's also an interesting problem. But not one Ratlog is trying to solve.

Jilles van Gurp

The point is that Ratlog is about output, not about the log API itself. These should be independent. So why not separate them, like other frameworks do? Then you have one way to log and multiple ways to target either human-readable or machine-readable output. That would solve a real problem in the node.js world where most logging frameworks are naive and simplistic. I'd love to get better logging from node.js.

jorin (Author)

Just stopping by to let you know I appreciate the feedback you gave!

While I'd like to keep Ratlog.js as simple as possible for the simple use case, I realize the value of Ratlog's logging semantics separate from its output format.
I extended the API in a way which allows using those semantics independent from the output format and which should make it easy to use it on top of other logging frameworks, libraries and services:

const ratlog = require('ratlog')

const log = ratlog.logger(log => {
  process.stdout.write(JSON.stringify(log) + '\n')
})

const debug = log.tag('debug')

debug('debugging only')

You can find more in the API docs.

Matt G

I don't buy the premise that a library such as Winston is difficult to configure and that its features get in your way. Also, you are not forced to write Winston logs as JSON; I am not sure where you got that from. With that being said...

It's a good project and I think you should continue with development. I personally would rethink the way you're handling log levels. Seems like a feature that should be available out of the box. Check out RFC 5424 (the syslog protocol) if you haven't already.

William Hilton

I love it! It's just structured enough that I might use this instead of console.log.

jorin (Author)

Let me know if anything is confusing or you have any ideas on how to make logging even simpler to use :)

William Hilton

Will do! In fact, I just filed an issue on the GitHub repo. ;) A rather comical beginner mistake I made that could be worked around with a bit of typeof checking.

I just want to add that it is super cool that tags work with chalk:

const chalk = require('chalk')
const log = require('ratlog')(process.stdout)
const warn = log.tag(chalk.red('warning'))
log('Normal log')
warn('Warning log')



jorin (Author)

Oh neat! I didn't know about this :) It would probably be helpful to add this as a note in the docs.

Edit: I added this as a note in the readme :)

Marios Antonoudiou

Hey jorin, this is an amazing concept! Well done!