Edit: the instructions in this blog post can be applied to any Log4J configuration, not only Mule applications. I mostly talk about Mule apps because I felt this topic was poorly documented, even though the need for it is very common in Mule dev teams.
The MuleSoft platform is very convenient for managing the whole life-cycle of Mule applications. However, reviewing your applications' logs can be impractical, as Anypoint Platform's log viewer is limited and not available at all subscription levels. To address this issue, or to better integrate with your organisation's log management tool, you might want to configure your applications to output logs in JSON format before collecting them and sending them to your log management platform.
This blog post will guide you through the changes you need to make to Log4J2 to output logs as JSON, and where to configure this in your Mule applications. I have tried to explain the various components we are manipulating, as I was a bit lost myself when I started configuring my logs.
Why choose JSON as your output format?
If you have ever opened a JSON log file, you probably noticed that logs in JSON format are hard to read. Logs written as text are much easier for humans to read.
However, in a production environment, you don't want to SSH directly into the server to read the logs. Additionally, your company might be using log centralisation tools such as Datadog or Splunk, which are shared between teams and gather all logs in a single place.
For these tools, parsing text logs can be tricky. You have to write complicated parsing rules using regular expressions to extract data from the text logs, and those rules can be hard to maintain. Additionally, every time you want to add a new piece of information to your logs, you have to edit the pattern in your Log4J2 configuration and update your parsing rules to extract the new information.
With the JSON Layout, every log is structured as a JSON object that contains not only the log message and its level, but also the context information related to it. With each value in a different field, no parsing is needed to extract the various pieces of information about our log, as they are all available in dedicated fields.
This is an example of a JSON formatted log:
{
  "timeMillis": 1641027600000,
  "thread": "WrapperListener_stop_runner",
  "level": "INFO",
  "loggerName": "org.mule.runtime.core.internal.logging.LogUtil",
  "message": "\n**********************************************************************\n* Application shut down normally on: 01/01/22 *\n* 12:00 *\n* Up for: 0 days, 1 hours, 1 mins, 1.000 sec *\n**********************************************************************",
  "endOfBatch": true,
  "loggerFqcn": "org.apache.logging.slf4j.Log4jLogger",
  "contextMap": {},
  "threadId": 123,
  "threadPriority": 5,
  "appName": "my-test-api",
  "version": "1.0.0-SNAPSHOT",
  "apiType": "my-type",
  "service": "my-test-api"
}
Recap about Log4J2
Log4J2 is a logging framework: a Java library that you can extensively configure to output logs in the format and layout that suit your needs.
To do so, you need to write a Log4J2 configuration in the format you prefer: properties file, XML, YAML, etc. For this example, we will use the XML format, but all these instructions can easily be adapted to other configuration formats.
A few definitions before we start:
- Appenders are the various channels through which you can output your logs. For example, you can use the RollingFileAppender to output logs to a file and rename the file when conditions (named Triggering Policies) are met, or the HttpAppender, which sends logs via HTTP to a target URL.
- Layouts define how a log entry will be output. Log4J2 offers different layouts, such as the Pattern Layout, which defines a text-formatted log, or the JSON Layout, which we will use.
The Log4J2 documentation is very thorough, and I strongly recommend reading it to see everything that is possible with Log4J2.
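To illustrate how an appender and a layout fit together, here is a minimal sketch of the HttpAppender mentioned above (the URL is a placeholder, and a real setup would likely need extra options such as headers or SSL configuration):

```xml
<Appenders>
    <!-- Sends each log event via an HTTP POST to the target URL -->
    <Http name="http" url="https://logs.example.com/ingest">
        <JSONLayout />
    </Http>
</Appenders>
```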
Configuring Log4J2
As mentioned earlier, Log4J2 configuration can be written in many formats. In Mule applications, the configuration uses the XML format and is located in src/main/resources/log4j2.xml, the default path for Maven projects.
This file contains the various appenders configured for your application (including their layouts) as well as the loggers configuration, which defines the log level of the various log categories of your application. This is an example of Mule's default Log4J2 configuration:
<Configuration>
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%-5p %d [%t] %c: %m%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <!-- CXF is used heavily by Mule for web services -->
        <AsyncLogger name="org.apache.cxf" level="WARN"/>
        <!-- Apache Commons tend to make a lot of noise which can clutter the log -->
        <AsyncLogger name="org.apache" level="WARN"/>
        <!-- Reduce startup noise -->
        <AsyncLogger name="org.springframework.beans.factory" level="WARN"/>
        <!-- Mule classes -->
        <AsyncLogger name="org.mule" level="INFO"/>
        <AsyncLogger name="com.mulesoft" level="INFO"/>
        <AsyncRoot level="INFO">
            <AppenderRef ref="Console"/>
        </AsyncRoot>
    </Loggers>
</Configuration>
The JSON Layout
In the example above, the Console appender uses the Pattern Layout, which outputs logs as text. We will replace this layout with a JSONLayout to output our logs as JSON:
<Appenders>
    <Console name="Console" target="SYSTEM_OUT">
        <JSONLayout />
    </Console>
</Appenders>
If you run your application and check the console, you will notice that your logs are now formatted in JSON. Each JSON entry probably spans many lines, so we will configure our appender to output a more compact format, and add a few options to improve our logs:
<Appenders>
    <Console name="Console" target="SYSTEM_OUT">
        <JSONLayout compact="true" eventEol="true" properties="true" stacktraceAsString="true" includeTimeMillis="true" />
    </Console>
</Appenders>
Let's detail these new options:
- compact writes the JSON log without any spaces or new lines between keys. This makes it harder to read but takes much less space in the log file.
- eventEol appends an end-of-line character (\n) after each record, writing one log per line in your log file.
- properties adds the thread context map to the generated JSON. The thread context map includes the correlation ID, the processor path and every variable added to the logging context using the Tracing Module.
- stacktraceAsString writes stack traces as a single string instead of nested objects. This simplifies integration with log management tools and makes stack traces easier to read, as each line ends with the new-line character (\n), allowing a JSON reader to properly format the stacktrace.
- includeTimeMillis outputs the log's time as a timestamp with millisecond precision instead of an instant object, allowing log management tools to properly set the time of a log entry.
These settings ensure that our logs take the minimum amount of space when written, while including all the necessary information in an easy-to-parse format.
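With these options, each entry becomes a single line of JSON. For example, a log produced by this layout might look like this (a hand-written sample; the exact fields and their order may differ in your runtime):

```json
{"timeMillis":1641027600000,"thread":"main","level":"INFO","loggerName":"org.mule.runtime.core.internal.logging.LogUtil","message":"Application started","endOfBatch":true,"threadId":1,"threadPriority":5,"contextMap":{"correlationId":"c3f1a2b4"}}
```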
Adding properties
When using the JSON Layout, you can specify additional properties that will be added to your log payload. These properties can be used to add information about the log that is otherwise unknown to Log4J2, such as the name or version of your application, the type of API it implements (experience, process or system), or any information you might find useful.
Properties can be added as children of the JSONLayout tag by using the KeyValuePair tag:
<JSONLayout compact="true" eventEol="true" properties="true" stacktraceAsString="true" includeTimeMillis="true">
    <KeyValuePair key="appName" value="my-app" />
    <KeyValuePair key="version" value="1.0.0" />
    <KeyValuePair key="appType" value="process" />
</JSONLayout>
Combined with Maven's resource filtering, it is even possible to extract this information from the project's pom.xml. I will write another article on how to use Maven's filtering soon, but here is an example of what we use internally:
<JSONLayout compact="true" eventEol="true" properties="true" stacktraceAsString="true" includeTimeMillis="true">
    <!-- Project properties -->
    <KeyValuePair key="appName" value="${project.name}" />
    <KeyValuePair key="version" value="${project.version}" />
    <KeyValuePair key="appType" value="${app.type}" />
</JSONLayout>
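As a hint of how the placeholders get replaced, resource filtering can be enabled in the pom.xml so that Maven substitutes ${project.name}, ${project.version} and friends at build time. This is a minimal sketch using Maven's default resource directory; your Mule project may already declare resources differently:

```xml
<build>
    <resources>
        <resource>
            <directory>src/main/resources</directory>
            <!-- Replace ${...} placeholders in files such as log4j2.xml -->
            <filtering>true</filtering>
        </resource>
    </resources>
</build>
```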
Writing logs to a JSON file
Now that our logs are formatted in JSON and contain all the information we need, we will configure Log4J2 to output logs through a RollingFileAppender. To do this, we will replace our Console appender with the RollingFileAppender and configure it to write logs to a file named my-app.log.json, under the logs folder of our runtime. Files will be rolled once the log file's size reaches 10 MB, and the last 10 files will be kept on disk:
<Appenders>
    <RollingFile name="json-file" fileName="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}my-app.log.json"
                 filePattern="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}my-app-%i.log.json">
        <JSONLayout compact="true" eventEol="true" properties="true" stacktraceAsString="true" includeTimeMillis="true">
            <!-- Project properties -->
            <KeyValuePair key="appName" value="${project.name}" />
            <KeyValuePair key="version" value="${project.version}" />
            <KeyValuePair key="appType" value="${app.type}" />
        </JSONLayout>
        <SizeBasedTriggeringPolicy size="10 MB" />
        <DefaultRolloverStrategy max="10"/>
    </RollingFile>
</Appenders>
With this appender, logs will be written to $MULE_HOME/logs/my-app.log.json. When a new log is produced by your application, it is appended to the tail of the file. When your log file's size exceeds 10 MB, it is renamed to $MULE_HOME/logs/my-app-1.log.json and a new $MULE_HOME/logs/my-app.log.json is created. When the new file is full, $MULE_HOME/logs/my-app-1.log.json is renamed to $MULE_HOME/logs/my-app-2.log.json, $MULE_HOME/logs/my-app.log.json to $MULE_HOME/logs/my-app-1.log.json, and a new $MULE_HOME/logs/my-app.log.json is created.
The files will continue to roll until 10 of them exist, at which point the oldest file is deleted. This is very convenient: it keeps the most recent logs available for debugging while avoiding filling your disk.
How to easily read JSON files?
If you open one of these JSON log files, you will notice that it is very hard to read. You could copy-paste its content into a JSON beautifier, but this is far from convenient and will make you lose a lot of time.
Instead, I recommend the jq command. This tool allows you to format your JSON logs, as well as filter them and choose the fields you want to see. It can be installed on virtually any machine and makes it very easy to prettify your JSON file:
cat my-app.log.json | jq
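jq can do much more than pretty-printing. For example, you can project just the fields you care about, or keep only error entries (the field names below match the JSON Layout output shown earlier; the file name is just an example):

```shell
# Show only the timestamp, level and message of each entry
jq '{timeMillis, level, message}' my-app.log.json

# Keep only ERROR entries and print their messages
jq -r 'select(.level == "ERROR") | .message' my-app.log.json
```

Because each log is written on its own line, jq processes the file as a stream of JSON objects, so these commands work even on very large log files.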
Conclusion
Now that your logs are written to JSON log files, they become much easier to integrate into your log management platform. Data can easily be parsed by platforms, which usually support JSON logs natively; mapping is much easier as each piece of information is in its own field; and you can add new properties to your JSON logs with a single short line of XML.
And to show you the final result of our setup, here is a Log4J2 file that we use, which relies on JSON logging and Maven resource filtering to produce informative and easy-to-parse logs picked up by the Datadog agent:
<?xml version="1.0" encoding="utf-8"?>
<Configuration>
    <Appenders>
        <!-- Human readable logs for easy debugging via SSH -->
        <RollingFile name="file" fileName="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}${project.artifactId}.log"
                     filePattern="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}${project.artifactId}-%i.log">
            <PatternLayout pattern="%-5p %d [%t] [%MDC] %c: %m%n" />
            <SizeBasedTriggeringPolicy size="10 MB" />
            <DefaultRolloverStrategy max="10"/>
        </RollingFile>
        <!-- JSON formatted logs consumed by our Datadog agent -->
        <RollingFile name="json-file" fileName="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}${project.artifactId}.log.json"
                     filePattern="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}${project.artifactId}-%i.log.json">
            <JSONLayout compact="true" eventEol="true" properties="true" stacktraceAsString="true" includeTimeMillis="true">
                <!-- Project properties -->
                <KeyValuePair key="appName" value="${project.name}" />
                <KeyValuePair key="version" value="${project.version}" />
                <KeyValuePair key="appType" value="${app.type}" />
                <!-- Datadog log ingestion properties -->
                <KeyValuePair key="service" value="${project.artifactId}" />
            </JSONLayout>
            <SizeBasedTriggeringPolicy size="10 MB" />
            <DefaultRolloverStrategy max="10"/>
        </RollingFile>
    </Appenders>
    <Loggers>
        <!-- Http Logger shows wire traffic on DEBUG. -->
        <!--AsyncLogger name="org.mule.service.http.impl.service.HttpMessageLogger" level="DEBUG" />-->
        <AsyncLogger name="org.mule.service.http" level="WARN"/>
        <AsyncLogger name="org.mule.extension.http" level="WARN"/>
        <!-- Mule logger -->
        <AsyncLogger name="org.mule.runtime.core.internal.processor.LoggerMessageProcessor" level="INFO"/>
        <AsyncRoot level="INFO">
            <AppenderRef ref="file" />
            <AppenderRef ref="json-file" />
        </AsyncRoot>
    </Loggers>
</Configuration>
Top comments (3)
Great article! Thank you so much for sharing
Thanks, how to persist to db with the json datatype?
You might want to look into Log4J's NoSQL appenders. There are probably other SQL appenders available (though none officially supported, it seems), but I have never tried them.