If you're using the ELK stack with Java, you're probably running an Elastic Beat on your servers to tail log files, parse them, and ship them to your log storage, Elasticsearch. However, since your app is the one generating the logs, it can format them directly as the JSON documents Kibana expects and reduce the complexity of your log processing pipeline.
The LoggingEventAsyncDisruptorAppender collects log events in a ring buffer and sends them asynchronously to a Redis server, where they sit for a few seconds until the Logstash workers ship them to the Elasticsearch server. In the example, a Spring Boot app tags each log event with the corresponding spring.profiles.active environment (prod, dev, etc.). You can include other fields, exposed and detailed in the docs.
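On the Logstash side, a minimal pipeline that pops these events off the Redis list and forwards them to Elasticsearch could look roughly like this sketch. The host names are placeholders, and the `your_app_logs` key matches the `<key>` in the appender config; adjust both to your environment:

```
input {
  redis {
    host      => "redis.host"
    port      => 6379
    password  => "password"
    data_type => "list"         # the appender pushes onto a Redis list
    key       => "your_app_logs"
    codec     => json           # events are already JSON documents
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch.host:9200"]
  }
}
```

Because the app already emits the final JSON shape, no grok filters are needed here.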
Logstash Logback encoder: https://github.com/logstash/logstash-logback-encoder
<appender name="REDIS_APPENDER" class="net.logstash.logback.appender.LoggingEventAsyncDisruptorAppender">
  <ringBufferSize>131072</ringBufferSize>
  <appender class="de.idealo.logback.appender.RedisBatchAppender">
    <connectionConfig>
      <scheme>NODE</scheme>
      <host>redis.host</host>
      <port>6379</port>
      <password>password</password>
      <key>your_app_logs</key>
    </connectionConfig>
    <maxBatchMessages>1000</maxBatchMessages>
    <maxBatchSeconds>10</maxBatchSeconds>
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
      <providers>
        <mdc/>
        <pattern>
          <pattern>
            {
              "timestamp": "%d{yyyy-MM-dd'T'HH:mm:ss.SSSZZ}",
              "message": "%message",
              "logger": "%logger",
              "thread": "%thread",
              "level": "%level",
              "host": "${HOSTNAME}",
              "env": "${spring.profiles.active}",
              "app": "front"
            }
          </pattern>
        </pattern>
        <stackTrace>
          <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
            <maxDepthPerThrowable>30</maxDepthPerThrowable>
            <maxLength>4096</maxLength>
            <shortenedClassNameLength>20</shortenedClassNameLength>
            <rootCauseFirst>true</rootCauseFirst>
          </throwableConverter>
        </stackTrace>
      </providers>
    </encoder>
  </appender>
</appender>
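One caveat: plain Logback cannot resolve `${spring.profiles.active}` by itself, since that is a Spring property, not a Logback one. In a Spring Boot app, a sketch of one way to bridge it is to name the file logback-spring.xml and declare a `<springProperty>`; the `springProfile` name below is an arbitrary choice of mine, not something from the example above:

```xml
<!-- logback-spring.xml: expose the Spring property to Logback -->
<springProperty scope="context" name="springProfile"
                source="spring.profiles.active" defaultValue="unknown"/>

<!-- then reference it in the pattern provider instead: -->
<!-- "env": "${springProfile}" -->
```

With that in place the `env` field resolves per environment at startup rather than staying a literal placeholder.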