Nithin Bharadwaj

Java Logging Best Practices: 5 Essential Production Techniques for High-Performance Applications

As a best-selling author, I invite you to explore my books on Amazon. Don't forget to follow me on Medium and show your support. Thank you! Your support means the world!

Java Logging: Essential Production Practices

Logging transforms operational chaos into clarity. In production, it's our diagnostic lifeline—revealing hidden failures and performance bottlenecks. I've witnessed teams spend hours reproducing issues that proper logs would solve instantly. These five practices prevent such frustrations.

Centralized configuration saves headaches. Externalizing settings prevents log format conflicts across services. I prefer Logback or Log4j2 for their flexibility. Consider this logback.xml snippet enforcing JSON formatting and rotation policies:

<!-- Unified configuration for microservices -->
<configuration scan="true" scanPeriod="30 seconds">
  <appender name="CLOUD" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOG_PATH}/application.log</file>
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
      <providers>
        <pattern>
          <pattern>{"time":"%date{ISO8601}", "service":"${APP_NAME}"}</pattern>
        </pattern>
        <logLevel/>
        <mdc/>
        <arguments/> <!-- emits StructuredArguments as JSON fields -->
        <message/>
      </providers>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
      <fileNamePattern>${LOG_PATH}/application.%d{yyyy-MM}.%i.log.gz</fileNamePattern>
      <maxFileSize>100MB</maxFileSize>
      <maxHistory>60</maxHistory>
    </rollingPolicy>
  </appender>
  <root level="INFO">
    <appender-ref ref="CLOUD"/>
  </root>
</configuration>

With scan="true", the configuration reloads within 30 seconds of a change, and rotated logs are compressed automatically. Without this consistency, debugging distributed systems becomes needle-in-haystack work.

Structured logging is non-negotiable for automation. When services emit JSON like this:

{
  "timestamp": "2023-11-05T14:23:01Z",
  "level": "ERROR",
  "message": "Payment processing failed",
  "transactionId": "txn-7d83f2",
  "userId": "usr-901a",
  "service": "billing-service"
}

We immediately query logs for transactionId="txn-7d83f2" across services. Contrast this with searching fragmented text logs during outages. Here’s how to implement it in code:

// Using SLF4J with Logstash encoder
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import net.logstash.logback.argument.StructuredArguments;

public class PaymentProcessor {
    private static final Logger logger = LoggerFactory.getLogger(PaymentProcessor.class);

    public void processPayment(String userId, String transactionId) {
        try {
            // Business logic
        } catch (Exception ex) {
            logger.error("Payment failed for user {}: {}", 
                StructuredArguments.keyValue("userId", userId),
                StructuredArguments.keyValue("transactionId", transactionId),
                ex);
        }
    }
}

Contextual enrichment turns logs into narratives. OpenTelemetry instrumentation can inject trace IDs automatically, and we can extend the same idea with our own business context. Imagine tracking a user’s journey through checkout:

// Enriching logs with MDC (Mapped Diagnostic Context)
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class OrderService {
    private static final Logger logger = LoggerFactory.getLogger(OrderService.class);

    public void createOrder(String sessionId, String userId) {
        MDC.put("sessionId", sessionId);
        MDC.put("userId", userId);
        try {
            logger.info("Order started");
            // Processing logic
            logger.info("Inventory reserved");
        } finally {
            MDC.clear(); // Critical to avoid leaking context to the next task on this thread!
        }
    }
}

This stamps every log line within createOrder with the session and user IDs. Forgetting MDC.clear() once left our team with cross-contaminated logs for hours, which is why the cleanup belongs in a finally block.
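
In request-driven services, the same pattern is easier to enforce once at the edge. Below is a minimal sketch of a servlet filter that tags every log line in a request with a correlation ID and clears the MDC no matter what happens downstream; the class name and the requestId key are illustrative, and the jakarta.servlet imports assume a Servlet 5+ (Spring Boot 3) stack:

// Hypothetical filter that scopes MDC values to a single request
import java.io.IOException;
import java.util.UUID;

import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;

import org.slf4j.MDC;

public class RequestContextFilter implements Filter {
    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        MDC.put("requestId", UUID.randomUUID().toString()); // illustrative key
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.clear(); // runs even if the handler throws
        }
    }
}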

Intelligent level filtering prevents alert fatigue. My production configuration:

<!-- Logback threshold control with environment awareness (springProfile requires logback-spring.xml) -->
<springProfile name="prod">
    <root level="INFO">
        <appender-ref ref="SYSLOG"/>
    </root>
    <logger name="com.company.sensitive" level="WARN"/>
</springProfile>

<springProfile name="debug">
    <root level="DEBUG">
        <appender-ref ref="CONSOLE"/>
    </root>
</springProfile>

We route DEBUG logs to console during troubleshooting but keep production at INFO. Crucially, we suppress noisy third-party libs:

<!-- Silence chatty libraries -->
<logger name="org.hibernate" level="WARN"/>
<logger name="org.apache.kafka" level="ERROR"/>
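
Level filtering also pays off inside the code. With SLF4J’s parameterized messages, the template is never formatted when the level is disabled; an explicit guard is only worth adding when building the argument itself is expensive. A small sketch, with an illustrative class and summarize() method standing in for real work:

// Parameterized logging and an explicit level guard
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CartLogger {
    private static final Logger logger = LoggerFactory.getLogger(CartLogger.class);

    public void logCart(Object cart) {
        // Cheap when DEBUG is off: the message template is never formatted
        logger.debug("Cart contents: {}", cart);

        // Guard only when constructing the argument is itself expensive
        if (logger.isDebugEnabled()) {
            logger.debug("Cart snapshot: {}", summarize(cart));
        }
    }

    private String summarize(Object cart) {
        return String.valueOf(cart); // stand-in for an expensive computation
    }
}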

Asynchronous appenders maintain throughput. Blocking on disk I/O during traffic spikes causes cascading failures. Here’s a setup that keeps log writes off the request threads:

<!-- Log4j2 async configuration -->
<Configuration>
  <Appenders>
    <File name="AsyncFile" fileName="app.log">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
    </File>
    <Async name="Async" bufferSize="8192">
      <AppenderRef ref="AsyncFile"/>
    </Async>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Async"/>
    </Root>
  </Loggers>
</Configuration>

The bufferSize="8192" lets up to 8,192 events queue in memory during bursts. When that queue fills, the Async appender blocks callers by default (or drops events if blocking="false"), so we pair it with monitoring for overflows; we once lost logs when the buffer filled up during a denial-of-service attack.
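
The Async appender still pushes every event through an in-process queue; Log4j2 can go a step further with fully asynchronous loggers backed by the LMAX Disruptor. A minimal sketch, assuming a recent Log4j2 2.x with com.lmax:disruptor on the classpath (older releases use the Log4jContextSelector property name); the property must be set before the first logger is created:

// Switch Log4j2 to all-asynchronous loggers; requires the LMAX Disruptor dependency
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class Bootstrap {
    public static void main(String[] args) {
        // Must run before any logger is obtained
        System.setProperty("log4j2.contextSelector",
                "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");

        Logger logger = LogManager.getLogger(Bootstrap.class);
        logger.info("Asynchronous loggers active");
    }
}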

Personal Insight: I once debugged a memory leak where the culprit was hidden in DEBUG logs. We temporarily enabled debugging in production with level-based filters, found the leaky connection pool, then reverted. This surgical approach avoided performance hits.
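
If you’d rather not redeploy for that, Spring Boot’s Actuator loggers endpoint can change levels over HTTP, or you can call the Logback API directly. A minimal sketch, assuming Logback is the SLF4J backend; the helper class and the "com.company.pool" logger name are illustrative:

// Illustrative helper for raising or lowering one logger's level at runtime
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.LoggerContext;
import org.slf4j.LoggerFactory;

public class LogLevelToggle {
    public static void setLevel(String loggerName, String level) {
        // The cast is only valid when Logback is the bound SLF4J implementation
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();
        context.getLogger(loggerName).setLevel(Level.toLevel(level, Level.INFO));
    }
}

// During the incident: LogLevelToggle.setLevel("com.company.pool", "DEBUG");
// After capturing the data: LogLevelToggle.setLevel("com.company.pool", "INFO");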

These practices build observability that scales. Centralized configs enforce standards, structured logs enable automation, context traces workflows, filtering reduces noise, and async writes protect performance. Implement them incrementally—start with JSON formatting and build from there. Your future self during a 3 AM outage will thank you.

📘 Check out my latest ebook for free on my channel!

Be sure to like, share, comment, and subscribe to the channel!


101 Books

101 Books is an AI-driven publishing company co-founded by author Aarav Joshi. By leveraging advanced AI technology, we keep our publishing costs incredibly low—some books are priced as low as $4—making quality knowledge accessible to everyone.

Check out our book Golang Clean Code available on Amazon.

Stay tuned for updates and exciting news. When shopping for books, search for Aarav Joshi to find more of our titles. Use the provided link to enjoy special discounts!

Our Creations

Be sure to check out our creations:

Investor Central | Investor Central Spanish | Investor Central German | Smart Living | Epochs & Echoes | Puzzling Mysteries | Hindutva | Elite Dev | JS Schools


We are on Medium

Tech Koala Insights | Epochs & Echoes World | Investor Central Medium | Puzzling Mysteries Medium | Science & Epochs Medium | Modern Hindutva
