Michael Bogan for Heroku

Working with Heroku Logplex for Comprehensive Application Logging

With the complexity of modern software applications, one of the biggest challenges for developers is simply understanding how their applications behave. Understanding the behavior of your app is key to maintaining its stability, performance, and security.

This is a big reason why we do application logging: to capture and record events through an application’s lifecycle, so that we can gain valuable insights into our application. What kinds of insights? Application activity (user interactions, system events, and so on), errors and exceptions, resource usage, potential security threats, and more.

When developers capture and analyze these logs effectively, application stability and security improve, which in turn improves the user experience. It’s a win-win for everybody.

Application logging is easy—if you have the right tools. In this post, we’ll walk through using Heroku Logplex as a centralized logging solution. We’ll start by deploying a simple Python application to Heroku. Then, we’ll explore the different ways to use Logplex to view and filter our logs. Finally, we’ll show how to use Logplex to send your logs to an external service for further analysis.

Ready to dive in? Let’s start with a brief introduction to Heroku Logplex.

Introducing Heroku Logplex

Heroku Logplex is a central hub that collects, aggregates, and routes log messages from various sources across your Heroku applications. Those sources include:

  • Dyno logs: generated by your application running on Heroku dynos.
  • Heroku logs: generated by Heroku itself, such as platform events and deployments.
  • Custom sources: generated by external sources, such as databases or third-party services.

By consolidating logs in a single, central place, Logplex simplifies log management and analysis. You can find all your logs in one place for simplified monitoring and troubleshooting. You can perform powerful filtering and searching on your logs. And you can even route logs to different destinations for further processing and analysis.

Core components

At its heart, Heroku Logplex consists of three crucial components that work together to streamline application logging:

  1. Log sources are the starting points where log messages originate within your Heroku environment. These are the dyno logs, Heroku logs, and custom sources mentioned above.
  2. Log drains are the designated destinations for your log messages. Logplex allows you to configure drains to route your logs to various endpoints for further processing. Popular options for log drains include:
  • External logging services with advanced log management features, dashboards, and alerting capabilities. Examples are Datadog, Papertrail, and Sumo Logic.
  • Notification systems that send alerts or notifications based on specific log entries, enabling real-time monitoring and troubleshooting.
  • Custom destinations such as your own Syslog or web server (see the example just after this list).
  3. Log filters are powerful tools that act as checkpoints, allowing you to refine the log messages before they reach their final destinations. Logplex allows you to filter logs based on source, log level, and even message content. By using filters, you can significantly reduce the volume of data sent to your drains, focusing only on the most relevant log entries for that specific destination.
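
As a quick illustration of attaching a custom drain (the hostname and app name below are placeholders), a Syslog-style drain can be added with a single CLI command, and you can list an app's drains at any time:

$ heroku drains:add syslog+tls://logs.example.com:6514 -a your-app-name
$ heroku drains -a your-app-name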

Routing and processing

As Logplex collects log messages from all your defined sources, it passes these messages through your configured filters, potentially discarding entries that don't match the criteria. Finally, filtered messages are routed to their designated log drains for further processing or storage.

Alright, enough talk. Show me how, already!

Integrating Logplex with Your Application

Let’s walk through how to use Logplex for a simple Python application. To get started, make sure you have a Heroku account. Then, download and install the Heroku CLI.

Demo application

You can find our very simple Python script (main.py) in the GitHub repo for this demo. Our script runs an endless integer counter, starting from zero. With each iteration, it emits a log message (cycling through the log levels INFO, DEBUG, ERROR, and WARNING). Whenever it detects a prime number, it emits an additional CRITICAL log event to let us know. We use isprime from the sympy library to determine whether a number is prime.
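
If you'd rather not open the repo just yet, here is a rough sketch of what main.py does; the actual file in the repo may differ in its details. The WSGI app at the bottom exists only so that gunicorn (and later, Heroku's web dyno) has something to bind to a port:

import logging
import threading
import time

from pythonjsonlogger import jsonlogger  # from the python-json-logger package
from sympy import isprime

# Emit machine-readable JSON log lines (structured logging).
handler = logging.StreamHandler()
handler.setFormatter(jsonlogger.JsonFormatter(
    "%(levelname)s %(name)s %(message)s",
    rename_fields={"levelname": "level"},
    timestamp=True,
))
logger = logging.getLogger()
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

# Cycle through these levels as the counter increments.
LEVELS = [logging.INFO, logging.DEBUG, logging.ERROR, logging.WARNING]

def count_and_log():
    """Endless counter: log every number, and flag primes with a CRITICAL event."""
    n = 0
    while True:
        logger.log(LEVELS[n % len(LEVELS)], "New number", extra={"Number": n})
        if isprime(n):
            logger.critical("Prime found!", extra={"Prime Number": n})
        time.sleep(1)
        n += 1

# Run the counter in the background so the web process can bind to a port.
threading.Thread(target=count_and_log, daemon=True).start()

def app(environ, start_response):
    """Minimal WSGI app so gunicorn has something to serve."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"logging primes...\n"]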

To run this Python application on your local machine, first clone the repository. Then, install the dependencies:

(venv) ~/project$ pip install -r requirements.txt

Next, start up the Python application. We use gunicorn to spin up a server that binds to a port, while our prime-number logging continues to run in the background. (We do this because a Heroku web dyno is expected to bind to a port, so that’s how we’ve written our application even though our focus is on logging.)

(venv) ~/project$ gunicorn -w 1 --bind localhost:8000 main:app
[2024-03-25 23:18:59 -0700] [785441] [INFO] Starting gunicorn 21.2.0
[2024-03-25 23:18:59 -0700] [785441] [INFO] Listening at: http://127.0.0.1:8000 (785441)
[2024-03-25 23:18:59 -0700] [785441] [INFO] Using worker: sync
[2024-03-25 23:18:59 -0700] [785443] [INFO] Booting worker with pid: 785443
{"timestamp": "2024-03-25T23:18:59.507828Z", "level": "INFO", "name": "root", "message": "New number", "Number": 0}
{"timestamp": "2024-03-25T23:19:00.509182Z", "level": "DEBUG", "name": "root", "message": "New number", "Number": 1}
{"timestamp": "2024-03-25T23:19:01.510634Z", "level": "ERROR", "name": "root", "message": "New number", "Number": 2}
{"timestamp": "2024-03-25T23:19:02.512100Z", "level": "CRITICAL", "name": "root", "message": "Prime found!", "Prime Number": 2}
{"timestamp": "2024-03-25T23:19:05.515133Z", "level": "WARNING", "name": "root", "message": "New number", "Number": 3}
{"timestamp": "2024-03-25T23:19:06.516567Z", "level": "CRITICAL", "name": "root", "message": "Prime found!", "Prime Number": 3}
{"timestamp": "2024-03-25T23:19:09.519082Z", "level": "INFO", "name": "root", "message": "New number", "Number": 4}

Simple enough. Now, let’s get ready to deploy it and work with logs.

Create the app

We start by logging into Heroku through the CLI.

$ heroku login

Then, we create a new Heroku app. I’ve named my app logging-primes-in-python, but you can name yours whatever you’d like.

$ heroku apps:create logging-primes-in-python
Creating ⬢ logging-primes-in-python... done
https://logging-primes-in-python-6140bfd3c044.herokuapp.com/ | https://git.heroku.com/logging-primes-in-python.git

Next, we add a Git remote named heroku to our local clone of the repo containing this Python application.

$ heroku git:remote -a logging-primes-in-python
set git remote heroku to https://git.heroku.com/logging-primes-in-python.git

A note on requirements.txt and Procfile

We need to let Heroku know what dependencies our Python application needs, and also how it should start up our application. To do this, our repository has two files: requirements.txt and Procfile.

The first file, requirements.txt, looks like this:

python-json-logger==2.0.4
pytest==8.0.2
sympy==1.12
gunicorn==21.2.0


And Procfile looks like this:

web: gunicorn -w 1 --bind 0.0.0.0:${PORT} main:app

That’s it. Our entire repository has these files:

$ tree
.
├── main.py
├── Procfile
└── requirements.txt

0 directories, 3 files

Deploy the code

Now, we’re ready to deploy our code. We run this command:

$ git push heroku main
…
remote: Building source:
remote: 
remote: -----> Building on the Heroku-22 stack
remote: -----> Determining which buildpack to use for this app
remote: -----> Python app detected
…
remote: -----> Installing requirements with pip
…
remote: -----> Launching...
remote:        Released v3
remote:        https://logging-primes-in-python-6140bfd3c044.herokuapp.com/ deployed to Heroku
remote: 
remote: Verifying deploy... done.

Verify the app is running

To verify that everything works as expected, we can dive into Logplex right away. Logplex is enabled by default for all Heroku applications.

$ heroku logs --tail -a logging-primes-in-python
…
2024-03-22T04:34:15.540260+00:00 heroku[web.1]: Starting process with command `gunicorn -w 1 --bind 0.0.0.0:${PORT} main:app`
…
2024-03-22T04:34:16.425619+00:00 app[web.1]: {"timestamp": "2024-03-22T04:34:16.425552Z", "level": "INFO", "name": "root", "message": "New number", "taskName": null, "Number": 0}
2024-03-22T04:34:17.425987+00:00 app[web.1]: {"timestamp": "2024-03-22T04:34:17.425837Z", "level": "DEBUG", "name": "root", "message": "New number", "taskName": null, "Number": 1}
2024-03-22T04:34:18.000000+00:00 app[api]: Build succeeded
2024-03-22T04:34:18.426354+00:00 app[web.1]: {"timestamp": "2024-03-22T04:34:18.426205Z", "level": "ERROR", "name": "root", "message": "New number", "taskName": null, "Number": 2}
2024-03-22T04:34:19.426700+00:00 app[web.1]: {"timestamp": "2024-03-22T04:34:19.426534Z", "level": "CRITICAL", "name": "root", "message": "Prime found!", "taskName": null, "Prime Number": 2}

We can see that logs are already being written. Heroku’s log format follows this scheme (a worked example follows the list below):

timestamp source[dyno]: message
  • Timestamp: The date and time recorded at the time the dyno or component produced the log line. The timestamp is in the format specified by RFC5424 and includes microsecond precision.
  • Source: All of your app’s dynos (web dynos, background workers, cron) have the source app. Meanwhile, all of Heroku’s system components (HTTP router, dyno manager) have the source heroku.
  • Dyno: The name of the dyno or component that wrote the log line. For example, web dyno #1 appears as web.1, and the Heroku HTTP router appears as router.
  • Message: The content of the log line. Logplex splits any line generated by a dyno that exceeds 10,000 bytes into 10,000-byte chunks (without adding extra trailing newlines), and each chunk is submitted as a separate log line.
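
To make the scheme concrete, here is a small, hypothetical Python snippet that splits one of the drained lines from the output above into these four components. It is purely illustrative; Logplex does this bookkeeping for you:

import re

# timestamp source[dyno]: message
HEROKU_LINE = re.compile(r"^(?P<timestamp>\S+) (?P<source>\w+)\[(?P<dyno>[^\]]+)\]: (?P<message>.*)$")

sample = '2024-03-22T04:34:19.426700+00:00 app[web.1]: {"level": "CRITICAL", "message": "Prime found!", "Prime Number": 2}'
parts = HEROKU_LINE.match(sample).groupdict()
print(parts["timestamp"])              # 2024-03-22T04:34:19.426700+00:00
print(parts["source"], parts["dyno"])  # app web.1
print(parts["message"])                # the JSON payload emitted by our app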

View and filter logs

We’ve seen the first option for examining our logs: the Heroku CLI. You can use command-line arguments, such as --source and --dyno, to filter and specify which logs to view.

To specify the number of (most recent) log entries to view, do this:

$ heroku logs --num 10

To filter down logs to a specific dyno or source, do this:

$ heroku logs --dyno web.1
$ heroku logs --source app

Of course, you can combine these filters, too:

$ heroku logs --source app --dyno web.1

The Heroku Dashboard is another place where you can look at your logs. On your app page, click More -> View logs.


Here is what we see:

[Screenshot: the Heroku Dashboard log viewer]

If you look closely, you’ll see different sources: heroku and app.

Log Drains

Let’s demonstrate how to use a log drain. For this, we’ll use BetterStack (formerly Logtail). We create a free account. After logging in, we navigate to the Source page and click Connect source.


We enter a name for our source and select Heroku as the source platform. Then, we click Create source.


After creating our source, BetterStack provides the Heroku CLI command we would use to add a log drain for sending logs to BetterStack.


Technically, this command adds an HTTPS drain that points to an endpoint hosted by BetterStack. We run the command in our terminal, and then we restart our application:

$ heroku drains:add \
"https://in.logs.betterstack.com:6515/events?source_token=YKGWLN7****************" \
-a logging-primes-in-python


Successfully added drain https://in.logs.betterstack.com:6515/events?source_token=YKGWLN7*****************

$ heroku restart -a logging-primes-in-python

Almost instantly, we begin to see our Heroku logs appear on the Live tail page at BetterStack.


By using a log drain to send our logs from Heroku Logplex to an external service, we can take advantage of the features from BetterStack to work with our Heroku logs. For example, we can create visualization charts and configure alerts on certain log events.

Custom drains

In our example above, we created a custom HTTPS log drain that happened to point to an endpoint from BetterStack. However, we can send our logs to any endpoint we want. We could even send our logs to another Heroku app! 🤯 Imagine building a web service on Heroku that only Heroku Logplex can make POST requests to.
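
To make that idea concrete, here is a minimal sketch of such a receiving service. It assumes Flask (which is not part of our demo app) and a hypothetical /logs route. Heroku's HTTPS drains POST batches of syslog-formatted lines to the drain URL, so the handler simply iterates over the lines in each request body:

from flask import Flask, request

app = Flask(__name__)

@app.route("/logs", methods=["POST"])
def receive_logs():
    # Each POST from an HTTPS drain can carry several log lines in one body.
    body = request.get_data(as_text=True)
    for line in body.splitlines():
        if line.strip():
            print("drained:", line)  # in a real service you'd parse and store these
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)

You would then point a drain at the deployed receiver, for example with heroku drains:add https://your-receiver.example.com/logs -a logging-primes-in-python (again, the hostname is a placeholder).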

Logging best practices

Before we conclude our walkthrough, let’s briefly touch on some logging best practices.

  1. Focus on relevant events: Log only the information that’s necessary to understand and troubleshoot your application's behavior. Prioritize logging application errors, user actions, data changes, and other crucial activities.
  2. Enrich logs with context: Include details that provide helpful context to logged events. Your future troubleshooting self will thank you. So, instead of just logging "User logged in," capture details like user ID, device information, and relevant data associated with the login event.
  3. Embrace structured logging: Use a standardized format like JSON to make your logs machine-readable. This allows easier parsing and analysis by logging tools, saving you time in analysis (see the short example after this list).
  4. Protect sensitive data: Never log anything that could compromise user privacy or violate data regulations. This includes passwords, credit card information, or other confidential data.
  5. Take advantage of log levels: Use different log levels (like DEBUG, INFO, WARNING, and ERROR) to categorize log events based on their severity. This helps with issue prioritization, allowing you to focus on critical events requiring immediate attention.
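
As a short illustration of points 2, 3, and 5, here is a sketch using python-json-logger (which our demo app already depends on). The field names and values are made up for the example:

import logging
from pythonjsonlogger import jsonlogger

handler = logging.StreamHandler()
handler.setFormatter(jsonlogger.JsonFormatter(timestamp=True))
logger = logging.getLogger("auth")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Bare message: hard to act on later.
logger.info("User logged in")

# Same event, enriched with context, emitted as structured JSON,
# and logged at a level that reflects its severity.
logger.info("User logged in", extra={"user_id": 42, "device": "ios", "mfa_used": True})
logger.warning("Login failed", extra={"user_id": 42, "reason": "bad_password", "attempts": 3})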

Conclusion

Heroku Logplex empowers developers and operations teams with a centralized and efficient solution for application logging within the Heroku environment. While our goal in this article was to provide a basic foundation for understanding Heroku Logplex, remember that the platform offers a vast array of advanced features to explore and customize your logging based on your specific needs.

As you dig deeper into Heroku’s documentation, you’ll come across advanced functionalities like:

  • Customizable log processing: Leverage plugins and filters to tailor log processing workflows for specific use cases.
  • Real-time alerting: Configure alerts based on log patterns or events to proactively address potential issues.
  • Advanced log analysis tools: Integrate with external log management services for comprehensive log analysis, visualization, and anomaly detection.

By understanding the core functionalities and exploring the potential of advanced features, you can leverage Heroku Logplex to create a robust and efficient logging strategy. Ultimately, good logging will go a long way in enhancing the reliability, performance, and security of your Heroku applications.
