This article was originally written by Ravgeet Dhillon on the Honeybadger Developer Blog.
When an application runs, it performs a tremendous number of tasks. Even a simple to-do app handles many of them: user logins, creating to-dos, updating to-dos, deleting to-dos, and duplicating to-dos. Each of these tasks can succeed or end in an error. Hence, there is a need to monitor the events happening in an application and analyze them to identify bottlenecks in its performance. This is where logging is useful.
In this article, you'll learn how to create logs in a Python application using the Python logging module. Logging can help Python developers of all experience levels develop and analyze an application's performance more quickly.
What is Logging?
Logging is the process of storing details about events that happen in an application. Logs provide an additional way to inspect the flow that an application is going through. They are the historical record of the state of the application.
For instance, if an application crashes, it is difficult to track down the issue without any details of what happened before the crash. In such a case, logging provides a log trail, which can help developers detect the cause of the issue by going through the logs and recreating the actual scenario on their machines. With a properly set up logging system, you can trace the cause of an error down to the exact line number.
When and Why Should Developers Use Logging?
If you are a developer, you have probably used print statements to debug your Python application. When you are trying to obtain information for debugging, you need much more context than you might think, such as timestamps, the modules involved, and the message types. You can get away with print in a small application, but this strategy quickly becomes unwieldy in a large, complex application in which multiple modules are communicating and sharing data with each other.
To resolve this issue, you need a well-built logging module that can write all kinds of information related to your application to one of your output streams (e.g., a console or log file) in a structured and predictable manner.
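For example, even a one-line basicConfig call gives every message a timestamp, a severity level, and the module it came from. The snippet below is a minimal sketch (not taken from the original article) of what that structured output looks like:
import logging
# Include the time, module name, and severity level in every message.
logging.basicConfig(format='%(asctime)s %(module)s %(levelname)s: %(message)s', level=logging.INFO)
logging.info('Connected to the database')
# Output resembles: 2023-01-01 12:00:00,000 main INFO: Connected to the database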
Levels of Logging
In the Python logging module, there are five standard levels related to the severity of the events:
- Debug(10): Used mostly in non-production environments to diagnose issues or get specific types of information.
- Info(20): Used to output event information and create a trace of execution.
- Warning(30): Used when something unexpected happens that may cause problems in the future.
- Error(40): Used when a problem occurs that disrupts the normal functioning of the application.
- Critical(50): Used when one or more parts of the application fail to run.
Based on the severity of an event, you can create a log entry, or even configure the logger to record only events at or above a particular severity level.
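Under the hood, each severity level is just an integer constant on the logging module, which is what makes this kind of threshold filtering possible. You can verify the values yourself with a quick check like the following (not part of the original article):
import logging
# Each level maps to an integer; higher numbers mean higher severity.
print(logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
# Prints: 10 20 30 40 50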
In the later sections of this article, you'll learn how to use these severity levels in the logging module.
Using the Logging Module in Python
To understand the concept of logging in Python, create a project directory (python-logging) by running the following commands in the terminal:
mkdir python-logging
cd python-logging
In the python-logging directory, create a main.py file and add the following code to it:
# 1
import logging
# 2
logging.debug('This is a debug message')
logging.info('This is an info message')
logging.warning('This is a warning message')
logging.error('This is an error message')
logging.critical('This is a critical message')
In the above code, you are performing the following:
- You are importing Python's built-in logging module.
- You are using the logging module's helper functions to print logs at each of the five severity levels to the console window.
Run the main.py file by running the following command in your terminal:
python main.py
If you look at the output in the terminal window, you'll notice that only the logs with warning, error, and critical severity levels were printed.
This is because, by default, the logging module only prints logs with a severity level equal to or greater than warning.
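You can confirm this default threshold by asking the root logger for its effective level, as in this quick check (an aside, not part of the original walkthrough):
import logging
# The root logger starts at WARNING (30) until you configure it otherwise.
print(logging.getLogger().getEffectiveLevel())         # 30
print(logging.getLogger().isEnabledFor(logging.INFO))  # False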
Configuring the Severity Level
To override the default severity level, you can configure the logging module by using the basicConfig function and passing the level parameter to it. To do so, update your existing main.py file to include the following:
import logging
logging.basicConfig(level=logging.DEBUG) # added
logging.debug('This is a debug message')
logging.info('This is an info message')
logging.warning('This is a warning message')
logging.error('This is an error message')
logging.critical('This is a critical message')
In the above code, you are performing the following:
- You are using the basicConfig function to configure the logging module's settings. You have specified the level parameter as logging.DEBUG, which configures the logging module to log all messages that have a severity level equal to or greater than the debug level.
Run the main.py file by running the following command in the terminal:
python main.py
This time, you'll see all the logs printed on your terminal window.
Logs of all severity levels created by the Logging module.
Based on your requirements, you can configure the severity level of your application. For example, while developing the application, you can set the level parameter to logging.DEBUG, and in a production environment, you can set it to logging.INFO.
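A common way to switch between the two settings is to read the level from the environment so that the same code runs unchanged in development and production. The sketch below assumes a hypothetical LOG_LEVEL environment variable; adjust the name to whatever your deployment uses:
import logging
import os
# LOG_LEVEL is a hypothetical environment variable; fall back to INFO for production.
level_name = os.getenv('LOG_LEVEL', 'INFO')
logging.basicConfig(level=getattr(logging, level_name.upper(), logging.INFO))
logging.debug('Only visible when LOG_LEVEL=DEBUG')
logging.info('Visible in both development and production')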
Writing Logs to a Log File in Python
When you are developing an application, it is convenient to write the logs to the console. However, in a non-development environment, you might need a way to store the application logs persistently. For this purpose, you can store your logs in a log file. This log file can be accessed at any time and used to debug issues or extract all kinds of information related to the events happening in your Python application.
To create a log file, you can use the filename and filemode parameters in the basicConfig function.
Replace the contents of the main.py file with the following code:
import logging
# 1
logging.basicConfig(filename='app.log', filemode='w', level=logging.DEBUG)
logging.warning('This is a warning message')
logging.debug('This is a debug message')
In the above code, you are performing the following:
- You are specifying the filename and filemode parameters in the basicConfig function. This configuration creates an app.log file at the root of your Python application and stores all the logs in it rather than printing them to the console window. The filemode parameter should be set to either write (w) or append (a) so that Python can create and write to the log file. An important thing to note here is that write mode recreates the log file every time the application runs, whereas append mode adds new logs to the end of the existing file; append mode is the default (an append-mode variant is sketched after the example output below).
Run the main.py file by running the following command in the terminal:
python main.py
Next, check for the app.log file at the root of your application. You'll see the following log messages:
Logs are stored in a log file.
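The example above uses write mode, so app.log is recreated on every run. If you want the file to grow across runs instead, a minimal variant of the same configuration (assuming the same app.log file) keeps the default append mode:
import logging
# 'a' appends to app.log on every run; 'w' would truncate it first.
logging.basicConfig(filename='app.log', filemode='a', level=logging.DEBUG)
logging.info('This message is added to the end of app.log on each run')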
Customizing the Log Message Format
You can log more information about the events happening in your Python application by using the format parameter in the basicConfig function. The format parameter allows you to customize your log message format and include more information, if required, with the help of a long list of LogRecord attributes. For example, you can include a timestamp in your log message.
To do so, update the main.py file by replacing its content with the following code:
import logging
# 1
logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s', level=logging.INFO)
logging.info('This is a custom format log')
In the above code, keep the following in mind:
In the above code, keep the following in mind:
- The format parameter takes a string in which you can use LogRecord attributes in any arrangement you like. The %(asctime)s attribute adds the current timestamp to the log message. The level parameter is set to logging.INFO so that the info-level message is actually emitted.
Run the above code by executing the following command in the terminal:
python main.py
In the terminal window, you'll see the log message with the timestamp information and a custom format:
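If you need even more context, you can combine other LogRecord attributes, such as the file name, line number, and function name, in the same way. The following is a small sketch building on the example above, not code from the original article:
import logging
# %(filename)s, %(lineno)d, and %(funcName)s are standard LogRecord attributes.
logging.basicConfig(
    format='%(asctime)s - %(levelname)s - %(filename)s:%(lineno)d in %(funcName)s - %(message)s',
    level=logging.INFO)
def create_todo():
    logging.info('Creating a new to-do')
create_todo()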
A Detailed Example
Now that you have an idea of how logging works in Python, let's extend the examples to an actual Python program and see how logging fits in. In this example, you'll see how to use the logging.error function in a try-except block.
In the main.py file, replace the existing code with the following:
import logging
logging.basicConfig(
    format='%(asctime)s - %(levelname)s - %(name)s - %(message)s')
# 1
a = 10
b = 'hello world'
# 2
try:
    c = a / b
# 3
except Exception as e:
    logging.error(e)
In the above code, you are performing the following:
- You are declaring two variables, an int a and a string b.
- In the try block of the try-except statement, you are performing a division operation between an int (a) and a string (b). As you probably already know, this is not a valid operation and will cause an exception.
- In the except block, you are catching the exception (e) and logging it using the logging.error function.
Run the above code by executing the following command in the terminal:
python main.py
In the terminal window, you'll see a custom-formatted log message stating the unsupported operation error:
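Note that logging.error(e) records only the exception message. If you also want the full traceback in the log, the standard library provides logging.exception (or the exc_info=True argument to logging.error), which captures it automatically. Here is a small sketch of the same example using it:
import logging
logging.basicConfig(
    format='%(asctime)s - %(levelname)s - %(name)s - %(message)s')
a = 10
b = 'hello world'
try:
    c = a / b
except Exception:
    # logging.exception logs at the ERROR level and appends the traceback.
    logging.exception('Division failed')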
Third-Party Python Modules for Logging
Besides Python's built-in logging library, there are also third-party logging libraries that you can use in Python applications.
Logging in Django
Django is the most popular Python-based Web application development framework. Internally, Django uses the Python logging module but allows the developer to customize the log settings by configuring the LOGGING
config variable in the settings.py
file.
For example, the following configuration writes all log output to the debug.log file at the root of the application:
LOGGING = {
    'version': 1,
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': 'debug.log'
        }
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'DEBUG'
        }
    }
}
This setting captures the logs generated under the django logger namespace, which includes the Django server, web request, and database query logs. For more information on configuring logging in Django, check out Django’s documentation.
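With a configuration like this in place, the usual pattern is to fetch a module-level logger in your own code and let the LOGGING dictionary decide where its messages go. The view below is a hypothetical sketch; for its messages to reach the file handler above, you would also add your app's logger name (or the root logger) under 'loggers':
# views.py (hypothetical)
import logging
from django.http import HttpResponse
logger = logging.getLogger(__name__)
def todo_list(request):
    logger.debug('Rendering the to-do list for %s', request.user)
    return HttpResponse('OK')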
Logging in Flask
Flask, like Django, is a Python-based web application development framework, but it is smaller than Django. It also uses Python's built-in logging module and exposes an app.logger object for writing logs.
For example, to send logs to the console in JSON format, you can use the following configuration in a Flask app before initializing the app:
from logging.config import dictConfig
from flask import Flask

# The JSON formatter below requires the python-json-logger package.
dictConfig({
    'version': 1,
    'formatters': {
        'json': {
            '()': 'pythonjsonlogger.jsonlogger.JsonFormatter'
        }
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'json'
        },
    },
    'root': {
        'level': 'INFO',
        'handlers': ['console']
    }
})
app = Flask(__name__)
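Once the app is created, you can write log messages through app.logger anywhere you have access to the application object, for example inside a route. The route and message below are illustrative, not part of the original example:
@app.route('/todos')
def list_todos():
    # app.logger is a standard logging.Logger that Flask configures for the app.
    app.logger.info('Fetching all to-dos')
    return {'todos': []}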
For more information regarding logging in Flask, you can check out Flask’s logging documentation.
Storing Logs in JSON Format
Instead of logging events in plain text, you might want to output them as JSON objects. With the JSON format, you can easily add records to a database or query them using a JSON-based query language. For this use case, you can utilize the python-json-logger library to store logs in JSON format.
To install this package in your Python application, run the following command in the terminal:
pip install python-json-logger
The basic code to start using this library is as follows:
import logging
from pythonjsonlogger import jsonlogger

# Write JSON-formatted records to log.json.
formatter = jsonlogger.JsonFormatter()
json_handler = logging.FileHandler(filename='log.json')
json_handler.setFormatter(formatter)

logger = logging.getLogger('my_json')
logger.addHandler(json_handler)

logger.error('An error occurred', extra={'type': 'fatal'})
This package creates JSON logs, as shown in the following image:
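The exact fields depend on how the formatter is configured, but with the setup above each line in log.json looks roughly like the following (an illustrative sample, not output copied from the article):
{"message": "An error occurred", "type": "fatal"}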
Conclusion
It is very important to monitor the events occurring in an application, and one of the most recommended ways to do so is logging. Without keeping track of what's happening in your application and how users are utilizing it, you will never be able to identify performance bottlenecks. As the saying goes, you can only improve what you can measure. If you are unable to improve your system, you will lose customers over time. Therefore, it is recommended that you use logging in your applications.
If you are looking for a robust, cloud-based system for real-time monitoring, error tracking, and exception-catching, you might love Honeybadger. You can use it with any framework or language, including Python, Ruby, JavaScript, and PHP.