Source Code Link
https://github.com/aeeeeeep/objwatch (aeeeeeep / objwatch)
🗳️ ObjWatch is a Python library to trace and monitor object attributes and method calls.
Overview
ObjWatch is a robust Python library designed to streamline the debugging and monitoring of complex projects. By offering real-time tracing of object attributes and method calls, ObjWatch empowers developers to gain deeper insights into their codebases, facilitating issue identification, performance optimization, and overall code quality enhancement.
ObjWatch may impact your application's performance. It is recommended to use it solely in debugging environments.
Features
- Nested Structure Tracing: Visualize and monitor nested function calls and object interactions with clear, hierarchical logging.
- Enhanced Logging Support: Leverage Python's built-in `logging` module for structured, customizable log outputs, including support for simple and detailed formats. Additionally, to ensure logs are captured even if the logger is disabled or removed by external libraries, you can set `level="force"`. When `level` is set to `"force"`, ObjWatch bypasses the standard logging handlers and uses `print()` to…
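As a quick sketch of the `"force"` option described above (assuming `level` is accepted as a keyword argument by the `ObjWatch` context manager, as the interface list later in this post suggests):

```python
import objwatch

# Sketch: with level="force", ObjWatch prints trace lines via print()
# instead of relying on logging handlers that other libraries may disable.
with objwatch.ObjWatch(['examples/example_usage.py'], level="force"):
    run_traced_code()  # placeholder for whatever code you want to trace
```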
Current Debugging Pain Points
When reading and debugging complex projects, it's common to encounter nested calls a dozen layers deep, which makes it hard to work out the order of execution. The most frustrating case is a multi-process environment: pausing one process in a debugger often makes the other processes wait and time out, forcing you to restart the whole run again and again. Falling back on print statements means function calls are easily missed, which is time-consuming and laborious. I hadn't found a debugging library that combines simplicity with comprehensiveness, so I spent a weekend building a tool to address this pain point.
What is ObjWatch?
ObjWatch is designed specifically to simplify debugging and monitoring of complex projects. It provides real-time tracking of object properties and method calls, and allows for custom hooks to help developers gain deeper insights into the codebase.
Quick Usage Example
You can install it directly with `pip install objwatch`. To run the demonstration below, clone the source code:
git clone https://github.com/aeeeeeep/objwatch
cd objwatch
pip install .
python3 examples/example_usage.py
Executing the above code will produce the following call information:
[2025-01-04 19:15:13] [DEBUG] objwatch: Processed targets:
>>>>>>>>>>
examples/example_usage.py
<<<<<<<<<<
[2025-01-04 19:15:13] [WARNING] objwatch: wrapper 'BaseLogger' loaded
[2025-01-04 19:15:13] [INFO] objwatch: Starting ObjWatch tracing.
[2025-01-04 19:15:13] [INFO] objwatch: Starting tracing.
[2025-01-04 19:15:13] [DEBUG] objwatch: run main <-
[2025-01-04 19:15:13] [DEBUG] objwatch: | run SampleClass.__init__ <- '0':(type)SampleClass, '1':10
[2025-01-04 19:15:13] [DEBUG] objwatch: | end SampleClass.__init__ -> None
[2025-01-04 19:15:13] [DEBUG] objwatch: | run SampleClass.increment <- '0':(type)SampleClass
[2025-01-04 19:15:13] [DEBUG] objwatch: | | upd SampleClass.value None -> 10
[2025-01-04 19:15:13] [DEBUG] objwatch: | | upd SampleClass.value 10 -> 11
[2025-01-04 19:15:13] [DEBUG] objwatch: | end SampleClass.increment -> None
[2025-01-04 19:15:13] [DEBUG] objwatch: | run SampleClass.increment <- '0':(type)SampleClass
[2025-01-04 19:15:13] [DEBUG] objwatch: | | upd SampleClass.value 11 -> 12
[2025-01-04 19:15:13] [DEBUG] objwatch: | end SampleClass.increment -> None
[2025-01-04 19:15:13] [DEBUG] objwatch: | run SampleClass.increment <- '0':(type)SampleClass
[2025-01-04 19:15:13] [DEBUG] objwatch: | | upd SampleClass.value 12 -> 13
[2025-01-04 19:15:13] [DEBUG] objwatch: | end SampleClass.increment -> None
[2025-01-04 19:15:13] [DEBUG] objwatch: | run SampleClass.increment <- '0':(type)SampleClass
[2025-01-04 19:15:13] [DEBUG] objwatch: | | upd SampleClass.value 13 -> 14
[2025-01-04 19:15:13] [DEBUG] objwatch: | end SampleClass.increment -> None
[2025-01-04 19:15:13] [DEBUG] objwatch: | run SampleClass.increment <- '0':(type)SampleClass
[2025-01-04 19:15:13] [DEBUG] objwatch: | | upd SampleClass.value 14 -> 15
[2025-01-04 19:15:13] [DEBUG] objwatch: | end SampleClass.increment -> None
[2025-01-04 19:15:13] [DEBUG] objwatch: | run SampleClass.decrement <- '0':(type)SampleClass
[2025-01-04 19:15:13] [DEBUG] objwatch: | | upd SampleClass.value 15 -> 14
[2025-01-04 19:15:13] [DEBUG] objwatch: | end SampleClass.decrement -> None
[2025-01-04 19:15:13] [DEBUG] objwatch: | run SampleClass.decrement <- '0':(type)SampleClass
[2025-01-04 19:15:13] [DEBUG] objwatch: | | upd SampleClass.value 14 -> 13
[2025-01-04 19:15:13] [DEBUG] objwatch: | end SampleClass.decrement -> None
[2025-01-04 19:15:13] [DEBUG] objwatch: | run SampleClass.decrement <- '0':(type)SampleClass
[2025-01-04 19:15:13] [DEBUG] objwatch: | | upd SampleClass.value 13 -> 12
[2025-01-04 19:15:13] [DEBUG] objwatch: | end SampleClass.decrement -> None
[2025-01-04 19:15:13] [DEBUG] objwatch: end main -> None
[2025-01-04 19:15:13] [INFO] objwatch: Stopping ObjWatch tracing.
[2025-01-04 19:15:13] [INFO] objwatch: Stopping tracing.
The most crucial part of the code is the following:
import objwatch

# Using as a Context Manager with Detailed Logging
with objwatch.ObjWatch(['examples/example_usage.py']):
    main()

# Using the API with Simple Logging
obj_watch = objwatch.watch(['examples/example_usage.py'])
main()
obj_watch.stop()
We can use the tool either through a context manager or via API calls. In the example, we specify tracking for the `examples/example_usage.py` file, meaning that any function, method, or variable within `examples/example_usage.py` will be logged by the tool. This clear, hierarchical logging helps visualize and monitor nested function calls and object interactions. The printed logs include the following event types:

- `run`: Indicates the start of a function or class method execution.
- `end`: Signifies the end of a function or class method execution.
- `upd`: Represents the creation of, or an update to, a variable's value.
- `apd`: Denotes the addition of elements to data structures like lists, sets, or dictionaries.
- `pop`: Marks the removal of elements from data structures like lists, sets, or dictionaries.
The example is relatively simple, but this kind of tracing becomes extremely useful when working with large-scale projects.
Overall Features
ObjWatch provides the following interfaces (a usage sketch follows the list):

- `targets` (list): Files or modules to monitor.
- `exclude_targets` (list, optional): Files or modules to exclude from monitoring.
- `ranks` (list, optional): GPU ranks to track when using `torch.distributed`.
- `output` (str, optional): Path to a file for writing logs.
- `output_xml` (str, optional): Path to the XML file for writing structured logs. If specified, tracing information will be saved in a nested XML format for easy browsing and analysis.
- `level` (str, optional): Logging level (e.g., `logging.DEBUG`, `logging.INFO`, `"force"`, etc.).
- `simple` (bool, optional): Enable simple logging mode with the format `"DEBUG: {msg}"`.
- `wrapper` (FunctionWrapper, optional): Custom wrapper to extend tracing and logging functionality.
- `with_locals` (bool, optional): Enable tracing and logging of local variables within functions during their execution.
- `with_module_path` (bool, optional): Control whether to prepend the module path to function names in logs.
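Here is a rough sketch of how several of these options might be combined, assuming they are accepted as keyword arguments by `objwatch.watch`; the `exclude_targets` path is hypothetical and the defaults may differ:

```python
import logging
import objwatch

# Illustrative only: parameter names follow the interface list above.
obj_watch = objwatch.watch(
    targets=['examples/example_usage.py'],  # files/modules to monitor
    exclude_targets=['examples/utils.py'],  # hypothetical file to skip
    output='objwatch.log',                  # also write logs to a file
    level=logging.DEBUG,                    # logging level
    simple=True,                            # "DEBUG: {msg}" style output
    with_locals=False,                      # do not trace local variables
)
main()  # the entry point from the example script
obj_watch.stop()
```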
Key Feature: Custom Wrapper Extensions
ObjWatch provides the `FunctionWrapper` abstract base class, allowing users to create custom wrappers that extend and customize the library's tracing and logging functionality. By inheriting from `FunctionWrapper`, developers can implement behaviors tailored to specific project requirements; these behaviors are executed during function calls, returns, and variable updates, enabling more specialized monitoring.
FunctionWrapper Class
The `FunctionWrapper` class defines three core methods that must be implemented:

- `wrap_call(self, func_name: str, frame: FrameType) -> str`: Invoked at the beginning of a function call. It receives the function name and the current frame object, which contains the execution context, including local variables and the call stack. Implement this method to extract, log, or modify information before the function executes.
- `wrap_return(self, func_name: str, result: Any) -> str`: Called upon a function's return. It receives the function name and the result returned by the function. Use this method to log, analyze, or alter information after the function has completed execution.
- `wrap_upd(self, old_value: Any, current_value: Any) -> Tuple[str, str]`: Triggered when a variable is updated, receiving the old value and the new value. Use it to log changes to variables, enabling the tracking and debugging of variable state transitions.
For more details on frame objects, refer to the official Python documentation.
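To make the interface concrete, here is a minimal sketch of a custom wrapper. The class name and its behavior are invented for illustration; it assumes `FunctionWrapper` can be imported from `objwatch.wrappers` and that returning formatted strings from these three methods is all that is required:

```python
from types import FrameType
from typing import Any, Tuple

from objwatch.wrappers import FunctionWrapper  # assumed import path


class LocalsCountLogger(FunctionWrapper):
    """Hypothetical wrapper: logs the number of locals at call time,
    the type of the return value, and type transitions on updates."""

    def wrap_call(self, func_name: str, frame: FrameType) -> str:
        # frame.f_locals holds the arguments and locals at the moment of the call
        return f"<- {len(frame.f_locals)} locals"

    def wrap_return(self, func_name: str, result: Any) -> str:
        # Summarize the return value by its type name
        return f"-> {type(result).__name__}"

    def wrap_upd(self, old_value: Any, current_value: Any) -> Tuple[str, str]:
        # Report only the types of the old and new values
        return type(old_value).__name__, type(current_value).__name__
```

Such a wrapper would then be passed via the `wrapper` option listed above (whether a class or an instance is expected should be checked against the library's documentation).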
TensorShapeLogger
This is an example of a custom wrapper I implemented for my own use case; the code lives in the `objwatch/wrappers.py` file. The wrapper automatically records the tensor shapes of the inputs and outputs of all function and method calls within the specified module, as well as the states of traced variables. This is extremely useful for understanding the execution logic of complex distributed frameworks.
class TensorShapeLogger(FunctionWrapper):
    """
    TensorShapeLogger extends FunctionWrapper to log the shapes of torch.Tensor objects.
    """

    @staticmethod
    def _process_tensor_item(seq: List[Any]) -> Optional[List[Any]]:
        """
        Process a sequence to extract tensor shapes if all items are torch.Tensor.

        Args:
            seq (List[Any]): The sequence to process.

        Returns:
            Optional[List[Any]]: List of tensor shapes or None if not applicable.
        """
        if torch is not None and all(isinstance(x, torch.Tensor) for x in seq):
            return [x.shape for x in seq]
        else:
            return None

    def wrap_call(self, func_name: str, frame: FrameType) -> str:
        """
        Format the function call information, including tensor shapes if applicable.

        Args:
            func_name (str): Name of the function being called.
            frame (FrameType): The current stack frame.

        Returns:
            str: Formatted call message.
        """
        args, kwargs = self._extract_args_kwargs(frame)
        call_msg = self._format_args_kwargs(args, kwargs)
        return call_msg

    def wrap_return(self, func_name: str, result: Any) -> str:
        """
        Format the function return information, including tensor shapes if applicable.

        Args:
            func_name (str): Name of the function returning.
            result (Any): The result returned by the function.

        Returns:
            str: Formatted return message.
        """
        return_msg = self._format_return(result)
        return return_msg

    def wrap_upd(self, old_value: Any, current_value: Any) -> Tuple[str, str]:
        """
        Format the update information of a variable, including tensor shapes if applicable.

        Args:
            old_value (Any): The old value of the variable.
            current_value (Any): The new value of the variable.

        Returns:
            Tuple[str, str]: Formatted old and new values.
        """
        old_msg = self._format_value(old_value)
        current_msg = self._format_value(current_value)
        return old_msg, current_msg

    def _format_value(self, value: Any, is_return: bool = False) -> str:
        """
        Format a value into a string, logging tensor shapes if applicable.

        Args:
            value (Any): The value to format.
            is_return (bool): Flag indicating if the value is a return value.

        Returns:
            str: Formatted value string.
        """
        if torch is not None and isinstance(value, torch.Tensor):
            formatted = f"{value.shape}"
        elif isinstance(value, log_element_types):
            formatted = f"{value}"
        elif isinstance(value, log_sequence_types):
            formatted_sequence = EventHandls.format_sequence(value, func=TensorShapeLogger._process_tensor_item)
            if formatted_sequence:
                formatted = f"{formatted_sequence}"
            else:
                formatted = f"(type){value.__class__.__name__}"
        else:
            formatted = f"(type){value.__class__.__name__}"

        if is_return:
            if isinstance(value, torch.Tensor):
                return f"{value.shape}"
            elif isinstance(value, log_sequence_types) and formatted:
                return f"[{formatted}]"
            return f"{formatted}"
        return formatted
In deep learning projects, the shape and dimensions of tensors are crucial: a small dimension error can prevent the entire model from training or predicting correctly, and manually checking each tensor's shape is tedious and error-prone. The `TensorShapeLogger` automates the recording of tensor shapes, helping developers to:
- Quickly identify dimension mismatch issues: Automatically records shape information to promptly detect and fix dimension errors.
- Optimize model architecture: By tracking the changes in tensor shapes, optimize the network structure to improve model performance.
- Increase debugging efficiency: Reduce the time spent manually checking tensor shapes, allowing focus on core model development.
Example of Using a Custom Wrapper
It is recommended to refer to the `tests/test_torch_train.py` file, which contains a complete example of a PyTorch training process and demonstrates how to integrate ObjWatch for monitoring and logging. A rough sketch of such an integration is shown below.
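The following is not the content of `tests/test_torch_train.py`, just a simplified sketch of what such an integration might look like, assuming `TensorShapeLogger` is importable from `objwatch.wrappers` and can be passed via the `wrapper` argument; `train_demo.py` is a hypothetical file name for the script that defines `train()`:

```python
import torch
import torch.nn as nn

import objwatch
from objwatch.wrappers import TensorShapeLogger  # assumed import path


def train() -> None:
    # A tiny stand-in for a real training loop
    model = nn.Linear(4, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(8, 4), torch.randn(8, 2)
    for _ in range(3):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()


# Trace the (hypothetical) file containing train(), logging tensor shapes
obj_watch = objwatch.watch(['train_demo.py'], wrapper=TensorShapeLogger)
train()
obj_watch.stop()
```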
Notes
⚠️ Performance Warning
ObjWatch can impact the performance of your program when used in a debugging environment. Therefore, it is recommended to use it only during the debugging and development phases.
This is just an initial write-up; I plan to add more over time. If you find it useful, feel free to give it a star.
The library is still actively being updated. If you have any questions or suggestions, please leave a comment or open an issue in the repository.