Have you ever wanted to benchmark parts of your code, but found that most libraries restrict you to measuring at the function level (and nothing more granular)?
We did at my workplace. To solve the issue, we leveraged the power of Python context managers. Check out the following example to understand what I mean:
import time

# `t` is the timer object exposed by the package
def example_func():
    with t('metric1'):
        time.sleep(0.5)
    with t('metric2'):
        time.sleep(0.3)

example_func()

# ... later in the code
m = t.add_total('total').metrics
print(m)
which prints:

{
    'metric1': {'start': 1656844808.09, 'end': 1656844808.59, 'interval': 0.5},
    'metric2': {'start': 1656844808.59, 'end': 1656844808.89, 'interval': 0.3},
    'total':   {'start': 1656844808.09, 'end': 1656844808.89, 'interval': 0.8}
}
Using Python context managers (the "with" statements), your benchmarking scope is not limited to whole functions (in this case, example_func); instead, you can wrap exactly the parts of your code you'd like to measure.
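Under the hood, all a timer like this needs is an object implementing __enter__ and __exit__. The following is a minimal sketch of that idea, using a hypothetical ScopedTimer class (not the package's actual implementation), assuming sequential, non-nested with blocks:

import time

class ScopedTimer:
    # Hypothetical, simplified timer; not the actual implementation.
    # Assumes sequential (non-nested) `with` blocks.
    def __init__(self):
        self.metrics = {}
        self._current = None

    def __call__(self, name):
        # Remember which metric the upcoming `with` block records.
        self._current = name
        return self

    def __enter__(self):
        self.metrics[self._current] = {'start': time.time()}
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        entry = self.metrics[self._current]
        entry['end'] = time.time()
        entry['interval'] = entry['end'] - entry['start']
        return False  # never suppress exceptions raised inside the block

    def add_total(self, name='total'):
        # Add an entry spanning the earliest start and the latest end.
        starts = [m['start'] for m in self.metrics.values()]
        ends = [m['end'] for m in self.metrics.values()]
        self.metrics[name] = {
            'start': min(starts),
            'end': max(ends),
            'interval': max(ends) - min(starts),
        }
        return self  # returning self enables t.add_total('total').metrics

t = ScopedTimer()

Returning self from add_total is what makes the chained t.add_total('total').metrics call in the example possible.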
It is a short but clever piece of code I call Trainer ⏱, now available to anyone on my GitHub, and it can be installed as a package using pip. Feel free to leave a star, as you might need this in the future.