Hi everyone! I'm a DevOps & QA engineer, and lately I've been working on performance testing.
While using Locust, I kept wishing I could just feed in a CSV file with settings like concurrent users, spawn rate (ramp-up), and duration; until now, I had to type these values in by hand after every load test finished.
To fix this, I built a custom Locust orchestrator that runs all of my test plans in the background and saves the reports for later review.
Locust Orchestrator
GitHub: https://github.com/dev-vaayen/locust-orchestrator
It's a plain Python CLI that lets you define a complete load-test plan, runs the tests one after another, and saves the reports locally with no manual intervention in between.
What My Project Does:
- Runs multiple load tests defined in a CSV file
- Generates an HTML report for each test
- Logs when each test starts
The plan.csv fed into the orchestrator looks like this:
users, spawn_rate, duration
100, 10, 2m
500, 50, 5m
200, 20, 3m
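For illustration, a plan in the format above could be parsed with Python's standard csv module. This is just a sketch of the idea, not the orchestrator's actual parsing code; the field names simply mirror the CSV header:

```python
import csv
import io

# Sample plan matching the format above; a real run would read plan.csv from disk.
PLAN_CSV = """users, spawn_rate, duration
100, 10, 2m
500, 50, 5m
200, 20, 3m
"""

def load_plan(text):
    """Parse the plan CSV into a list of test configurations."""
    rows = []
    # skipinitialspace tolerates the spaces after commas in the example plan
    for row in csv.DictReader(io.StringIO(text), skipinitialspace=True):
        rows.append({
            "users": int(row["users"]),
            "spawn_rate": int(row["spawn_rate"]),
            "duration": row["duration"].strip(),  # Locust accepts "2m", "90s", etc.
        })
    return rows

plan = load_plan(PLAN_CSV)
print(plan[0])  # {'users': 100, 'spawn_rate': 10, 'duration': '2m'}
```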
This tool builds on top of Locust and runs it under the hood, so the usual Locust flags can be passed through as well.
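Since the tool shells out to Locust, each plan row presumably maps onto Locust's standard headless CLI options (`--headless`, `--users`, `--spawn-rate`, `--run-time`, and `--html` are real Locust flags). The exact command the orchestrator builds is an assumption; this sketch just shows the mapping:

```python
# Hypothetical sketch: turn one plan row into a locust command line.
# The real orchestrator's command construction may differ.
def build_locust_cmd(row, locustfile="locustfile.py", extra_flags=None):
    cmd = [
        "locust",
        "-f", locustfile,
        "--headless",                      # no web UI; run to completion and exit
        "--users", str(row["users"]),
        "--spawn-rate", str(row["spawn_rate"]),
        "--run-time", row["duration"],
        "--html", f"report_{row['users']}u_{row['duration']}.html",
    ]
    cmd += extra_flags or []               # pass-through for any other Locust flags
    return cmd

row = {"users": 100, "spawn_rate": 10, "duration": "2m"}
cmd = build_locust_cmd(row, extra_flags=["--host", "https://example.com"])
print(" ".join(cmd))
# A real orchestrator would then execute it, e.g. subprocess.run(cmd, check=True)
```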
I've been using this orchestrator to run my own test plans in the background and review the results later, instead of monitoring each run live and re-entering the test configuration by hand.
Future plans include:
- A combined HTML report across all runs
- History tracking with an SQLite database
- Possibly a small dashboard built with Streamlit
I'd also like to know whether this tool could be useful to others, and what features you'd like to see added. Ideas and contributions are very welcome on the GitHub repository: https://github.com/dev-vaayen/locust-orchestrator