I was reading through this article, 8 Key Performance Indicators For Quality Releases, and while it highlights some important modern testing changes and needs, my reaction is that I don't want to be reviewing these metrics, and I don't want management using them to review test progress.
I think defect tracking and identifying patterns is very important, but this DDE smells of defending ourselves to management because they don't believe we are valuable. And I understand that is the point of a KPI: something was sold, so proof should be shown, and maybe even billed for.
I am working to bring manual test efforts together in order to provide consistency and automation across products, so I understand this desire to show off the value (which will be missed because we didn't have tracking before). I'm just thinking of all the little things that get caught through ad hoc testing, and how this KPI creates a huge pressure to spend time writing everything down, something I'm already being pushed to do just to prove testing is happening.
My goal in measuring success is to raise a team's confidence in a release. Consider a team willing to ship to production once their automated pipeline is complete. They don't have metrics showing them that they found more bugs in development than occur in production, and they aren't being arrogant and just assuming they'll deal with prod issues when they come.
This team gets that confidence by being involved: knowing what test efforts were taking place, and knowing that production issues were reliably replicated in the pipeline. They'll be confident because they only changed this one thing, and it wasn't a six-week dev effort by three different developers.
Now, this is a tall order when you have clients on a one-month release cycle with a one-to-two-month lead time. But progress toward that confidence is still the best indicator.