
Discussion on: How do you measure code quality / engineering team performance?

Kasey Speakman

Ah, I appreciate your response. :) The primary question I would have if tasked with this is: why? What kinds of actions would this data inform? I'm having trouble imagining how aggregate performance across a dozen teams would be actionable. (Like, what kind of department-wide policies could result from it?) But admittedly, my imagination is not very broad.

Best wishes!

MikhailShel

Oh, I don't see it as policies in any way. It's more of a tool to show people whether we are getting better or worse. I do believe that engineers prefer to work in cleaner, better-quality code, so having this score should either reinforce the feeling that the project is moving in the right direction or prompt us to reconsider our coding practices / PR review system.

Kasey Speakman

My take on this is that the only way to ensure quality work is done is to hire people who actually care and can work well with others. These people are in short supply and/or are hard to find through current hiring practices. So, the next best option is to keep a pulse on how the product is doing.

  • Revenue
  • Support volume
  • Bugs filed
  • User errors (e.g. API request rejections; see the sketch after this list)
  • Time spent on each screen
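
To make that "user errors" signal a bit more concrete, here is a minimal sketch (in Python) of computing an API rejection rate from a batch of response status codes. The log shape and the function name are assumptions for illustration only, not any particular system's API.

```python
# Minimal sketch (assumed inputs, not a real system): given the HTTP status
# codes of recent API responses, report what fraction were rejected as
# client errors (4xx). Wiring this up to real access logs is left out.
from collections import Counter

def rejection_rate(status_codes: list[int]) -> float:
    """Fraction of requests rejected with a 4xx status code."""
    if not status_codes:
        return 0.0
    by_class = Counter(code // 100 for code in status_codes)
    return by_class[4] / len(status_codes)

# Example: 2 rejections out of 6 requests -> ~0.33
print(rejection_rate([200, 201, 400, 200, 422, 200]))
```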

If a lot of people are calling support about specific processes, the natural instinct is to write documentation to explain them. But a better strategy is probably to redesign those processes so they need as little explanation as possible. A lot of managers fall into the trap of judging progress (or being evaluated) by the number of features produced, and they don't want to go back and change a feature that is already there. But sometimes reworking an existing feature would make a bigger impact than adding something new.

But I digress. The other thing I wanted to mention is that even though these are aggregate metrics, if they are going to drive changes to coding practices or PR review, for example, people are going to game them. At the end of the day, your ICs (even the ones who genuinely care) will usually look at these as overhead (read: impediments) to doing their actual work. So their best interest is served by keeping the code metrics looking good, regardless of whether the numbers reflect reality. And the metrics will ultimately become a source of false confidence for management.

The only real way to know how the teams are doing is to have someone on the teams tell you. Traditional top-down management structures create a separation that makes it hard for managers to actually be part of the team and know how it's really doing. I think that's a big reason for the push toward servant leadership nowadays. Everybody appreciates the manager who is always there to lend a hand and keep them informed. But everybody would rather interact as little as possible with the manager who is always pushing new initiatives on them.

I apologize, as I may have vented a little previously. Thank you for being a good sport and genuinely seeking answers and improvements -- for being someone who cares.