I manage engineers at a healthcare company. If you're unfamiliar, that's a highly regulated, high-stakes industry, the kind of place where a bad deploy isn't just embarrassing, it directly impacts operations involving patients and medical staff.
Like any experienced developer, I track everything about our systems. Uptime, deploy frequency, incident response times, test coverage. I can tell you exactly how healthy our codebase is at any moment.
But last month, the company went through another big wave of "restructuring". The VP wanted to know "who on your team is actually growing right now, and who's quietly stalling?" in an effort to rebalance skill profiles across the remaining development teams.
I obviously couldn't tell him. Honestly, how many managers can answer that question with certainty, let alone back it up with reports and scores for their VP?
What I had was feelings about it. I know my people. I have gut instincts. But data? Zero. I had more observability into our Kubernetes cluster than into the 12 people I'm responsible for developing.
That hit me hard.
The spreadsheet that broke me
I tried to fix it the obvious way. Built a spreadsheet. Columns for skills, rows for people, scores from 1-5. Classic engineering manager move.
It was useless within two weeks. Nobody updated it. The scores were just my opinion anyway. And my opinion has blind spots. I found out later that the person I thought was my weakest communicator was actually seen by peers as one of the clearest thinkers on the team. They were just quiet in meetings with me specifically. Why? Well, that's a different story. 360 feedback is broken by design.
That gap between what I saw and what the team saw? That's the whole problem.
Three things I wasn't tracking that I should have been
Whether feedback actually changes anything. I give feedback regularly. But I never tracked if the same note kept coming up cycle after cycle. Turns out one person had been hearing "work on your system design skills" for over a year from multiple people. Nothing changed because nobody connected the dots. The feedback existed. The follow-through didn't.
Which goals quietly died. We had goals in our project tracker alongside hundreds of tickets. Some hadn't been touched in weeks. Not because people didn't care, but because something else took priority and nobody noticed. Stalled goals aren't a performance problem. They're a management visibility problem.
Who was slowly checking out. The signals are subtle. Fewer comments in design reviews. Goals going stale. Energy dropping in 1:1s. None of these are alarming on any single day. But when all three trend down for the same person over a month? Something's wrong. I missed it once. That person left. I don't want to miss it again.
What I did about it
I built a self-assessment. Not for my team really, but for me. Twelve questions across four areas: skill visibility, feedback cadence, goal health, and retention risk. Scored myself honestly.
58 out of 100.
The skill visibility section was brutal. 12 out of 25. I literally could not answer basic questions about my own team's capabilities with confidence.
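For the curious, here's roughly how the scoring works. This is a minimal sketch, and the details are my assumptions, not the tool's exact internals: 3 questions per area, each answered on a 0–5 scale, with each area normalized to 25 points so the total lands on a 0–100 scale.

```python
# Hypothetical sketch of the self-assessment scoring. Assumptions:
# 12 questions, 3 per area, each rated 0-5, each area worth 25 points.

AREA_NAMES = [
    "skill visibility",
    "feedback cadence",
    "goal health",
    "retention risk",
]

def score_assessment(answers: dict[str, list[int]]) -> tuple[int, dict[str, float]]:
    """answers maps an area name to its list of 0-5 ratings.

    Returns the 0-100 total and the per-area breakdown (each out of 25).
    """
    per_area = {}
    for area in AREA_NAMES:
        ratings = answers[area]
        # Normalize this area's ratings to a 25-point share of the total.
        per_area[area] = round(sum(ratings) / (5 * len(ratings)) * 25, 1)
    total = round(sum(per_area.values()))
    return total, per_area

# Example: honest-but-rough self ratings for each area.
answers = {
    "skill visibility": [2, 2, 3],
    "feedback cadence": [4, 3, 3],
    "goal health": [3, 3, 2],
    "retention risk": [4, 4, 3],
}
total, breakdown = score_assessment(answers)
```

The point of the per-area breakdown is that a single 0–100 number hides exactly the kind of blind spot I'm describing: you can score decently overall while one area, like skill visibility, quietly craters.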
I shared it with a few other eng managers I know. Most scored between 40 and 65. One person scored 28 and said "I think I need to rethink how I spend my 1:1s."
That reaction is why I turned it into a free tool anyone can take. Three minutes, no fluff, and you see where your blind spots are.
I'm curious what other engineering managers / team leads score. Genuinely. Not because I want your email, but because I want to know if this problem is as universal as I think it is, or if I was just uniquely bad at it.
Drop your score in the comments if you're brave enough.
I'm Johnny, engineering manager in Montreal, currently way too obsessed with the question of how managers actually develop people instead of just managing output. More at noor.guide.