Gal Bashan

Originally published at Medium.

Stop Using Dev Metrics!

You manage the company’s best team: the fastest cycle time, the highest planning accuracy, and the least amount of rework. Your team has the lowest review time, the most weekly deploys, and the best bugs / KLOC ratio. And then someone high up shuts down your project — what just happened?

The Output Paradox

As a junior developer, I was taught that merging small changes frequently is key to delivering value fast. As a junior engineering manager, I was told that tracking & optimizing metrics was vital to the manager’s job. The first Google result pointed me to a bunch of terms and acronyms, and these were my first introduction to dev metrics.

Google’s featured snippet for “metrics for engineering manager.” This is what we teach engineering managers to track.

Dev metrics are essentially a set of metrics that help engineering teams track their velocity (move fast) and quality (without breaking things). Different metrics can be used to measure these attributes. Cycle time, PR count, or closed issues track velocity. To track quality, monitor the defect count, mean time to resolution, or the number of deployments that cause failures. Each of these metrics provides a different insight into the attribute you inspect. Some metrics also help with analyzing proxy attributes. Track rework to analyze efficiency, which impacts velocity. Planning accuracy is a good indicator of predictability, which enables both velocity and quality and has value on its own.
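
To make this concrete, here is a minimal sketch (Python, with made-up numbers, and not tied to any particular tool’s API) of how a team might compute two of these metrics: cycle time for velocity, and the share of deployments that cause failures for quality.

```python
# Minimal sketch with hypothetical data; not any specific analytics tool.
from datetime import datetime
from statistics import median

# Hypothetical PRs: (first commit time, merge time).
prs = [
    (datetime(2022, 5, 2, 9), datetime(2022, 5, 3, 15)),
    (datetime(2022, 5, 4, 10), datetime(2022, 5, 4, 18)),
    (datetime(2022, 5, 5, 11), datetime(2022, 5, 9, 12)),
]
# Hypothetical deployments, flagged if they caused a failure in production.
deployments = [
    {"caused_failure": False},
    {"caused_failure": True},
    {"caused_failure": False},
    {"caused_failure": False},
]

# Cycle time (velocity): how long a change takes from first commit to merge.
median_cycle_time = median(merged - started for started, merged in prs)

# Change failure rate (quality): share of deployments that caused a failure.
failure_rate = sum(d["caused_failure"] for d in deployments) / len(deployments)

print(f"Median cycle time: {median_cycle_time}")
print(f"Change failure rate: {failure_rate:.0%}")
```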

The problem with all the metrics above is that while they do an excellent job of measuring efficiency, they are usually not enough to measure success. The main reason is that they are entirely disconnected from the business KPIs (key performance indicators, yet another fancy name for metrics). This disconnect often leads to ineffective communication between organizations in the company, mainly product and engineering. After all, the product organization (and the rest of the business) doesn’t care about velocity or quality alone, as those track output. The product organization usually cares about outcomes — what revenue did our new feature produce? How many new users did we attract?

Using dev metrics to measure project success (illustration).

As a new engineering manager, this was baffling to me. On the one hand, it was clear that continuing to measure my team on dev metrics rather than business metrics would lead to misalignment with the product team. On the other hand, it seemed unfair to measure my team based on business metrics: How can they impact them? Is it our job? Will I be stepping on the product organization’s toes?

Measure What Matters

I took a leap of faith and tried using business metrics as my north star. As time passed, it led to better results.

First, it simplified communication. It’s easier for your product counterpart to understand why it’s essential to prioritize the DB upgrade when you put it in terms of a shared KPI that will be harmed. Just because it’s clear to you that if you don’t prioritize the DB upgrade, the next few user stories will take twice as long, and you will have to spend time on maintenance, which will delay the feature release to a point where you won’t have enough time to hit the usage target you set as the KPI, doesn’t mean it’s clear to your product counterpart. I mean, that’s a lot of hops; why not start with the one that matters? (no, it’s not the DB upgrade one)

Product hearing the phrase “we need to refactor the API abstraction layer to improve DB query p99 performance.”

Secondly, measuring my team based on business metrics made me see that we can impact them quite a bit:

  • We can identify features where we can build 80% of the user value for 20% of the effort. This can free up resources to help move our KPIs even faster. To do that, the team must clearly understand the business goals, user needs, and the drive to improve them.
  • We can use the KPIs to guide what tech debt we should accumulate: Are we trying to get a feature its first users, and failure is an option? Let’s release as fast as possible and worry about tech debt later. Are we trying to improve the retention of a large customer base? Let’s ensure high quality for this one.
  • We can suggest solutions to pain points product didn’t even know were solvable, based on a new technological advancement or an innovative engineering angle.
  • We can (as annoying as it may be) use the KPIs to understand that there is no point in pursuing a specific cool technological solution we want to implement because the problem itself is not crucial to the business.

Lastly, I realized that measuring my team and myself on anything other than the business’ KPIs sets us up for failure. It will drive us to build & ship many durable features fast, over shipping features customers use. I understood that as someone who builds the product, I am a part of the product team. Since the product’s success is my success, I should be calibrated to that. This is alignment at its best — when engineering and product share the same goals, they will likely leverage each other’s strengths to achieve them.

How To Measure Success?

After switching to metrics that matter to the business, my product counterpart and I had to choose a framework to measure them. At Epsagon, we used the OKR framework. I won’t go into details, as there are many great resources for implementing OKRs available online. The gist is that this is an excellent framework for mid to large organizations, usually in the post-product-market-fit phase. It helps align different teams, groups, and organizations within the business toward the same outcome-driven goals.
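
As a rough illustration of what such goals look like (the objective, KPIs, and numbers below are hypothetical, not Epsagon’s actual OKRs), a team’s objective and its measurable, outcome-driven key results can be written down and tracked as plainly as this:

```python
# A hypothetical OKR: an outcome-driven objective whose key results are
# business KPIs rather than dev metrics. Names and numbers are invented.
okr = {
    "objective": "Make onboarding effortless for new users",
    "key_results": [
        {"kpi": "signup-to-first-value conversion", "baseline": 0.22, "target": 0.35, "current": 0.28},
        {"kpi": "week-4 retention of new users", "baseline": 0.40, "target": 0.55, "current": 0.44},
    ],
}

# Progress on each key result: what fraction of the gap to target is closed.
for kr in okr["key_results"]:
    progress = (kr["current"] - kr["baseline"]) / (kr["target"] - kr["baseline"])
    print(f'{kr["kpi"]}: {progress:.0%} of the way to target')
```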

At Epsagon, we had OKRs for every product team, and they were the goals of the engineers, product, and design people who made up the product team. Once implemented, we saw better collaboration between product and engineering: less arguing over requirements and more cooperation to solve problems. We also noticed that the increase in autonomy caused by setting goals over tasks increased innovation and ownership over outcomes. The teams set the roadmap, and solutions came from engineering as well as product.

The downside of this methodology is that it requires some effort to implement and also requires you to be able to state your goals over periods of months. This is usually hard for small start-ups, especially before product-market fit. For earlier stages, it’s OK to work with one primary metric and still be driven by a business objective while remaining agile enough to change your goals and focus at the granularity of weeks or days.

Whether your team is big or small, has one mission or competing priorities — as long as you measure what matters, you have a higher chance of succeeding. The “how” is just an optimization.

Communication is Key

Once we establish business KPIs, the next step is to follow through. In my experience, there are two critical aspects to communicating and implementing this change. The first one is consistency: it’s not enough to set those metrics and revisit them three months later to find out we missed the mark. As leaders, it is our job to constantly monitor these metrics and ensure that their state, and the roadmap to improve them, are always communicated to our entire team.

The second aspect is commitment: it’s tempting to fall back to old habits of measuring individuals using dev metrics to optimize them. This will send the wrong message to your team, hinting that while you say you care about business KPIs, what you really care about are dev metrics. Be sure to change your habits to reflect your priorities: celebrate business outcomes, cut non-must-have features instead of pushing for deadlines, and raise flags when business KPIs are down. You can still use dev metrics as indicators; just remember to link them to their effect on business KPIs.

“We optimize for outcomes. Also, you didn’t meet your daily PR quota, so we’ll be cutting your bonus.”

Are Dev Metrics Deprecated?

Despite the clickbait headline of this article, the answer is no — they should just be used correctly. For one, dev metrics are powerful tools for troubleshooting your business metrics. For example, a decline in user retention may be explained by an increase in defect count. In this case, you can use the dev metric as a leading indicator: instead of waiting for retention to drop, monitor the defect count and act before retention ever takes the hit (similar to how you would monitor CPU usage to keep the 5xx API error count from spiking and causing actual harm to your users). Another use case is dev metrics as a leading indicator for developer experience and satisfaction (which, in turn, impacts developer retention).
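
Here is a sketch of that leading-indicator idea (the numbers and threshold below are invented for illustration): watch the dev metric, but frame the alert in terms of the business KPI it protects.

```python
# Hypothetical leading-indicator check: alert on a dev metric (weekly defect
# count) before the business KPI it feeds into (retention) actually degrades.
weekly_defect_counts = [4, 5, 6, 9, 14]  # made-up trend, most recent week last
DEFECT_ALERT_THRESHOLD = 10              # invented threshold for illustration

latest = weekly_defect_counts[-1]
week_over_week_jump = latest - weekly_defect_counts[-2]

if latest > DEFECT_ALERT_THRESHOLD or week_over_week_jump > 3:
    # The message points at the business outcome, not the dev metric itself.
    print("Defect count is trending up; investigate now, "
          "before user retention takes the hit.")
```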

However, the vital thing to remember is that you should never optimize the dev metrics themselves. You should always be optimizing business metrics and use dev metric optimization only as a tool to get to that goal. And, of course, you should only set, track, or optimize dev metric goals after you have a clear sense of the business outcome you are trying to achieve and the business KPIs you are trying to hit. Great engineering stats won’t save your project if it has no value.

What’s Next?

If you are leading an engineering organization, switching to optimizing business outcomes over dev metrics will improve your team’s chances of creating a real impact for the business. If you are committed to becoming an outcome-driven leader, I urge you to ask yourself these questions:

  • Do I know what business outcomes are expected of me?
  • Am I doing whatever I can to optimize for these outcomes?
  • How am I tracking success? What are the criteria for raising a flag or changing my plans?
  • Is my team aware of the business goals and roadmap to achieve them? What is their input on it?
  • Which of my business KPIs can be improved by improving which dev metrics?

Hopefully, these will assist you with shipping features customers want to use!

