DEV Community

Discussion on: Are you any good at TDD?

Stephen Nutbrown

"DRY, SOLID, YAGNI, etc. Very important metric" -> are these metrics? How do you measure them?

miniscruff

These are byproducts of existing metrics more than metrics in themselves. I think of them as properties of a solid codebase: abstract on paper, but very tangible once you get in and do some work.

YAGNI or "You aren't going to need it": Looking at the codebase, take the number of features/conditions coded and divide it by the number of features/conditions your users actually use right now. You want a value as close to 1 as possible. Note that a feature that was designed, developed, and deployed but is not popular should count as unused: if only 0.1% of users know about it and use it, it is basically not used. This can be hard to judge, but it is possible.
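A minimal sketch of that calculation ( the counts here are made up for illustration ):

```python
# Hypothetical counts from a feature audit.
features_coded = 40
# Features deployed but used by under ~0.1% of users count as unused.
features_in_use = 32

yagni_ratio = features_coded / features_in_use
print(f"YAGNI ratio: {yagni_ratio:.2f}")  # 1.25; closer to 1.0 is better
```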

SOLID: Hard to judge directly, but if you as a developer, or as a manager asking your developers, request a new feature or system and the response is something like "That sounds good, but we will need to do a bit of refactoring first" or "That is not possible; it would take a lot of work", that is a sign the codebase is probably not built on a solid ( see what I did there? ) foundation. Note that SOLID is not the only way to get there, and it may not fit your language, framework, or patterns.

DRY or "Don't repeat yourself": This one is similar to YAGNI. Most ( if not all ) code coverage tools report two important values: the number of statements and the number of branches. I track the ratio of statements and branches to features delivered, that is, how much value to the user I can deliver with the least code written. I prefer counting statements over lines, because line counts can be deceiving in a lot of languages. You want this ratio to be low, as that means you add as much value per statement and branch as possible.
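As a sketch, assuming the statement and branch totals come from a coverage tool's summary report ( all numbers hypothetical ):

```python
# Hypothetical totals, e.g. from a coverage tool's summary report.
statements = 1200
branches = 300
features_delivered = 25

# Code written per feature delivered; lower means more value per statement.
code_per_feature = (statements + branches) / features_delivered
print(f"Statements + branches per feature: {code_per_feature:.1f}")  # 60.0
```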

These metrics can be a bit abstract when it comes to "features" or "value to the user", but you can usually ground them in a tangible approximation: the number of REST API endpoints ( assuming they are all used ), the average time from bug report to deployed fix, or developer onboarding and ramp-up time. Combined with quality metrics like uptime, end-to-end response time, or memory usage, you can get a lot of measurements.
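One of those proxies, the average time from bug report to deployed fix, is easy to compute from tracker timestamps. A minimal sketch with made-up data:

```python
from datetime import datetime

# Hypothetical (reported, fix deployed) timestamp pairs from a bug tracker.
bug_fixes = [
    (datetime(2023, 1, 2, 9, 0), datetime(2023, 1, 3, 15, 0)),   # 30 hours
    (datetime(2023, 1, 10, 11, 0), datetime(2023, 1, 10, 17, 0)),  # 6 hours
]

deltas_hours = [
    (fixed - reported).total_seconds() / 3600
    for reported, fixed in bug_fixes
]
avg_fix_time = sum(deltas_hours) / len(deltas_hours)
print(f"Average report-to-fix time: {avg_fix_time:.1f} hours")  # 18.0
```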