# DEV3L on How To Measure Anything

Justin Beall

I usually listen to at least one Audiobook a month (thanks Audible subscription!). I write/voice a few notes as I listen to these books.

I intend to do this more frequently. My notes for this book were sporadic and scrambled, but in the spirit of measuring anything, I hope this may reduce some uncertainty about the book.

How to Measure Anything was recommended to me by a co-worker. Lean-Agile practices encourage us to measure the outcomes of stories, but we're never told how to measure these often intangible outcomes!

Definition of Measurement: A quantitatively expressed reduction of uncertainty based upon one or more observations.

The book makes a bold statement: anything can be measured. A measurement should be used to aid in making a decision, otherwise it is just information. A measurement is typically a distribution of values.

If a thing can be observed in any way at all, it lends itself to some type of measurement method. No matter how fuzzy, it is still a measurement if it tells you more than you knew before. This is critical as we are traditionally taught to believe in exact measurements or bust.

Business people are often taught that not knowing an exact number is equivalent to not knowing it at all (this is bullshit).

When faced with a new measurement problem, instead of being overwhelmed by uncertainty, begin by asking what you do know. Listing out the things you don't know can also be a useful exercise. Only a few things matter, but they usually matter a lot. Start with the internet!

Risk does not equal uncertainty. The author, Douglas Hubbard, describes an algorithm for reducing uncertainty in any situation.

Applied Information Economics (AIE)

• Define a problem and the relevant variables
• Determine what you know
• Pick a variable, and determine the value of additional information
• Apply measurements to the high-information-value variable
• Make a decision and act on it

Generating a subjective confidence interval (CI) is a great idea. Draw upon the information you already have to create a range of values, no matter how wide, within which you are 90% certain the variable will fall. Even starting with extreme values on your upper and lower bounds helps; Hubbard calls this the absurdity test.

With a little bit of practice and calibration, the author demonstrates how these estimates can become highly accurate. He uses the example of a bookie: estimating subjective probability is a skill that can be sharpened!

For example, you may not know how much something costs, but it will most likely be greater than zero and less than some arbitrary amount (say one million dollars). Then start pruning the values inward on both extremes using the current information you have at hand.

Objective probability is hard to apply in the real world because it requires exact numbers for the calculations. Subjective probability is far less used, but it is a part of almost every single decision we make!

The likely outcome of a given situation with estimated probabilities can be determined using the Monte Carlo simulation technique. It runs randomly generated scenarios (thousands of them), substituting sampled values in for your unknown quantities. Google "Monte Carlo measurement" — it's a sweet tool to add to your belt.
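Here's a minimal sketch of the idea. The cost components and their ranges are hypothetical, and each 90% CI is treated as a normal distribution using Hubbard's rule of thumb (sigma ≈ range / 3.29) — a simplifying assumption, not the book's exact worked example:

```python
import random

def sample_ci(rng, lower, upper):
    """Draw one value from a normal distribution fit to a 90% CI."""
    mean = (lower + upper) / 2
    sigma = (upper - lower) / 3.29  # Hubbard's 90% CI rule of thumb
    return rng.normalvariate(mean, sigma)

def simulate_costs(trials=10_000, seed=1):
    """Run thousands of random scenarios and return the simulated totals."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        labor = sample_ci(rng, 50_000, 120_000)    # hypothetical 90% CI
        licenses = sample_ci(rng, 10_000, 30_000)  # hypothetical 90% CI
        totals.append(labor + licenses)
    return totals

totals = sorted(simulate_costs())
median = totals[len(totals) // 2]
risk = sum(t > 130_000 for t in totals) / len(totals)
print(f"median cost ~ {median:,.0f}; P(cost > $130k) ~ {risk:.2f}")
```

Instead of a single point estimate, you get a whole distribution of outcomes — which is exactly what lets you talk about risk as a probability rather than a gut feeling.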

The risk paradox: we tend to apply quantitative risk analysis tools to routine processes such as manufacturing instead of complex ones like IT development. Calculating the risk of a linear task is far less valuable than the return on a risk calculation for a creative task. The economic value of measuring a variable is usually inversely proportional to how much measurement attention it gets.

Catch and re-catch sampling is an effective way to reduce uncertainty. For example, catch a thousand fish, tag them, and release them back into the wild; then catch another random sample of a thousand fish. You can then calculate a confidence interval on the total number of fish in the lake based upon the percentage of the second catch that is tagged.

Humans instinctively use Bayesian analysis: we apply heuristic models to current assessments of risk on a daily basis. But watch out for recency bias when incorporating new information into decision making!
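Doing the update explicitly is just Bayes' rule. The release/smoke-test scenario and all of its numbers below are hypothetical, purely to show the mechanics:

```python
def bayes_update(prior, likelihood, false_positive_rate):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]"""
    numerator = likelihood * prior
    denominator = numerator + false_positive_rate * (1 - prior)
    return numerator / denominator

# Hypothetical: 10% prior that a release has a critical bug; a failing
# smoke test catches 90% of bad releases but also fails on 5% of good ones.
posterior = bayes_update(prior=0.10, likelihood=0.90, false_positive_rate=0.05)
print(round(posterior, 3))  # → 0.667
```

One failing test moves you from "10% worried" to "two-in-three worried" — the observation reduced uncertainty, which is exactly the book's definition of a measurement.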

Heterogeneous benchmarking - getting a sense of the scale used in a measurement. For example, when asked how many grams a jelly bean weighs, it is difficult to estimate. BUT, if you are told a paperclip weighs about a gram, the jelly bean problem becomes much easier to estimate.

The big measurement don't: don't use a measurement method that adds more error than the initial estimate had.

When it comes to mashups of data, we are limited only by our resourcefulness!

The book was great. Listening to it in Audio form makes some of the math examples harder to follow, but at least I know where to look when I come across my next true measurement problem!