Confidence Levels and Calibration


Over the past several posts I’ve been discussing how networkers can reduce supposed “immeasurables” or “intangibles” to something that can in fact be measured, and I’ve been using Douglas Hubbard’s excellent book How to Measure Anything: Finding the Value of Intangibles in Business as a guide. I highly recommend that you pick up a copy of the book if you want more details about the approach I’ve been discussing. For a small book, it covers a lot of ground.

The previous post looked at a general approach for working out the definition of an intangible, as it applies to your specific project, to the degree that it becomes a set of measurable components. In a nutshell, we ask the simple question, “What do we mean by…?” And we keep asking that question until we’ve boiled our definition down to a set of measurable specifics.

In the process of answering that question (a simple question that can take more than a little effort to answer), you will probably discover that you know more than you thought you did. Remember, measurement is a reduction in uncertainty. So even if you can’t quantify something exactly, you can come up with an estimate based on what you do know.

But how confident are you in your estimate? 90%? 80%? 70%? Obviously if it is an estimate your confidence level is going to be something less than 100%, but being able to state your level of confidence helps you know whether your estimate is good enough, or whether you need to do more research to further reduce your uncertainty. It can also increase your chances of getting projects funded.

Here’s an interesting exercise. For each of the following questions, jot down a low and high number such that you are 90% confident that the right answer lies somewhere between those numbers:

  • In what year did John Steinbeck publish East of Eden?
  • How tall is the Taipei 101 building?
  • What is the average lifespan of a bottlenose dolphin?
  • How many companies are listed with the Hong Kong Stock Exchange?
  • How many days does it take for Mars to orbit the Sun?
  • In what year did Qin Shi Huang become the first emperor of China?
  • What is the annual salary of the Chancellor of Germany?
  • In what year did Chandragupta die?
  • How many feature films did Alfred Hitchcock direct?
  • How long is the Nile River?
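
To score yourself once you’ve looked up the real answers, a few lines of Python are enough. This is just a sketch: the guess ranges below are illustrative placeholders, and only two true values are filled in so the example runs (East of Eden was published in 1952, and Mars takes about 687 days to orbit the Sun). At a genuine 90% confidence level, roughly nine of your ten ranges should contain the truth.

```python
# Sketch: score a set of 90% confidence intervals against the true answers.
# The guess ranges are purely illustrative; fill in your own ranges and the
# remaining true values before scoring a full quiz.
quiz = [
    # (question, low_guess, high_guess, true_answer)
    ("East of Eden publication year", 1940, 1960, 1952),
    ("Days for Mars to orbit the Sun", 500, 900, 687),
]

hits = sum(low <= truth <= high for _, low, high, truth in quiz)
print(f"Captured the true answer in {hits} of {len(quiz)} ranges "
      f"({hits / len(quiz):.0%}); well calibrated at 90% means about 9 of 10.")
```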

Your answers are based on what you know. For example, maybe you’ve never heard of John Steinbeck, but based on the name you can probably deduce that he was European or American; therefore you might be 90% confident that he published East of Eden sometime between 1700 and 2008. Or maybe you know that Steinbeck was a 20th-century American author, so you are 90% confident that the book came out between 1910 and 1995. Or maybe you know that Steinbeck died in the 1960s and that he wrote East of Eden in the latter part of his career, so you might be 90% confident that the book was published between 1945 and 1965.

You can, of course, say that Steinbeck published East of Eden sometime between 200 BC and 2009 AD, or that the Taipei 101 building is between 10 and 10,000 meters tall, just to guarantee that the real answer falls within the range. But ranges that wide are of no value at all (I’ll write about the value of information in the next post), and they are probably not an honest reflection of your 90% confidence level.

The next part of the exercise is what Hubbard calls the “Equivalent Bet Test.” Here’s how it works:

Imagine a roulette wheel in which there are ten pockets: One pocket is red and the other nine (90% of the pockets) are black. Then you are given a choice:

  • You can bet on your answer to one of the questions above. If the real answer falls within the range you selected, you win $1000.
  • Or, you can bet on a spin of the roulette wheel. If the ball lands in one of the nine black pockets, you win $1000.

So which do you choose, your answer to the question or the wheel? If you choose the wheel, you don’t really have 90% confidence in your answer; you are overconfident. If you choose your answer instead of the wheel, your real confidence level is higher than 90%; you are underconfident. If you truly are 90% confident in your answer, you should not care whether you bet on your answer or the wheel. Either way you have a 90% chance of winning $1000.

If you are overconfident, you can reduce the number of black pockets on the wheel to eight (80%), seven (70%), six (60%), and so on. The point at which you no longer care whether you bet on the wheel or on your answer is the real confidence level you have in your answer.
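
If the thought experiment is hard to keep in your head, a quick simulation makes the indifference point concrete. This is a rough sketch, not anything from Hubbard’s book: the `true_confidence` figure below is a made-up assumption standing in for how often your stated ranges actually contain the answer, and the code simply compares the expected winnings of betting on your answer against wheels with 9, 8, 7, and 6 black pockets.

```python
import random

def expected_winnings(win_probability, payout=1000, trials=100_000):
    """Monte Carlo estimate of the average payout of a bet that wins
    with the given probability (e.g. 0.9 for a 9-black-pocket wheel)."""
    wins = sum(random.random() < win_probability for _ in range(trials))
    return payout * wins / trials

# Assumption for illustration: your ranges really contain the answer 75% of the time.
true_confidence = 0.75

for black_pockets in (9, 8, 7, 6):            # 90%, 80%, 70%, 60% wheels
    wheel = expected_winnings(black_pockets / 10)
    answer = expected_winnings(true_confidence)
    better = "the wheel" if wheel > answer else "your answer"
    print(f"{black_pockets} black pockets: wheel ~${wheel:,.0f}, "
          f"your answer ~${answer:,.0f} -> prefer {better}")
```

The wheel you stop preferring over your own answer marks your real confidence level – in this made-up case, somewhere between the 80% and 70% wheels.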

Some people are much better at determining confidence levels than others. Not surprisingly, bookies and odds makers are very good at it. It turns out, according to many studies, that business people and (distressingly) doctors are not. But what’s more interesting is that people can be trained – calibrated – to accurately determine their confidence levels. Douglas Hubbard writes that he can calibrate people in about half a day, and his book includes an appendix full of tests to help you improve your estimating skills.

If you are a good estimator, you will be right 90% of the time when you express a 90% confidence level, 80% of the time when you express an 80% confidence level, and so on.
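
One way to check this in practice is to keep a log of your estimates, the confidence you stated for each, and whether you turned out to be right, then compare stated confidence with the observed hit rate. The entries below are hypothetical; the grouping logic is the point.

```python
from collections import defaultdict

# Hypothetical log of past estimates: (stated confidence, was the estimate right?)
log = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.8, True), (0.8, False), (0.8, True), (0.8, True), (0.8, False),
]

buckets = defaultdict(list)
for stated, correct in log:
    buckets[stated].append(correct)

for stated in sorted(buckets, reverse=True):
    outcomes = buckets[stated]
    print(f"Stated {stated:.0%} confidence -> right "
          f"{sum(outcomes) / len(outcomes):.0%} of the time ({len(outcomes)} estimates)")
```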

So, why is all of this important? Because an accurate confidence level reveals your level of uncertainty. If you are 70% confident in an answer, your uncertainty is 30%. And that brings us back around to the fact that a measurement is a reduction in uncertainty. So if you are 40% uncertain about what you know, you might need to take more measurements; if you are only 10% uncertain, perhaps you are close enough and additional measurements are unnecessary. There is generally a point beyond which more accurate measurement adds less and less value and costs more and more.

Finding that point at which your measurements are close enough – determining the value of information – is the topic of my next post.