
Subway Uncertainty vs Coconut Uncertainty

Different kinds of uncertainty and the efficacy of simple versus complex forecasting models.

From the MIT Sloan Management Review article on "Why Forecasts Fail" (2010) comes this nice little explanation of the different kinds of uncertainty you can face in forecasts (and elsewhere). There is subway uncertainty, which assumes a relatively narrow window of uncertainty. It's called subway uncertainty because even on the worst day, your subway ride almost certainly won't take more than, say, 30 minutes longer than you planned (even if you are trying to navigate rush hour L trains). On the other end, there's coconut uncertainty, which is a way to account for relatively common uncommon experiences (if that makes any sense). Here's how the article explains the difference:

In technical terms, coconut uncertainty can’t be modeled statistically using, say, the normal distribution. That’s because there are more rare and unexpected events than, well, you’d expect. In addition, there’s no regularity in the occurrence of coconuts that can be modeled. And we’re not just talking about Taleb’s “black swans” — truly bizarre events that we couldn’t have imagined. There are also bubbles, recessions and financial crises, which may not occur often but do repeat at infrequent and irregular intervals. Coconuts, in our view, are less rare than you’d think. They don’t need to be big and hairy and come from space. They can also be small and prickly and occur without warning. Coconuts can even be positive: an inheritance from a long-lost relative, a lottery win or a yachting invitation from a rich client.

Knowing which one you're working with and accounting for both is ultimately how you build a good forecast.
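To make the first point a bit more concrete, here's a toy sketch (mine, not the article's, with made-up numbers) of why a normal distribution underestimates coconut-style events. Both series below are normalized to the same typical scale, but the fat-tailed one produces far more extreme outcomes than the bell curve would predict:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# "Subway" world: outcomes drawn from a normal distribution.
normal_draws = rng.normal(loc=0.0, scale=1.0, size=n)

# "Coconut" world: a fat-tailed Student's t distribution (3 degrees of
# freedom), rescaled to unit variance so the comparison is like-for-like.
t_draws = rng.standard_t(df=3, size=n) / np.sqrt(3)

# Count events more than 4 "standard deviations" from the mean.
threshold = 4.0
print("normal tail events:    ", np.sum(np.abs(normal_draws) > threshold))
print("fat-tailed tail events:", np.sum(np.abs(t_draws) > threshold))
```

Run it and the normal series turns up a handful of extreme events while the fat-tailed one turns up hundreds, which is the article's point: there are more rare and unexpected events than you'd expect.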

Also from the article is a great story about some research on the efficacy of simple versus complex models. A researcher in the 1970s collected a whole bunch of forecasts and compared how close they were to reality, expecting that the more complex the model, the more accurate it would be. The results showed exactly the opposite: the simpler models outperformed. Here's the statistician's attempt to explain the findings:

His rationale: Complex models try to find nonexistent patterns in past data; simple models ignore such “patterns” and just extrapolate trends. The professor also went on to repeat the “forecasting with hindsight” experiment many times over the years, using increasingly large sets of data and more powerful computers. But the same empirical truth came back each time: Simple statistical models are better at forecasting than complex ones.
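You can reproduce that dynamic on fake data in a few lines. This is just my own illustration of the idea (not the research from the article): a straight-line extrapolation versus a high-degree polynomial that chases noise in the past and then forecasts badly out of sample:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy series: a linear trend plus noise, split into "past" and "future".
t = np.arange(40)
y = 2.0 + 0.5 * t + rng.normal(scale=3.0, size=t.size)
t_past, y_past = t[:30], y[:30]
t_future, y_future = t[30:], y[30:]

# Simple model: fit a straight line to the past and extrapolate the trend.
simple_forecast = np.polyval(np.polyfit(t_past, y_past, deg=1), t_future)

# Complex model: a degree-9 polynomial that fits the past data closely
# by chasing noise, then swings wildly once it leaves the data it saw.
complex_forecast = np.polyval(np.polyfit(t_past, y_past, deg=9), t_future)

def mae(pred, actual):
    """Mean absolute forecast error."""
    return np.mean(np.abs(pred - actual))

print("simple model forecast error: ", mae(simple_forecast, y_future))
print("complex model forecast error:", mae(complex_forecast, y_future))
```

The simple line lands close to the future values; the complex fit, having memorized the noise, misses by a mile.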
January 6, 2016