From the MIT Sloan Management Review article “Why Forecasts Fail” (2010) comes this nice little explanation of the different kinds of uncertainty you can face in forecasts (and elsewhere). There is subway uncertainty, which assumes a relatively narrow window of uncertainty. It’s called subway uncertainty because even on the worst day, your subway trip almost certainly won’t take more than, say, 30 minutes longer than you planned (even if you are trying to navigate rush hour L trains). On the other end, there’s coconut uncertainty, which accounts for events that are individually rare but collectively common: uncommon experiences that happen more often than you’d expect. Here’s how the article explains the difference:
In technical terms, coconut uncertainty can’t be modeled statistically using, say, the normal distribution. That’s because there are more rare and unexpected events than, well, you’d expect. In addition, there’s no regularity in the occurrence of coconuts that can be modeled. And we’re not just talking about Taleb’s “black swans” — truly bizarre events that we couldn’t have imagined. There are also bubbles, recessions and financial crises, which may not occur often but do repeat at infrequent and irregular intervals. Coconuts, in our view, are less rare than you’d think. They don’t need to be big and hairy and come from space. They can also be small and prickly and occur without warning. Coconuts can even be positive: an inheritance from a long-lost relative, a lottery win or a yachting invitation from a rich client.
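One way to see the “more rare and unexpected events than you’d expect” point is to compare tail probabilities directly. Here’s a minimal sketch (my own, not from the article) contrasting a normal distribution with a heavy-tailed one, using Student’s t with 2 degrees of freedom as the stand-in fat tail:

```python
# A minimal sketch (mine, not from the article) of why coconut uncertainty
# breaks normal-distribution thinking: a heavy-tailed distribution (here,
# Student's t with 2 degrees of freedom) makes "impossible" 5-sigma events
# tens of thousands of times more likely than the normal does.
import math
from statistics import NormalDist

normal = NormalDist(mu=0, sigma=1)

# Subway uncertainty: under a normal, events beyond 5 sigma essentially
# never happen.
p_normal = 2 * (1 - normal.cdf(5))

def t2_sf(x):
    """Survival function P(X > x) for Student's t with 2 degrees of
    freedom, which has the closed form 1/2 - x / (2 * sqrt(x^2 + 2))."""
    return 0.5 - x / (2 * math.sqrt(x * x + 2))

# Coconut uncertainty: a fat-tailed model puts real probability out there.
p_coconut = 2 * t2_sf(5)

print(f"P(beyond 5 sigma), normal:   {p_normal:.2e}")
print(f"P(beyond 5), Student's t(2): {p_coconut:.2e}")
```

The normal puts roughly one-in-a-million odds on a 5-sigma event; the fat-tailed model puts it at a few percent. If the world generates coconuts but your model assumes subways, you will be surprised constantly.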
Knowing which one you’re working with and accounting for both is ultimately how you build a good forecast.
Also from the article is a great story about some research on the efficacy of simple versus complex models. A researcher in the 1970s collected a large set of forecasts and compared how close they were to reality, expecting that the more complex the model, the more accurate it would be. The results showed exactly the opposite: the simpler models outperformed. Here’s the statistician’s attempt to explain the findings:
His rationale: Complex models try to find nonexistent patterns in past data; simple models ignore such “patterns” and just extrapolate trends. The professor also went on to repeat the “forecasting with hindsight” experiment many times over the years, using increasingly large sets of data and more powerful computers. But the same empirical truth came back each time: Simple statistical models are better at forecasting than complex ones.
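That rationale can be made concrete with a toy sketch (my own construction, not the professor’s actual experiment): fit the same noisy trend with a plain least-squares line and with a polynomial that passes through every past point exactly, then extrapolate both one step ahead.

```python
# Illustration (not the study's actual experiment): a "complex" model that
# fits past data perfectly can extrapolate far worse than a simple trend
# line, because it mistakes noise for pattern.
import random

random.seed(0)
# Noisy upward trend: y = 2*t + noise, observed at t = 0..9.
history = [2 * t + random.gauss(0, 3) for t in range(10)]
actual_next = 2 * 10  # the true trend value at t = 10

# Simple model: ordinary least-squares line, extrapolated one step.
n = len(history)
xs = list(range(n))
x_mean, y_mean = sum(xs) / n, sum(history) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean
simple_forecast = slope * n + intercept

# "Complex" model: the Lagrange polynomial through every past point.
# Zero in-sample error -- it "explains" history perfectly.
def lagrange_extrapolate(ys, x):
    total = 0.0
    for i, yi in enumerate(ys):
        term = yi
        for j in range(len(ys)):
            if j != i:
                term *= (x - j) / (i - j)
        total += term
    return total

complex_forecast = lagrange_extrapolate(history, n)

print(f"truth ~{actual_next}, simple: {simple_forecast:.1f}, "
      f"complex: {complex_forecast:.1f}")
```

The interpolating polynomial has zero error on the past and wild error on the future; the trend line ignores the wiggles and lands close to the truth. That is the overfitting trade-off the professor is describing, in miniature.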