Welcome to the bloggy home of Noah Brier. I'm the co-founder of Percolate and a general internet tinkerer. This site is about media, culture, technology, and randomness. It's been around since 2004 (I'm pretty sure). Feel free to get in touch.

You can subscribe to this site via RSS (the humanity!).

Subway Uncertainty vs Coconut Uncertainty

From the MIT Sloan Management Review article “Why Forecasts Fail” (2010) comes this nice little explanation of the different kinds of uncertainty you can face in forecasts (and elsewhere). There is subway uncertainty, which assumes a relatively narrow window of outcomes. It’s called subway uncertainty because even on the worst day, your subway trip almost definitely won’t take more than, say, 30 minutes longer than you planned (even if you’re trying to navigate rush hour L trains). On the other end, there’s coconut uncertainty, which is a way to account for relatively common uncommon experiences (if that makes any sense). Here’s how the article explains the difference:

In technical terms, coconut uncertainty can’t be modeled statistically using, say, the normal distribution. That’s because there are more rare and unexpected events than, well, you’d expect. In addition, there’s no regularity in the occurrence of coconuts that can be modeled. And we’re not just talking about Taleb’s “black swans” — truly bizarre events that we couldn’t have imagined. There are also bubbles, recessions and financial crises, which may not occur often but do repeat at infrequent and irregular intervals. Coconuts, in our view, are less rare than you’d think. They don’t need to be big and hairy and come from space. They can also be small and prickly and occur without warning. Coconuts can even be positive: an inheritance from a long-lost relative, a lottery win or a yachting invitation from a rich client.

Knowing which one you’re working with and accounting for both is ultimately how you build a good forecast.
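To put rough numbers on the distinction, here’s a quick Python sketch (my own toy illustration, not something from the article): a “subway” trip modeled with a thin-tailed normal distribution, and a “coconut” version where rare, irregular shocks occasionally land on top of the same ordinary days. The 30-minute trip, the 1% shock rate, and the shock size are all made-up parameters.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Subway uncertainty: outcomes cluster tightly around the plan.
# (Hypothetical numbers: a 30-minute planned trip with ~5 minutes of noise.)
subway = rng.normal(loc=30, scale=5, size=n)

# Coconut uncertainty: the same ordinary days, plus rare, irregular shocks
# that a normal model would essentially never produce.
shock_days = rng.random(n) < 0.01  # roughly 1% of days a coconut lands
coconut = rng.normal(loc=30, scale=5, size=n) + shock_days * rng.exponential(120, size=n)

for name, sample in (("subway", subway), ("coconut", coconut)):
    print(name, "mean:", round(sample.mean(), 1), "worst day:", round(sample.max(), 1))

The means of the two samples come out nearly identical; it’s the tails that differ wildly, which is the whole point of planning for coconuts separately.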

Also from the article is a great story about some research on the efficacy of simple versus complex models. A researcher in the 1970s collected a whole bunch of forecasts and compared how close they were to reality, assuming that the more complex the model, the more accurate it would be. The results, in the end, showed exactly the opposite: the simpler models outperformed. Here’s the statistician’s attempt to explain the findings:

His rationale: Complex models try to find nonexistent patterns in past data; simple models ignore such “patterns” and just extrapolate trends. The professor also went on to repeat the “forecasting with hindsight” experiment many times over the years, using increasingly large sets of data and more powerful computers. But the same empirical truth came back each time: Simple statistical models are better at forecasting than complex ones.
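The pattern is easy to reproduce. Here’s a small Python sketch (my illustration, not the professor’s actual experiment): fit a simple straight-line trend and a deliberately over-complex degree-10 polynomial to the same noisy series, then see which one forecasts the held-out “future” points better. The data and the polynomial degrees are invented; the point is that the complex fit chases noise and extrapolates badly.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a noisy linear trend, standing in for a historical series.
x_train = np.arange(20)
y_train = 2.0 * x_train + rng.normal(0, 8, size=x_train.size)

# "Future" points the models never see.
x_test = np.arange(20, 30)
y_test = 2.0 * x_test + rng.normal(0, 8, size=x_test.size)

# Simple model: a straight-line trend (degree-1 polynomial).
simple = np.polyfit(x_train, y_train, deg=1)
# Complex model: a degree-10 polynomial that finds "patterns" in the noise.
complex_ = np.polyfit(x_train, y_train, deg=10)

def mae(coeffs, x, y):
    # Mean absolute forecast error on the held-out points.
    return np.mean(np.abs(np.polyval(coeffs, x) - y))

print("simple  out-of-sample error:", mae(simple, x_test, y_test))
print("complex out-of-sample error:", mae(complex_, x_test, y_test))

The complex model fits the past better and forecasts the future far worse, which is exactly the rationale quoted above.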

January 6, 2016

Simple, Complicated, Complex

Really like this explanation of complexity from Flash Boys by Michael Lewis:

“People think that complex is an advanced state of complicated,” said Zoran [Perkov, head of technology operations for IEX]. “It’s not. A car key is simple. A car is complicated. A car in traffic is complex.”

Well put.

Reminds me a bit of how Nassim Taleb draws a distinction between robustness, or the ability to withstand disorder, and antifragility, which he explains as actually growing stronger when exposed to those same disorderly forces.

Linked, an amazing book on networks that I’m re-reading, explains the difference between Swiss watches and the internet:

Understanding the topology of the Internet is a prerequisite for designing tools and services that offer a fast and reliable communication infrastructure. Though human made, the Internet is not centrally designed. Structurally, the Internet is closer to an ecosystem than to a Swiss watch. Therefore, understanding the Internet is not only an engineering or a mathematical problem. In important ways, historical forces shaped its topology. A tangled tale of converging ideas and competing motivations left their mark on the Internet’s structure, creating a jumbled information mass for historians and computer scientists to unravel.

January 8, 2015

An Algorithm for the Economy

I’ve been reading quite a bit of Brian Arthur’s writing lately. He’s a “complexity economist” from the Santa Fe Institute and a pretty interesting all-around thinker. Need to put together a bigger writeup of his ideas, but wanted to share his steps on how technology forms the economy around itself, from his book Complexity and the Economy (there’s a rough code sketch of the steps after the list):

The steps involved yield the following algorithm for the formation of the economy.

1. A novel technology appears. It is created from particular existing ones, and enters the active collection as a novel element.

2. The novel element becomes available to replace existing technologies and components in existing technologies.

3. The novel element sets up further “needs” or opportunity niches for supporting technologies and organizational arrangements.

4. If old displaced technologies fade from the collective, their ancillary needs are dropped. The opportunity niches they provide disappear with them, and the elements that in turn fill these may become inactive.

5. The novel element becomes available as a potential component in further technologies—further elements.

6. The economy—the pattern of goods and services produced and consumed—readjusts to these steps. Costs and prices (and therefore incentives for novel technologies) change accordingly.
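Here’s that rough sketch: a toy Python reading of the steps above (mine, not Arthur’s), where technologies are just named strings, new ones are formed by combining existing ones, some displace older ones, and each opens a “support” niche. Every name and parameter here is invented for illustration, and step 6 (prices and incentives readjusting) is left out entirely.

import random

random.seed(0)
active = {"fire", "wheel", "lever"}  # the active collection of technologies
niches = set()                       # open opportunity niches

def step(active, niches):
    # 1. A novel technology appears, created from particular existing ones.
    parents = random.sample(sorted(active), 2)
    novel = "+".join(parents)
    active.add(novel)

    # 2. The novel element can replace an existing technology.
    if random.random() < 0.3:
        displaced = random.choice(sorted(active - {novel}))
        active.discard(displaced)
        # 4. The niches the displaced technology provided disappear with it.
        niches = {n for n in niches if not n.startswith(displaced)}

    # 3. The novel element sets up further needs / opportunity niches.
    niches.add(novel + ":support")

    # 5. The novel element stays in the active collection, so later steps
    #    can pick it up as a component of further technologies.
    return active, niches

for _ in range(5):
    active, niches = step(active, niches)

print("active technologies:", sorted(active))
print("open niches:", sorted(niches))

Even this crude version shows the feedback loop Arthur describes: every new combination enlarges the pool of possible parents, so the space of buildable technologies grows as it’s explored.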

November 20, 2014