It’s been a while since I did a Remainders post, so I figured I’d throw one together. In theory it’s all the other stuff I didn’t get a chance to blog about. In reality, it’s pretty much everything I’ve been reading that isn’t about mental models/frameworks (and even some of that). You can find previous versions filed under Remainders and, as always, if you enjoy the writing, please subscribe by email and pass it around.
Let’s start with some books. Here’s what I’ve read in the last three months (in order of when they were read):
- Judas: How a Sister’s Testimony Brought Down a Criminal Mastermind (Astrid Holleeder): Inspired by the New Yorker story by Patrick Radden Keefe about a Dutch woman who eventually testified about her mobster brother, I decided to dig into the English translation. It was a lot more difficult to read than I expected. The New Yorker story, because of length, isn’t able to go into the extensive psychological abuse Holleeder’s brother put his family through. I found it emotionally exhausting about two-thirds of the way through the book.
- Countdown to Zero Day (Kim Zetter): As far as I know this is the definitive book on Stuxnet, the digital weapon that targeted the Iranian nuclear facility at Natanz.
- Complexity: A Guided Tour (Melanie Mitchell): Easily one of my favorite books of the year. I’ve read lots about complexity theory, but nothing that pulled all the various strings together so well. (This also helped send me down a deep physics rabbit hole that I’ve yet to emerge from.)
- My Holiday in North Korea: The Funniest/Worst Place on Earth (Wendy Simmons): I really loved the graphic novel Pyongyang and thought I’d give this travelogue a try when I saw it sitting on a shelf at the bookstore. It was a fine book to read alongside some of the heavier stuff I’ve been reading lately.
- Remote: Office Not Required (Jason Fried): This book sucked, but at least the Audible narration was slow enough that I could crank it up to 2x speed.
- Einstein 1905: The Standard of Genius (John S. Rigden): Like I said, I’ve been falling deeper into a physics rabbit hole, and as part of that I’ve been watching a bunch of physics and math lectures on YouTube. One of the ones I watched was Douglas Hofstadter essentially trying to recreate a talk he once saw John Rigden, the author of this book, give in 2005. The book, and the talk, are about the ideas behind Einstein’s five papers of 1905 (four of which are considered foundational in physics).
- The Undoing Project: A Friendship That Changed Our Minds (Michael Lewis): I am almost embarrassed to admit I still haven’t read Daniel Kahneman’s Thinking, Fast and Slow (it’s on the list, I swear), so Michael Lewis on the relationship between Kahneman and Tversky is the next best thing. Related: Malcolm Gladwell interviewing Lewis about the book.
- Perfect Rigor: A Genius and the Mathematical Breakthrough of the Century (Masha Gessen): Masha Gessen’s biography (I guess you could call it that) of Grigori Perelman, the eccentric mathematician who solved the Poincaré Conjecture (one of the seven Millennium Problems from the Clay Institute) and then disappeared.
- Jorge Luis Borges: The Last Interview: and Other Conversations (Jorge Luis Borges): A long and fascinating conversation with Borges.
- Hit Refresh: The Quest to Rediscover Microsoft’s Soul and Imagine a Better Future for Everyone (Satya Nadella): Like just about everyone, I’m super impressed with everything Microsoft has done since promoting Nadella to CEO. Although this book promises to be about how it’s all happening, it’s about 75% a commercial for Microsoft’s vision for the future (which although it could be right, is not particularly interesting or original).
- Measure What Matters: How Google, Bono, and the Gates Foundation Rock the World with OKRs (John Doerr): A mostly interesting read about the OKR (objectives and key results) goal setting system.
- A Brief History of Time (Stephen Hawking): If you find yourself in a physics rabbit hole, this seems like something worth reading …
- Dreamtigers (Jorge Luis Borges): I read about this in the Borges interview book. He basically explained that his publisher asked for a book, so he collected a bunch of unpublished poems and stories that were sitting around his house and stuck them together.
Okay, onto some other reading, etc. …
This Wired piece about the possibility of a coming “AI cold war” has two particularly interesting strings in it: One is a fundamental question about the nature of technology and its relationship with democracy (put simply: is the internet better structured to support or defeat democratic ideals?) and the other is about how China (and the US) will use 5G as a power play (“If you are a poor country that lacks the capacity to build your own data network, you’re going to feel loyalty to whoever helps lay the pipes at low cost. It will all seem uncomfortably close to the arms and security pacts that defined the Cold War.”).
I’ve been fascinated by the mysterious attacks against Americans in Cuba since I read about them (probably over a year ago now). I was excited to see the New Yorker finally dig in.
We’ve been having lots of trouble convincing our three-year-old to wear a coat in the cold. Turns out it’s pretty normal.
The Chronicle of Higher Education asked a bunch of academics for their most influential academic book of the last twenty years. Lots of interesting things to read here.
This is from earlier in the year, but it’s worth re-reading Bruce Schneier’s piece on securing elections. More recently he had a good one on mobile phone security.
- Benoît Mandelbrot (of fractal fame) is apparently responsible (at least in part) for the introduction of passwords at IBM. From When Einstein Walked with Gödel (which I’m reading now), “When his son’s high school teacher sought help for a computer class, Mandelbrot obliged, only to find that soon students all over Westchester County were tapping into IBM’s computers by using his name. ‘At that point, the computing center staff had to assign passwords,’ he says. ‘So I can boast-if that’s the right term-of having been at the origin of the police intrusion that this change represented.'”
- Also from the same book, the low numerals are meant to be representative of the number of things they are. Since that makes no sense, here’s the quote from the book: “Even Arabic numerals follow this logic: 1 is a single vertical bar; 2 and 3 began as two and three horizontal bars tied together for ease of writing.”
- When you get helium super cold very strange stuff starts happening.
- A Rochester garbage plate “is your choice of cheeseburger, hamburger, Italian sausages, steak, chicken, white or red hots*, served on top of any combination of home fries, french fries, baked beans, and/or macaroni salad.”
- There’s a taxonomy of parking garage design (image below).
The Barkley Marathons sounds awful.
This hit close to home:
It took 200 years for them to start making brown pointe shoes for non-white ballet dancers …
There’s apparently a big conversation going on in the machine learning community about whether ML is alchemy:
Rahimi believes contemporary machine learning models’ successes — which are mostly based on empirical methods — are plagued with the same issues as alchemy. The inner mechanisms of machine learning models are so complex and opaque that researchers often don’t understand why a machine learning model can output a particular response from a set of data inputs, aka the black box problem. Rahimi believes the lack of theoretical understanding or technical interpretability of machine learning models is cause for concern, especially if AI takes responsibility for critical decision-making.
This is a park covered in spiderwebs:
Tangentially related, here’s how corporate America contributes to politics by industry:
The Article Group email list is worth subscribing to. Back issues here.
I loved this quote from philosopher Daniel Dennett’s talk on what he calls intelligent design (don’t worry, it’s not the same):
Stochastic terrorism is one of those ideas you read once and think about from then on …
I don’t know where I fall on this, but I found Douglas Rushkoff’s argument that universal basic income is a scam being put forward by technology companies fascinating:
Uber’s business plan, like that of so many other digital unicorns, is based on extracting all the value from the markets it enters. This ultimately means squeezing employees, customers, and suppliers alike in the name of continued growth. When people eventually become too poor to continue working as drivers or paying for rides, UBI supplies the required cash infusion for the business to keep operating.
Adam Davidson had a good Twitter thread about “both-sidism” in political reporting.
Wired on “it’s not a bug, it’s a feature”.
The changing landscape of business expenses:
It seems like one out of 100 Players’ Tribune articles is amazing. This one from former Clipper Darius Miles fits the bill.
I’ve been really enjoying John Horgan’s Scientific American blog “Cross-Check”.
David Grann, who is probably my favorite author, snuck a book out without me knowing. Called The White Darkness, it appears to be an expanded version of his New Yorker article about Antarctic explorers from earlier this year (one of my favorites).
Alright, I’m going to cut this here … I’m only caught up to late October, so look out for a part two. Thanks for reading.
Thanks again for reading and for all the positive feedback. Please keep it coming. If you haven’t read any of these yet, the gist is that I’m writing a book about mental models and writing these notes up as I go. You can find links at the bottom to the other frameworks I’ve written. If you haven’t already, please subscribe to the email and share these posts with anyone you think might enjoy them. I really appreciate it.
Credit: Organizational Charts by Manu Cornet

I first ran into Conway’s Law while helping a brand redesign their website. The client, a large consumer electronics company, was insistent that the navigation must offer three options: Shop, Learn, and Support. I valiantly tried to convince them that nobody shopping on the web, or anywhere else, thought about the distinction between shopping and learning, but they remained steadfast in their insistence. What I eventually came to understand is that their stance wasn’t born out of customer need or insight, but rather their own organizational chart, which, shockingly, included a sales department, a marketing department, and a support department.
“Organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.” That’s the way computer scientist and software engineer Melvin Conway put it in a 1968 paper titled “How Do Committees Invent?” His point was that the choices we make before we start designing any system most often fundamentally shape the final output. Or, as he put it, “the very act of organizing a design team means that certain design decisions have already been made.”
Why does this happen, where does it happen, and what can we do about it? That’s the goal of this essay, but before I get there we’ve got to take a short sojourn into the history of the concept. As I mentioned, the idea in its current form came from Melvin Conway in May of 1968. In the article he cited a few key sources as inspiration, including economist John Kenneth Galbraith and historian C. Northcote Parkinson, whose 1957 book Parkinson’s Law and Other Studies in Administration was particularly influential in spelling out the ever-increasing complexity that any bureaucratic organization will create. Finally, judging by the focus on modularity in Conway’s writing, it seems clear he was also inspired by Herbert Simon’s work, in particular his “Architecture of Complexity” paper and the Parable of Two Watchmakers (which I wrote about earlier).
Parkinson aside (and he named his mostly in jest), very few people have the chutzpah to name a law after themselves, and Conway wasn’t responsible for the law’s coining. That came a few months after the “Committees” article was published, from a fan and fellow computer scientist, George Mealy. In his paper for the July 1968 National Symposium on Modular Programming (which I seem to be one of the very few people to have actually tracked down), Mealy examined four bits of “conventional wisdom” that surrounded the development of software systems at the time. Number four came directly from Conway: “Systems resemble the organizations that produced them.” The naming comes three pages in:
Our third aphorism-“if one programmer can do it in one year, two programmers can do it in two years”-is merely a reflection of the great difficulty of communication in a large organization. The crux of the problem of giganticism [sic] and system fiasco really lies in the fourth dogma. This — “systems resemble the organizations that produced them” — has been noticed by some of us previously, but it appears not to have received public expression prior to the appearance of Dr. Melvin E. Conway’s penetrating article in the April 1968 issue of Datamation. The article was entitled “How Do Committees Invent?”. I propose to call my preceding paraphrase of the gist of Conway’s paper “Conway’s Law”.
While most, including Conway on his own website, credit Fred Brooks’ 1975 The Mythical Man-Month with naming the law, it seems that Mealy deserves the credit (though Brooks’ book is surely the reason so many know about Conway’s important concept).

Back to the questions at hand: Why does this happen, where does it happen, and what can we do about it?
Let’s start with the why. This seems like it should be easy to answer, but it’s actually not. The answer starts with some basics of hierarchy and modularity that Herbert Simon offered up in his Parable of Two Watchmakers: Mainly, breaking a system down into sets of modular subsystems seems to be the most efficient design approach in both nature and organizations. For that reason we tend to see companies made up of teams which are then made up of more teams and so-on. But that still doesn’t answer the question of why they tend to design systems in their image. To answer that we turn to some of the more recent research around the “mirroring hypothesis,” which (in simplified terms) is an attempt to prove out Conway’s Law. Carliss Baldwin, a professor at Harvard Business School, seems to be spearheading much of this work and has been an author on two of the key papers on the subject. Most recently, “The mirroring hypothesis: theory, evidence, and exceptions” is a treasure trove of information and citations. Her theory as to why mirroring occurs is essentially that it makes life easier for everyone who works at the company:
The mirroring of technical dependencies and organizational ties can be explained as an approach to organizational problem-solving that conserves scarce cognitive resources. People charged with implementing complex projects or processes are inevitably faced with interdependencies that create technical problems and conflicts in real time. They must arrive at solutions that take account of the technical constraints; hence, they must communicate with one another and cooperate to solve their problems. Communication channels, collocation, and employment relations are organizational ties that support communication and cooperation between individuals, and thus, we should expect to see a very close relationship—technically a homomorphism—between a network graph of technical dependencies within a complex system and network graphs of organizational ties showing communication channels, collocation, and employment relations.
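To make the mirroring idea concrete, here’s a toy sketch of the kind of comparison those papers draw between the two network graphs: one of technical dependencies, one of organizational ties. The component and team names are made up for illustration; the real studies use far richer measures of dependency and communication.

```python
def mirroring_score(tech_deps, org_ties):
    """Fraction of technical dependencies matched by an organizational
    tie between the teams owning each component (1.0 = fully mirrored)."""
    tech = {frozenset(edge) for edge in tech_deps}
    org = {frozenset(edge) for edge in org_ties}
    return len(tech & org) / len(tech) if tech else 1.0

# Hypothetical system: which components depend on which ...
deps = [("ui", "api"), ("api", "db"), ("api", "auth")]
# ... and which of the owning teams actually talk to each other.
ties = [("ui", "api"), ("api", "db")]

print(mirroring_score(deps, ties))  # 2 of 3 dependencies are mirrored
```

A low score flags dependencies that cross team boundaries with no communication channel behind them, which is exactly where Baldwin and her co-authors would expect problems to show up.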
It’s all still a bit circular, but the argument that in most cases a mirrored product is both reasonably optimal from a design perspective (since organizations are structured with hierarchy and modularity) and also cuts down the cognitive load by making it easy for everyone to understand (because it works like an org they already understand) seems like a reasonable one. The paper then goes on to survey the research to understand in what kinds of industries mirroring is most likely to occur, and the answer seems to be everywhere. They found evidence across expected places like software and semiconductors, but also automotive, defense, sports, and even banking and construction. For what it’s worth, I’ve also seen it across industries in marketing projects throughout my own career.
That’s the why and the where, which only leaves us with the question of what an organization can do about it. Here there seem to be a few different approaches. The first one is to do nothing. After all, it may well be the best way to design a system for that organization/problem. The second is to find an appropriate balance. If you buy the idea that some part of mirroring/Conway’s Law is simply about making it easier to understand and maintain systems, then it’s probably good to keep some mirroring. But it doesn’t need to be all or nothing. In the aforementioned paper, Baldwin and her co-authors have a nice little framework for thinking about different approaches to mirroring depending on the kind of business:
As you can see at the bottom of the framework, you have option three: “Strategic mirror-breaking.” This is also sometimes called an “inverse Conway maneuver” in software engineering circles: an approach where you actually adjust your organizational model in order to change the way your systems are architected. Basically, you outline the type of system design you want (most of the time it’s about more modularity) and you back into an org structure that looks like that.
In case it seems like all this might be academic, the architecture of organizations has been shown to have a fundamental effect on a company’s ability to innovate. Tim Harford recently wrote a piece for the Financial Times that heavily quotes a 1990 paper by the economist Rebecca Henderson titled “Architectural Innovation: The Reconfiguration of Existing Product Technologies and the Failure of Established Firms.” The paper outlines how the organizational structure of companies can prevent them from innovating in specific ways. Specifically, it describes the kind of innovation that keeps the shape of the previous generation’s product but completely rewires it: think film cameras to digital, or the Walkman to MP3 players. Here’s Harford describing the idea:
Dominant organisations are prone to stumble when the new technology requires a new organisational structure. An innovation might be radical but, if it fits the structure that already existed, an incumbent firm has a good chance of carrying its lead from the old world to the new.
A case study co-authored by Henderson describes the PC division as “smothered by support from the parent company”. Eventually, the IBM PC business was sold off to a Chinese company, Lenovo. What had flummoxed IBM was not the pace of technological change — it had long coped with that — but the fact that its old organisational structures had ceased to be an advantage. Rather than talk of radical or disruptive innovations, Henderson and Clark used the term “architectural innovation”.
Like I said before, it’s all quite circular. It’s a bit like the famous quote “We shape our tools and thereafter our tools shape us.” Companies organize themselves and in turn design systems that mirror those organizations which in turn further solidify the organizational structure that was first put in place. Conway’s Law is more guiding principle than physical property, but it’s a good model to keep in your head as you’re designing organizations or systems (or trying to disentangle them).
- Arrow, K. J. (1985). Informational structure of the firm. The American Economic Review, 75(2), 303-307.
- Brunton-Spall, M. (2 Nov. 2015). The inverse Conway manoeuvre and security. Medium. Retrieved from https://medium.com/@bruntonspall/the-inverse-conway-manoeuvre-and-security-55ee11e8c3a9
- Colfer, L. J., & Baldwin, C. Y. (2016). The mirroring hypothesis: theory, evidence, and exceptions. Industrial and Corporate Change, 25(5), 709-738.
- Conway, M. E. (1968). How do committees invent? Datamation, 14(4), 28-31.
- Conway, M. E. The Tower of Babel and the fighter plane. Retrieved from http://melconway.com/keynote/Presentation.pdf
- Evans, B. (31 Aug. 2018). Tesla, software and disruption. Benedict Evans. Retrieved from https://www.ben-evans.com/benedictevans/2018/8/29/tesla-software-and-disruption
- Galbraith, J. K. (2001). The Essential Galbraith. HMH.
- Harford, T. (6 Sept. 2018). Why big companies squander good ideas. Financial Times. Retrieved from https://www.ft.com/content/3c1ab748-b09b-11e8-8d14-6f049d06439c
- Henderson, R. M., & Clark, K. B. (1990). Architectural innovation: The reconfiguration of existing product technologies and the failure of established firms. Administrative Science Quarterly, 9-30.
- Hvatum, L. B., & Kelly, A. (2005). What do I think about Conway’s Law now? In EuroPLoP (pp. 735-750).
- Lee, J. A. (1995). International Biographical Dictionary of Computer Pioneers. Taylor & Francis.
- MacCormack, A., Baldwin, C., & Rusnak, J. (2012). Exploring the duality between product and organizational architectures: A test of the “mirroring” hypothesis. Research Policy, 41(8), 1309-1324.
- MacDuffie, J. P. (2013). Modularity-as-property, modularization-as-process, and ‘modularity’-as-frame: Lessons from product architecture initiatives in the global automotive industry. Global Strategy Journal, 3(1), 8-40.
- Mealy, G. (July 1968). How to design modular (software) systems. In Proc. National Symposium on Modular Programming. Information & Systems Institute.
- Newman, S. (30 Jun. 2014). Demystifying Conway’s Law. ThoughtWorks. Retrieved from https://www.thoughtworks.com/insights/blog/demystifying-conways-law
- Parnas, D. L. (1972). On the criteria to be used in decomposing systems into modules. Communications of the ACM, 15(12), 1053-1058.
- Software Engineering Radio. Kevin Goldsmith on architecture and organizational design. Se-radio.net. Retrieved from http://www.se-radio.net/2018/07/se-radio-episode-331-kevin-goldsmith-on-architecture-and-organizational-design/
- Van Dusen, M. (19 May 2016). A principle called “Conway’s Law” reveals a glaring, biased flaw in our technology. Quartz. Retrieved from https://qz.com/687457/a-principle-called-conways-law-reveals-a-glaring-biased-flaw-in-our-technology/
Framework of the Day posts:
Another framework of the day. If you haven’t read the others, the links are all at the bottom. I’m working on a book of mental models and sharing some of the research and writing as I go. This post actually started as writing about Conway’s Law, which is coming soon; I felt like I had to get this one out first, as I would need to rely on some of its research in giving the Law its due. Please let me know what you think, pass this link on, and subscribe to the email if you haven’t already. Thanks for reading.
This framework is a little different than the ones before as it doesn’t come with a nice diagram or four-box. Rather, the Parable of Two Watchmakers is just that: a story about two people putting together complicated mechanical objects. The parable comes from a paper called “The Architecture of Complexity” written by Nobel Prize-winning economist Herbert Simon (you might remember Simon from the theory of satisficing). Beyond being a brilliant economist, Simon was also a major thinker in the worlds of political science, psychology, systems, complexity, and artificial intelligence (in doing this research he climbed up the ranks of my intellectual heroes).
In his 1962 paper he laid out an argument for how complexity emerges, one largely focused on the central role of hierarchy in complex systems. To start, let’s define hierarchy so we’re all on the same page. Here’s Simon:
Etymologically, the word “hierarchy” has had a narrower meaning than I am giving it here. The term has generally been used to refer to a complex system in which each of the subsystems is subordinated by an authority relation to the system it belongs to. More exactly, in a hierarchic formal organization, each system consists of a “boss” and a set of subordinate subsystems. Each of the subsystems has a “boss” who is the immediate subordinate of the boss of the system. We shall want to consider systems in which the relations among subsystems are more complex than in the formal organizational hierarchy just described. We shall want to include systems in which there is no relation of subordination among subsystems. (In fact, even in human organizations, the formal hierarchy exists only on paper; the real flesh-and-blood organization has many inter-part relations other than the lines of formal authority.) For lack of a better term, I shall use hierarchy in the broader sense introduced in the previous paragraphs, to refer to all complex systems analyzable into successive sets of subsystems, and speak of “formal hierarchy” when I want to refer to the more specialized concept.
So it’s more or less the way we think of it, except he’s drawing a distinction between the formal hierarchy we see in an org chart, where each subordinate has just one boss, and the informal hierarchy that actually exists inside organizations, where subordinates interact in a variety of ways. And he points out the many complex systems in which we find hierarchy, including biological ones: “The hierarchical structure of biological systems is a familiar fact. Taking the cell as the building block, we find cells organized into tissues, tissues into organs, organs into systems. Moving downward from the cell, well-defined subsystems — for example, nucleus, cell membrane, microsomes, mitochondria, and so on — have been identified in animal cells.”
The question is why did all these systems come to be arranged this way and what can we learn from them? Here Simon turns to story:
Let me introduce the topic of evolution with a parable. There once were two watchmakers, named Hora and Tempus, who manufactured very fine watches. Both of them were highly regarded, and the phones in their workshops rang frequently — new customers were constantly calling them. However, Hora prospered, while Tempus became poorer and poorer and finally lost his shop. What was the reason?
The watches the men made consisted of about 1,000 parts each. Tempus had so constructed his that if he had one partly assembled and had to put it down — to answer the phone say— it immediately fell to pieces and had to be reassembled from the elements. The better the customers liked his watches, the more they phoned him, the more difficult it became for him to find enough uninterrupted time to finish a watch.
The watches that Hora made were no less complex than those of Tempus. But he had designed them so that he could put together subassemblies of about ten elements each. Ten of these subassemblies, again, could be put together into a larger subassembly; and a system of ten of the latter subassemblies constituted the whole watch. Hence, when Hora had to put down a partly assembled watch in order to answer the phone, he lost only a small part of his work, and he assembled his watches in only a fraction of the man-hours it took Tempus.
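The arithmetic behind the parable holds up in a quick simulation. This is just a toy sketch, not anything from Simon’s paper: I’ve scaled the watch down to 100 parts so it runs quickly, and the 10% chance of an interruption on any single part-placement is an assumption, but the gap between the two strategies is the one the parable describes.

```python
import random

def ops_to_finish(unit_size, p_interrupt, rng):
    """Placements needed to complete one stable unit, when every
    placement risks an interruption that scraps the unfinished unit."""
    ops, placed = 0, 0
    while placed < unit_size:
        ops += 1
        if rng.random() < p_interrupt:
            placed = 0  # the phone rings; the piece falls apart
        else:
            placed += 1
    return ops

def total_ops(stages, p_interrupt, seed=42):
    """Total placements for a build plan given as (unit_size, count) stages."""
    rng = random.Random(seed)
    return sum(ops_to_finish(size, p_interrupt, rng)
               for size, count in stages for _ in range(count))

P = 0.1  # assumed chance the phone rings during any single placement

# Tempus: one monolithic 100-part assembly.
tempus = total_ops([(100, 1)], P)
# Hora: ten 10-part subassemblies, then one final assembly of those ten.
hora = total_ops([(10, 10), (10, 1)], P)

print(f"Tempus: {tempus} placements, Hora: {hora} placements")
```

Hora pays a small overhead for the extra assembly stage, but each interruption costs him at most ten parts of progress, while it can cost Tempus everything, and that difference compounds brutally.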
Whether the complexity emerges from the hierarchy or the hierarchy from the complexity, he illustrates clearly why we see this pattern all around us and articulates the value of the approach. It’s not just hierarchy, he goes on to explain, but also modularity (which he refers to as near-decomposability) that appears to be a fundamental property of complex systems. That is, each of the subsystems operates both independently and as part of the whole. As Simon puts it, “Intra-component linkages are generally stronger than intercomponent linkages” or, even more simply, “In a formal organization there will generally be more interaction, on the average, between two employees who are members of the same department than between two employees from different departments.”
Why is that? Well, for one, it’s an efficiency thing. Just as we see inside organizations, we want to use specialized resources in a specialized way. But beyond that, as Simon outlines in the parable, it’s also about resiliency: By relying on subsystems you have a defense against catastrophic failure when one piece of the whole breaks down. Just as Hora was able to quickly start building again when he put something down, any system made up of subsystems should be much more capable of dealing with changes in environment. It works in organisms, companies, and even empires, as Simon pointed out in The Sciences of the Artificial:
We have not exhausted the categories of complex systems to which the watchmaker argument can reasonably be applied. Philip assembled his Macedonian empire and gave it to his son, to be later combined with the Persian subassembly and others into Alexander’s greater system. On Alexander’s death his empire did not crumble to dust but fragmented into some of the major subsystems that had composed it.
Hopefully the application of this framework is pretty clear (and also instructive) in every day business life. Interestingly, Simon’s theories were the ultimate inspiration for a management fad we saw burn bright (and flame out) just a few years ago: Holacracy, the fluid organizational structure made up of self-organizing teams. Invented by Brian Robertson and made famous by Tony Hsieh and Zappos, the method (it’s a registered trademark) is based on ideas about “holons” from Hungarian author and journalist Arthur Koestler. In his 1967 book The Ghost in the Machine, Koestler repeats Simon’s story of Tempus and Hora and then goes on to theorize that holons (a name he coined “from the Greek holos—whole, with the suffix on (cf. neutron, proton) suggesting a particle or part”) are “meant to supply the missing link between atomism and holism, and to supplant the dualistic way of thinking in terms of ‘parts’ and ‘wholes,’ which is so deeply engrained in our mental habits, by a multi-levelled, stratified approach. A hierarchically-organized whole cannot be “reduced” to its elementary parts; but it can be ‘dissected’ into its constituent branches of holons, represented by the nodes of the tree-diagram, while the lines connecting the holons stand for channels of communication, control or transportation, as the case may be.”
Holacracy aside, there’s a ton of goodness in the parable and the architecture of modularity it posits as critical. It’s not an accident that every company is built this way, and as we think about those companies designing systems, it’s also not surprising that many of those systems follow suit (a good lead-in for Conway’s Law, which is up next). Although I’m pretty out of words at this point, Simon also applies the same hierarchy/modularity concept to problem solving, and there’s a pretty good argument to be made that the “latticework of models” Charlie Munger described in his 1994 USC Business School commencement address would fit the framework.
- Egidi, Massimo, and Luigi Marengo. “Cognition, institutions, near decomposability: rethinking Herbert Simon’s contribution.” (2002).
- Egidi, Massimo. “Organizational learning, problem solving and the division of labour.” Economics, bounded rationality and the cognitive revolution. Aldershot: Edward Elgar (1992): 148-73.
- Koestler, Arthur, and John R. Smythies. Beyond Reductionism: New Perspectives in the Life Sciences [Proceedings of the Alpbach Symposium] (1972).
- Koestler, Arthur. “The ghost in the machine.” (1967).
- Radner, Roy. “Hierarchy: The economics of managing.” Journal of economic literature 30.3 (1992): 1382-1415.
- Simon, Herbert A. “Near decomposability and the speed of evolution.” Industrial and corporate change 11.3 (2002): 587-599.
- Simon, Herbert A. “The Architecture of Complexity.” Proceedings of the American Philosophical Society 106.6 (1962): 467-482.
- Simon, Herbert A. “The science of design: Creating the artificial.” Design Issues (1988): 67-82.
- Simon, Herbert A. The sciences of the artificial. MIT press, 1996.
Framework of the Day posts:
From the MIT Sloan Management Review article “Why Forecasts Fail” (2010) comes this nice little explanation of the different kinds of uncertainty you can face in forecasts (and elsewhere). There is subway uncertainty, which assumes a relatively narrow window of uncertainty. It’s called subway uncertainty because even on the worst day, your subway voyage almost definitely won’t take more than, say, 30 minutes longer than you planned (even if you’re trying to navigate rush hour L trains). On the other end, there’s coconut uncertainty, which is a way to account for relatively common uncommon experiences (if that makes any sense). Here’s how the article explains the difference:
In technical terms, coconut uncertainty can’t be modeled statistically using, say, the normal distribution. That’s because there are more rare and unexpected events than, well, you’d expect. In addition, there’s no regularity in the occurrence of coconuts that can be modeled. And we’re not just talking about Taleb’s “black swans” — truly bizarre events that we couldn’t have imagined. There are also bubbles, recessions and financial crises, which may not occur often but do repeat at infrequent and irregular intervals. Coconuts, in our view, are less rare than you’d think. They don’t need to be big and hairy and come from space. They can also be small and prickly and occur without warning. Coconuts can even be positive: an inheritance from a long-lost relative, a lottery win or a yachting invitation from a rich client.
Knowing which kind of uncertainty you’re working with, and accounting for both, is ultimately how you build a good forecast.
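The distinction shows up nicely in a quick simulation. Here’s a minimal sketch (my own illustration, not from the article — the commute numbers, the 60-minute cutoff, and names like `frac_over` are all made up): subway-style uncertainty modeled as a normal distribution around a 30-minute commute, coconut-style as a fat-tailed Pareto draw with the same center.

```python
# Subway vs. coconut uncertainty: thin tails vs. fat tails.
# All parameters here are arbitrary choices for illustration.
import random

random.seed(42)
N = 100_000

# Subway: commute times cluster tightly around the mean (30 min, sd 5).
subway = [random.gauss(30, 5) for _ in range(N)]

# Coconut: same center, but drawn from a fat-tailed Pareto distribution
# (shifted so its mean is also 30), so extreme delays keep showing up.
coconut = [30 + 5 * (random.paretovariate(1.5) - 3) for _ in range(N)]

def frac_over(xs, cutoff):
    """Fraction of days worse than the cutoff."""
    return sum(x > cutoff for x in xs) / len(xs)

print(f"subway days over 60 min:  {frac_over(subway, 60):.4%}")
print(f"coconut days over 60 min: {frac_over(coconut, 60):.4%}")
```

Under the normal model a 60-minute day is a six-sigma event that essentially never happens; under the fat-tailed model it happens a few percent of the time — more rare events than you’d expect, which is the whole point of coconuts.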
Also from the article is a great story about some research on the efficacy of simple versus complex models. A researcher in the 1970s collected a whole bunch of forecasts and compared how close they were to reality, assuming that the more complex the model, the more accurate it would be. The results showed exactly the opposite: it’s the simpler models that outperformed. Here’s the statistician’s attempt to explain the findings:
His rationale: Complex models try to find nonexistent patterns in past data; simple models ignore such “patterns” and just extrapolate trends. The professor also went on to repeat the “forecasting with hindsight” experiment many times over the years, using increasingly large sets of data and more powerful computers. But the same empirical truth came back each time: Simple statistical models are better at forecasting than complex ones.
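You can reproduce the flavor of that finding in a few lines. This is a toy reconstruction of my own, not the professor’s actual study: the data is an assumed straight trend plus noise, fit once with a line and once with a degree-8 polynomial, and both models then forecast the hold-out points.

```python
# Simple vs. complex forecasting on synthetic data (my own toy example).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-1.0, 1.0, 60)
series = 10 + 5 * t + rng.normal(0, 3, size=60)  # straight trend + noise

# Fit on the first 48 points, forecast the last 12.
train_t, train_y = t[:48], series[:48]
test_t, test_y = t[48:], series[48:]

simple = np.polyfit(train_t, train_y, 1)    # just extrapolates the trend
complex_ = np.polyfit(train_t, train_y, 8)  # chases "patterns" in the noise

def forecast_mae(coeffs):
    """Mean absolute error of the forecast over the hold-out points."""
    return float(np.mean(np.abs(np.polyval(coeffs, test_t) - test_y)))

print(f"simple model forecast MAE:  {forecast_mae(simple):.2f}")
print(f"complex model forecast MAE: {forecast_mae(complex_):.2f}")
```

The polynomial fits the training window beautifully and then swings wildly once it has to extrapolate, while the straight line just keeps following the trend — which is exactly the rationale quoted above.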
Really like this explanation of complexity from Flash Boys by Michael Lewis:
“People think that complex is an advanced state of complicated,” said Zoran [Perkov, head of technology operations for IEX]. “It’s not. A car key is simple. A car is complicated. A car in traffic is complex.”
Reminds me a bit of how Nassim Taleb draws a distinction between robustness, or the ability to withstand disorder, and antifragility, which he explains as actually growing stronger when exposed to those same disorderly forces.
Linked, an amazing book on networks that I’m re-reading, explains the difference between Swiss watches and the Internet:
Understanding the topology of the Internet is a prerequisite for designing tools and services that offer a fast and reliable communication infrastructure. Though human made, the Internet is not centrally designed. Structurally, the Internet is closer to an ecosystem than to a Swiss watch. Therefore, understanding the Internet is not only an engineering or a mathematical problem. In important ways, historical forces shaped its topology. A tangled tale of converging ideas and competing motivations left their mark on the Internet’s structure, creating a jumbled information mass for historians and computer scientists to unravel.