This being Giving Tuesday, I thought it was appropriate to write something up about a non-profit I think is worth supporting: your local library. As always, if you enjoy these posts please sign up for my email so you don’t miss any, and feel free to share with friends. Thanks.
At the beginning of the year I decided I was going to try to read more books. I don’t remember if it was a resolution or what, but I set a goal of thirty and set off on my way, tracking everything on Goodreads (which has legitimately become one of my favorite networks over the last eleven months). This post isn’t about my book list, though (I’ll wrap that up at the end of the year), but rather about the library. Most of the books I’ve read this year have been borrowed using Libby, the app from OverDrive, which manages e-book lending for most libraries (including the New York Public and Brooklyn Public, where I belong).
When I’ve told people about my library habits I’ve gotten two reactions: There’s a group who is amazed that you can borrow Kindle books and promises to immediately go out and get a card, and there’s another who tells me they tried it, but the borrowing just didn’t work for them. They can never find books they want, they explain, and when they do finally find a good e-book to borrow, it’s always on hold. I can’t say much about not finding books you want except that I’ve managed to find lots of books this year that were both well worth reading and immediately available. But that’s not the point of this post. I want to talk about book holds and how they’re better thought of as a feature of the library, not a bug. Let me explain.
Back in 2006 I vividly remember reading about behavioral economics for the first time. I had somehow run across an article about it from Harvard Magazine and there was one bit in particular, about how pre-committing to something can help us work against our instinct to take the easy way out, that fascinated me at the time and still rattles around in my brain to this day. The basic idea, now commonly understood thanks to the rising prominence of behavioral economics, is that humans do a very bad job of valuing things in the future. As a result, we are constantly doing things that give us pleasure in the short term at the expense of the long term. In other words, we promise tomorrow’s self it will read that important book, watch that critically acclaimed film, or finally hit the gym, while today’s self enjoys that trashy novel, watches another dumb sitcom episode, and drinks a few beers with friends instead of exercising.
But there’s a trick to dealing with our irrationality and it’s called pre-committing. The article offered up an analogy by way of Homer:
The goddess Circe informs Odysseus that his ship will pass the island of the Sirens, whose irresistible singing can lure sailors to steer toward them and onto rocks. The Sirens are a marvelous metaphor for human appetite, both in its seductions and its pitfalls. Circe advises Odysseus to prepare for temptations to come: he must order his crew to stopper their ears with wax, so they cannot hear the Sirens’ songs, but he may hear the Sirens’ beautiful voices without risk if he has his sailors lash him to a mast, and commands them to ignore his pleas for release until they have passed beyond danger. “Odysseus pre-commits himself by doing this,” Laibson explains.
Sometimes, as the analogy goes, we’ve got to bind ourselves to the mast of what’s good for us to actually make it happen. Back in 2006 my favorite example of pre-commitment was Netflix. If you can remember back to its days before life as a streaming service, you put a DVD at the bottom of your queue as you chose it and it slowly moved up the list as you watched and returned movies. The beauty of the system was that it disconnected what you wanted to watch from what you actually watched by splitting them up as two different functions (largely the result of needing to mail out DVDs). What it meant for me was that I watched a bunch of great films I’d always wanted to see because they showed up in my mailbox and I didn’t have another good choice. Instead of just watching another mindless procedural crime drama (not that there’s anything wrong with that), I finally got around to watching films from Alfred Hitchcock, Woody Allen, Orson Welles, and a bunch of other filmmakers that had been permanently relegated to deep depths of my mental movie queue.
So back to the library. If you haven’t borrowed an e-book before it works just like borrowing a physical one: The library has a set number of digital copies and if they’re all out at the moment then you get put on a waitlist. The longest you can borrow a book is 21 days and there are no renewals. That means holds often come through at inopportune times. Sometimes that means skipping the book altogether, but more often I’ve found it was just the push I needed to read something I wanted to read in the past but wouldn’t have necessarily made time for in the present.
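Mechanically, a hold list is just a first-in, first-out queue attached to each title. Here’s a minimal sketch of the lending rules described above (the class and method names are my own invention for illustration, not Libby’s or OverDrive’s actual API):

```python
from collections import deque

class Title:
    """Toy model of library e-book lending: fixed copies, FIFO hold queue."""

    LOAN_DAYS = 21  # maximum loan length; no renewals

    def __init__(self, copies):
        self.copies_available = copies
        self.holds = deque()

    def borrow(self, patron):
        """Check out a copy if one is free; otherwise join the hold queue."""
        if self.copies_available > 0:
            self.copies_available -= 1
            return f"checked out to {patron} for {self.LOAN_DAYS} days"
        self.holds.append(patron)
        return f"{patron} is hold #{len(self.holds)}"

    def return_copy(self):
        """A returned copy goes straight to the next patron on hold, if any."""
        if self.holds:
            next_patron = self.holds.popleft()
            return f"hold ready for {next_patron}"
        self.copies_available += 1
        return "back on the shelf"
```

The key property for pre-commitment purposes is in `return_copy`: you don’t choose when your hold comes through, the queue does.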
In other words, in case you needed a reason to appreciate the public library beyond the amazing civic resource it already is (earlier this year one branch of the New York Public Library announced it would lend ties, briefcases, and handbags to people who needed them for job interviews), the borrowing mechanism can actually help you fight some of your more irrational tendencies.
Finally, because I can’t resist, if you appreciate the library it’s worth giving a donation if you can afford it. If you think of all the money you spend on Netflix and the like, it’s hopefully not too much of a hardship to offer your local library a few dollars a month. They’d surely appreciate it.
It’s been a while since I did a Remainders post, so I figured I’d throw one together. In theory it’s all the other stuff I didn’t get a chance to blog about. In reality, it’s pretty much everything I’ve been reading that isn’t about mental models/frameworks (and even some of that). You can find previous versions filed under Remainders and, as always, if you enjoy the writing, please subscribe by email and pass it around.
Let’s start with some books. Here’s what I’ve read in the last three months (in order of when they were read):
- Judas: How a Sister’s Testimony Brought Down a Criminal Mastermind (Astrid Holleeder): Inspired by the New Yorker story by Patrick Radden Keefe about a Dutch woman who eventually testified about her mobster brother, I decided to dig into the English translation. It was a lot more difficult to read than I expected. The New Yorker story, because of length, isn’t able to go into the extensive psychological abuse Holleeder’s brother put his family through. I found it emotionally exhausting about two-thirds into the book.
- Countdown to Zero Day (Kim Zetter): As far as I know this is the definitive book on Stuxnet, the digital weapon that targeted the Iranian nuclear facility at Natanz.
- Complexity: A Guided Tour (Melanie Mitchell): Easily one of my favorite books of the year. I’ve read lots about complexity theory, but nothing that pulled all the various strings together so well. (This also helped send me down a deep physics rabbit hole that I’ve yet to emerge from.)
- My Holiday in North Korea: The Funniest/Worst Place on Earth (Wendy Simmons): I really loved the graphic novel Pyongyang and thought I’d give this travelogue a try when I saw it sitting on a shelf at the bookstore. It was a fine book to read alongside some of the heavier stuff I’ve been reading lately.
- Remote: Office Not Required (Jason Fried): This book sucked, but at least the Audible narration was slow enough that I could crank it up to 2x speed.
- Einstein 1905: The Standard of Genius (John S. Rigden): Like I said, I’ve been falling deeper into a physics rabbit hole, and as part of that I’ve been watching a bunch of physics and math lectures on YouTube. One of them was Douglas Hofstadter essentially trying to recreate a talk he once saw John Rigden, the author of this book, give in 2005. The book, and the talk, are about the ideas behind Einstein’s five papers of 1905 (four of which are considered foundational in physics).
- The Undoing Project: A Friendship That Changed Our Minds (Michael Lewis): I am almost embarrassed to admit I still haven’t read Daniel Kahneman’s Thinking, Fast and Slow (it’s on the list, I swear), so Michael Lewis on the relationship between Kahneman and Tversky is the next best thing. Related: Malcolm Gladwell interviewing Lewis about the book.
- Perfect Rigor: A Genius and the Mathematical Breakthrough of the Century (Masha Gessen): Masha Gessen’s biography (I guess you could call it that) of Grigori Perelman, the eccentric mathematician who solved the Poincaré Conjecture (one of the seven Millennium Prize Problems from the Clay Mathematics Institute) and then disappeared.
- Jorge Luis Borges: The Last Interview: and Other Conversations (Jorge Luis Borges): A long and fascinating conversation with Borges.
- Hit Refresh: The Quest to Rediscover Microsoft’s Soul and Imagine a Better Future for Everyone (Satya Nadella): Like just about everyone, I’m super impressed with everything Microsoft has done since promoting Nadella to CEO. Although this book promises to be about how it’s all happening, it’s about 75 percent a commercial for Microsoft’s vision for the future (which, although it could be right, is not particularly interesting or original).
- Measure What Matters: How Google, Bono, and the Gates Foundation Rock the World with OKRs (John Doerr): A mostly interesting read about the OKR (objectives and key results) goal setting system.
- A Brief History of Time (Stephen Hawking): If you find yourself in a physics rabbit hole, this seems like something worth reading …
- Dreamtigers (Jorge Luis Borges): I read about this in the Borges interview book. He basically explained that his publisher asked for a book, so he collected a bunch of unpublished poems and stories that were sitting around his house and stuck them together.
Okay, onto some other reading, etc. …
This Wired piece about the possibility of a coming “AI cold war” has two particularly interesting strings in it: One is a fundamental question about the nature of technology and its relationship with democracy (put simply: is the internet better structured to support or defeat democratic ideals) and the other is about how China (and the US) will use 5G as a power play (“If you are a poor country that lacks the capacity to build your own data network, you’re going to feel loyalty to whoever helps lay the pipes at low cost. It will all seem uncomfortably close to the arms and security pacts that defined the Cold War.”)
I’ve been fascinated by the mysterious attacks against Americans in Cuba since I read about them (probably over a year ago now). I was excited to see the New Yorker finally dig in.
We’ve been having lots of trouble convincing our three-year-old to wear a coat in the cold. Turns out it’s pretty normal.
The Chronicle of Higher Education asked a bunch of academics for their most influential academic book of the last twenty years. Lots of interesting things to read here.
This is from earlier in the year, but it’s worth re-reading Bruce Schneier’s piece on securing elections. More recently he had a good one on mobile phone security.
- Benoît Mandelbrot (of fractal fame) is apparently responsible (at least in part) for the introduction of passwords at IBM. From When Einstein Walked with Gödel (which I’m reading now), “When his son’s high school teacher sought help for a computer class, Mandelbrot obliged, only to find that soon students all over Westchester County were tapping into IBM’s computers by using his name. ‘At that point, the computing center staff had to assign passwords,’ he says. ‘So I can boast-if that’s the right term-of having been at the origin of the police intrusion that this change represented.'”
- Also from the same book, the low numerals are meant to be representative of the number of things they are. Since that makes no sense, here’s the quote from the book: “Even Arabic numerals follow this logic: 1 is a single vertical bar; 2 and 3 began as two and three horizontal bars tied together for ease of writing.”
- When you get helium super cold very strange stuff starts happening.
- A Rochester garbage plate “is your choice of cheeseburger, hamburger, Italian sausages, steak, chicken, white or red hots*, served on top of any combination of home fries, french fries, baked beans, and/or macaroni salad.”
- There’s a taxonomy of parking garage design (image below).
The Barkley Marathons sound awful.
This hit close to home:
It took 200 years for them to start making brown pointe shoes for non-white ballet dancers …
There’s apparently a big conversation going on in the machine learning community about whether ML is alchemy:
Rahimi believes contemporary machine learning models’ successes — which are mostly based on empirical methods — are plagued with the same issues as alchemy. The inner mechanisms of machine learning models are so complex and opaque that researchers often don’t understand why a machine learning model can output a particular response from a set of data inputs, aka the black box problem. Rahimi believes the lack of theoretical understanding or technical interpretability of machine learning models is cause for concern, especially if AI takes responsibility for critical decision-making.
This is a park covered in spiderwebs:
Tangentially related, here’s how corporate America contributes to politics by industry:
The Article Group email list is worth subscribing to. Back issues here.
I loved this quote from philosopher Daniel Dennett’s talk on what he calls intelligent design (don’t worry, it’s not the same):
Stochastic terrorism is one of those ideas you read once and think about from then on …
I don’t know where I fall on this, but I found Douglas Rushkoff’s argument that universal basic income is a scam being put forward by technology companies fascinating:
Uber’s business plan, like that of so many other digital unicorns, is based on extracting all the value from the markets it enters. This ultimately means squeezing employees, customers, and suppliers alike in the name of continued growth. When people eventually become too poor to continue working as drivers or paying for rides, UBI supplies the required cash infusion for the business to keep operating.
Adam Davidson had a good Twitter thread about “both-sidism” in political reporting.
Wired on “it’s not a bug, it’s a feature”.
The changing landscape of business expenses:
It seems like one out of every 100 Players’ Tribune articles is amazing. This one from former Clipper Darius Miles fits the bill.
I’ve been really enjoying John Horgan’s Scientific American blog “Cross-Check”.
David Grann, who is probably my favorite author, snuck a book out without me knowing. Called The White Darkness, it appears to be an expanded version of his New Yorker article about Antarctic explorers from earlier this year (one of my favorites).
Alright, I’m going to cut this here … I’m only caught up to late October, so look out for a part two. Thanks for reading.
If you haven’t read any of these yet, the gist is that I’m writing a book about mental models and writing these notes up as I go. You can find links at the bottom to the other frameworks I’ve written. If you haven’t already, please subscribe to the email and share these posts with anyone you think might enjoy them. I really appreciate it.
The vast majority of the models I’ve written about were ones that I discovered at one time or another and have adopted for my own knowledge portfolio. The Variance Spectrum, on the other hand, I came up with. Its origin was in trying to answer a question about why there wasn’t a centralized “system of record” for marketing in the same way you would find one in finance (ERP) or sales (CRM). My best answer was that the output of marketing made it particularly difficult to design a system that could satisfy the needs of all its users. Specifically, I felt as though the variance of marketing’s output, the fact that each campaign and piece of content is meant to be different from the one that came before it, made for an environment that at first seemed opposed to the basics of systemization that the rest of the company had come to accept.
To illustrate the idea I plotted a spectrum. The left side represented zero variance, the realm of manufacturing and Six Sigma, and the right was 100 percent variance, where R&D and innovation reign supreme.
While the poles of the spectrum help explain it, it’s what you place in the middle that makes it powerful. For example, we could plot the rest of the departments in a company by the average variance of their output (finance is particularly low since so much of the department’s output is “governed” — quite literally the government sets GAAP accounting standards and mandates specific tax forms). Sales is somewhere in the middle: A pretty good mix of process and methodology plus the “art of the deal”. Marketing, meanwhile, sits off to the right, just behind R&D.
But that’s just the first layer. Like so many parts of an organization (and as described in my essays on both The Parable of Two Watchmakers and Conway’s Law), companies are hierarchical, and at any point on the spectrum you can drill in and find a whole new spectrum of activities that range from low variance to high variance. That is, while finance may be “low variance” on average thanks to government standards, forecasting and modeling is most certainly a high variance function: something that must be imagined in original ways depending on a number of variables, including the company and its products and markets (to name a few). Zooming in on marketing we find a whole new set of processes that can themselves be plotted based on the variance of their output, with governance far to the low variance side and creative development clearly on the other pole. Another way to articulate these differences is that the low variance side represents the routine processes and the right side the creative ones.
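To make the spectrum concrete, here’s a toy sketch that places functions on a low-to-high variance axis. The scores are my own rough guesses purely for illustration, not measurements of anything:

```python
# Illustrative only: the variance scores are rough guesses, not measurements.
spectrum = {
    "manufacturing": 0.05,  # Six Sigma territory: near-zero variance
    "finance": 0.15,        # output largely "governed" by GAAP, tax forms
    "sales": 0.50,          # process and methodology plus the art of the deal
    "marketing": 0.80,      # every campaign meant to differ from the last
    "R&D": 0.95,            # the goal is to make something new
}

def render(spectrum, width=40):
    """Print each function as a marker on a low-to-high variance axis."""
    lines = []
    for name, v in sorted(spectrum.items(), key=lambda kv: kv[1]):
        pos = round(v * (width - 1))
        axis = "-" * pos + "x" + "-" * (width - 1 - pos)
        lines.append(f"{name:>13} |{axis}|")
    return "\n".join(lines)

print(render(spectrum))
```

The drill-down point from the paragraph above maps naturally onto this: any single entry (say, `finance`) could be replaced by its own nested `spectrum` dict, from governed reporting on the left to forecasting and modeling on the right.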
While I haven’t seen anyone else plot things quite this way, the idea that there are fundamentally different kinds of tasks within a company is not new. Organizational theorists Richard Cyert, Herbert Simon, and Donald Trow noted this duality in a 1956 paper called “Observation of a Business Decision”:
At one extreme we have repetitive, well-defined problems (e.g., quality control or production lot-size problems) involving tangible considerations, to which the economic models that call for finding the best among a set of pre-established alternatives can be applied rather literally. In contrast to these highly programmed and usually rather detailed decisions are problems of a non-repetitive sort, often involving basic long-range questions about the whole strategy of the firm or some part of it, arising initially in a highly unstructured form and requiring a great deal of the kinds of search processes listed above. In this whole continuum, from great specificity and repetition to extreme vagueness and uniqueness, we will call decisions that lie toward the former extreme programmed, and those lying toward the latter end non-programmed. This simple dichotomy is just a shorthand for the range of possibilities we have indicated.
This also introduces an interesting additional way to think about the spectrum: The left side represents the tasks where you have the most clarity about the final goal (in manufacturing you know exactly what you want the output to look like when it’s done) and the right the most ambiguity (the goal of R&D is to make something new). For that reason, high variance tasks should also fail far more often than their low variance counterparts: Nine failures out of every ten new product ideas might be a good batting average, but if you are throwing away 90 percent of your manufactured output you’ve massively failed.
Even though it may be tempting, that’s not a reason to focus purely on the well-structured, low-variance problems, as Richard Cyert laid out in a 1994 paper titled “Positioning the Organization“:
It is difficult to deal with the uncertainty of the future, as one must to relate an organization to others in the industry and to events in the economy that may affect it. One must look ahead to determine what forces are at work and to examine the ways in which they will affect the organization. These activities are less structured and more ambiguous than dealing with concrete problems and, therefore, the CEO may have trouble focusing on them. Many experiments show that structured activity drives out unstructured. For example, it is much easier to answer one’s mail than to develop a plan to change the culture of the organization. The implications of change are uncertain and the planning is unstructured. One tends to avoid uncertainty and to concentrate on structured problems for which one can correctly predict the solutions and implications.
Going a level deeper, another way to cut the left and right sides of the spectrum is based on the most appropriate way to solve the problem. For the routine tasks you want to have a single way of doing things in an attempt to push down the variance of the output while on the high variance side you have much more freedom to try different approaches. In software terms this can be expressed as automation and collaboration respectively.
While this is primarily a framework for thinking about process, there’s a more personal way to think about the variance spectrum as it relates to giving feedback to others. It’s a common occurrence that employees over- or misinterpret the feedback of more senior members of the team. I experienced this many times myself in my role as CEO. Because words from the leader of a company are often taken literally, an aside about something like color choice in a design comp can easily be misconstrued as an order to change it when it wasn’t meant that way. The variance spectrum in that context can be used to make explicit where the feedback falls: Is it a low variance order you expect to be acted on, or a high variance comment that is simply your two cents? I found this could help avoid ambiguity and also make it clearer that I respected their expertise.
- Cyert, R. M., Simon, H. A., & Trow, D. B. (1956). Observation of a business decision. The Journal of Business, 29(4), 237-248.
- Cyert, R. M. (1994). Positioning the organization. Interfaces, 24(2), 101-104.
- Dong, J., March, J. G., & Workiewicz, M. (2017). On organizing: an interview with James G. March. Journal of Organization Design, 6(1), 14.
- March, J. G. (2010). The ambiguities of experience. Cornell University Press.
- Simon, H. A. (2013). Administrative behavior. Simon and Schuster.
- Stene, E. O. (1940). An approach to a science of administration. American Political Science Review, 34(6), 1124-1137.
Framework of the Day posts:
Thanks again for reading and for all the positive feedback. Please keep it coming.
Credit: Organizational Charts by Manu Cornet

I first ran into Conway’s Law while helping a brand redesign their website. The client, a large consumer electronics company, was insistent that the navigation must offer three options: Shop, Learn, and Support. I valiantly tried to convince them that nobody shopping on the web, or anywhere else, thought about the distinction between shopping and learning, but they remained steadfast. What I eventually came to understand is that their stance wasn’t born out of customer need or insight, but rather out of their own organizational chart, which, shockingly, included a sales department, a marketing department, and a support department.
“Organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.” That’s the way computer scientist and software engineer Melvin Conway put it in a 1968 paper titled “How Do Committees Invent?” His point was that the choices we make before we start designing any system most often fundamentally shape the final output. Or, as he put it, “the very act of organizing a design team means that certain design decisions have already been made.”
Why does this happen, where does it happen, and what can we do about it? That’s the goal of this essay, but before I get there we’ve got to take a short sojourn into the history of the concept. As I mentioned, the idea in its current form came from Melvin Conway in May of 1968. In the article he cited a few key sources as inspiration, including economist John Kenneth Galbraith and historian C. Northcote Parkinson, whose 1957 book Parkinson’s Law and Other Studies in Administration was particularly influential in spelling out the ever-increasing complexity that any bureaucratic organization will create. Finally, judging by the focus on modularity in Conway’s writing, it seems clear he was also inspired by Herbert Simon’s work, in particular his “Architecture of Complexity” paper and the Parable of Two Watchmakers (which I wrote about earlier).
Parkinson aside (and he did so mostly in jest), very few have the chutzpah to actually name a law after themselves, and Conway wasn’t responsible for the law’s coining. That came a few months after the “Committees” article was published, from a fan and fellow computer scientist, George Mealy. In his paper for the July 1968 National Symposium on Modular Programming (which I seem to be one of the very few people to have actually tracked down), Mealy examined four bits of “conventional wisdom” that surrounded the development of software systems at the time. Number four came directly from Conway: “Systems resemble the organizations that produced them.” The naming comes three pages in:
Our third aphorism-“if one programmer can do it in one year, two programmers can do it in two years”-is merely a reflection of the great difficulty of communication in a large organization. The crux of the problem of giganticism [sic] and system fiasco really lies in the fourth dogma. This — “systems resemble the organizations that produced them” — has been noticed by some of us previously, but it appears not to have received public expression prior to the appearance of Dr. Melvin E. Conway’s penetrating article in the April 1968 issue of Datamation. The article was entitled “How Do Committees Invent?”. I propose to call my preceding paraphrase of the gist of Conway’s paper “Conway’s Law”.
While most, including Conway on his own website, credit Fred Brooks’ 1975 The Mythical Man-Month with naming the law, it seems that Mealy deserves the credit (though Brooks’ book is surely the reason so many know about Conway’s important concept). Back to the questions at hand: Why does this happen, where does it happen, and what can we do about it?
Let’s start with the why. This seems like it should be easy to answer, but it’s actually not. The answer starts with some basics of hierarchy and modularity that Herbert Simon offered up in his Parable of Two Watchmakers: mainly, breaking a system down into sets of modular subsystems seems to be the most efficient design approach in both nature and organizations. For that reason we tend to see companies made up of teams, which are in turn made up of more teams, and so on. But that still doesn’t answer the question of why they tend to design systems in their image. To answer that we turn to some of the more recent research around the “mirroring hypothesis,” which (in simplified terms) is an attempt to prove out Conway’s Law. Carliss Baldwin, a professor at Harvard Business School, seems to be spearheading much of this work and has been an author on two of the key papers on the subject. The most recent, “The mirroring hypothesis: theory, evidence, and exceptions,” is a treasure trove of information and citations. Her theory as to why mirroring occurs is essentially that it makes life easier for everyone who works at the company:
The mirroring of technical dependencies and organizational ties can be explained as an approach to organizational problem-solving that conserves scarce cognitive resources. People charged with implementing complex projects or processes are inevitably faced with interdependencies that create technical problems and conflicts in real time. They must arrive at solutions that take account of the technical constraints; hence, they must communicate with one another and cooperate to solve their problems. Communication channels, collocation, and employment relations are organizational ties that support communication and cooperation between individuals, and thus, we should expect to see a very close relationship—technically a homomorphism—between a network graph of technical dependencies within a complex system and network graphs of organizational ties showing communication channels, collocation, and employment relations.
It’s all still a bit circular, but the argument seems reasonable: in most cases a mirrored product is both reasonably optimal from a design perspective (since organizations are structured with hierarchy and modularity) and cuts down on cognitive load by making the system easy for everyone to understand (because it works like an org they already know). The paper then goes on to survey the research to understand in which kinds of industries mirroring is most likely to occur, and the answer seems to be everywhere. They found evidence across expected places like software and semiconductors, but also automotive, defense, sports, and even banking and construction. For what it’s worth, I’ve also seen it across industries in marketing projects throughout my own career.
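The homomorphism Baldwin describes between the technical dependency graph and the organizational-ties graph suggests a crude way to quantify mirroring: count how many technical dependencies have a matching organizational tie. This is purely a toy sketch under the (big) assumption that you can enumerate both edge lists; the function name and example data are hypothetical:

```python
def mirroring_score(tech_deps, org_ties):
    """Fraction of technical dependencies matched by an organizational tie.

    Edges are unordered pairs; a score near 1.0 means the system's
    structure closely mirrors the organization's communication structure.
    """
    tech = {frozenset(edge) for edge in tech_deps}
    org = {frozenset(edge) for edge in org_ties}
    if not tech:
        return 1.0  # nothing to mirror
    return len(tech & org) / len(tech)

# Hypothetical example: module dependencies vs. which teams actually talk.
deps = [("ui", "api"), ("api", "db"), ("ui", "billing")]
ties = [("ui", "api"), ("api", "db")]
print(mirroring_score(deps, ties))  # 2 of 3 dependencies are mirrored
```

In the real research the comparison is between full network graphs rather than a single ratio, but the intuition is the same: the unmatched `("ui", "billing")` dependency is exactly the kind of seam where conflicts surface, because the teams who share a technical constraint have no channel to resolve it.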
That’s the why and the where, which only leaves us with the question of what an organization can do about it. Here there seem to be a few different approaches. The first is to do nothing. After all, mirroring may well be the best way to design a system for that organization and problem. The second is to find an appropriate balance. If you buy the idea that some part of mirroring/Conway’s Law is simply about making systems easier to understand and maintain, then it’s probably good to keep some mirroring. But it doesn’t need to be all or nothing. In the aforementioned paper, Baldwin and her co-authors have a nice little framework for thinking about different approaches to mirroring depending on the kind of business:
As you see at the bottom of the framework you have option three: “Strategic mirror-breaking.” This is also sometimes called an “inverse Conway maneuver” in software engineering circles: An approach where you actually adjust your organizational model in order to change the way your systems are architected. Basically you attempt to outline the type of system design you want (most of the time it’s about more modularity) and you back into an org structure that looks like that.
In case it seems like all this might be academic, the architecture of organizations has been shown to have a fundamental effect on a company’s ability to innovate. Tim Harford recently wrote a piece for the Financial Times that heavily quotes a 1990 paper by economist Rebecca Henderson titled “Architectural Innovation: The Reconfiguration of Existing Product Technologies and the Failure of Established Firms.” The paper outlines how the organizational structure of companies can prevent them from innovating in specific ways. Most specifically, it describes the kind of innovation that keeps the shape of the previous generation’s product but completely rewires it: Think film cameras to digital, or the Walkman to MP3 players. Here’s Harford describing the idea:
Dominant organisations are prone to stumble when the new technology requires a new organisational structure. An innovation might be radical but, if it fits the structure that already existed, an incumbent firm has a good chance of carrying its lead from the old world to the new.
A case study co-authored by Henderson describes the PC division as “smothered by support from the parent company”. Eventually, the IBM PC business was sold off to a Chinese company, Lenovo. What had flummoxed IBM was not the pace of technological change — it had long coped with that — but the fact that its old organisational structures had ceased to be an advantage. Rather than talk of radical or disruptive innovations, Henderson and Clark used the term “architectural innovation”.
Like I said before, it’s all quite circular. It’s a bit like the famous quote “We shape our tools and thereafter our tools shape us.” Companies organize themselves and in turn design systems that mirror those organizations which in turn further solidify the organizational structure that was first put in place. Conway’s Law is more guiding principle than physical property, but it’s a good model to keep in your head as you’re designing organizations or systems (or trying to disentangle them).
- Arrow, K. J. (1985). Informational structure of the firm. The American Economic Review, 75(2), 303-307.
- Brunton-Spall, Michael (2 Nov. 2015). The Inverse Conway Manoeuvre and Security. Medium. Retrieved from https://medium.com/@bruntonspall/the-inverse-conway-manoeuvre-and-security-55ee11e8c3a9
- Colfer, L. J., & Baldwin, C. Y. (2016). The mirroring hypothesis: theory, evidence, and exceptions. Industrial and Corporate Change, 25(5), 709-738.
- Conway, Melvin E. “How do committees invent.” Datamation 14.4 (1968): 28-31.
- Conway, Melvin E. “The Tower of Babel and the Fighter Plane.” Retrieved from http://melconway.com/keynote/Presentation.pdf
- Evans, Benedict (31 Aug. 2018). Tesla, software and disruption. Benedict Evans. Retrieved from https://www.ben-evans.com/benedictevans/2018/8/29/tesla-software-and-disruption
- Galbraith, J. K. (2001). The essential galbraith. HMH.
- Harford, Tim (6 Sept. 2018). Why big companies squander good ideas. Financial Times. Retrieved from https://www.ft.com/content/3c1ab748-b09b-11e8-8d14-6f049d06439c
- Henderson, R. M., & Clark, K. B. (1990). Architectural innovation: The reconfiguration of existing product technologies and the failure of established firms. Administrative science quarterly, 9-30.
- Hvatum, L. B., & Kelly, A. (2005). What do I think about Conway’s Law now?. In EuroPLoP (pp. 735-750).
- Lee, J. A. (1995). International biographical dictionary of computer pioneers. Taylor & Francis.
- MacCormack, A., Baldwin, C., & Rusnak, J. (2012). Exploring the duality between product and organizational architectures: A test of the “mirroring” hypothesis. Research Policy, 41(8), 1309-1324.
- MacDuffie, J. P. (2013). Modularity‐as‐property, modularization‐as‐process, and ‘modularity’‐as‐frame: Lessons from product architecture initiatives in the global automotive industry. Global Strategy Journal, 3(1), 8-40.
- Mealy, George, “How to Design Modular (Software) Systems,” Proc. Nat’l. Symp. Modular Programming, Information & Systems Institute, July 1968.
- Newman, Sam (30 Jun. 2014). Demystifying Conway’s Law. ThoughtWorks. Retrieved from https://www.thoughtworks.com/insights/blog/demystifying-conways-law
- Parnas, D. L. (1972). On the criteria to be used in decomposing systems into modules. Communications of the ACM, 15(12), 1053-1058.
- Software Engineering Radio. Kevin Goldsmith on Architecture and Organizational Design : Software Engineering Radio. Se-radio.net. Retrieved from http://www.se-radio.net/2018/07/se-radio-episode-331-kevin-goldsmith-on-architecture-and-organizational-design/
- Van Dusen, Matthew (19 May 2016). A principle called “Conway’s Law” reveals a glaring, biased flaw in our technology. Quartz. Retrieved from https://qz.com/687457/a-principle-called-conways-law-reveals-a-glaring-biased-flaw-in-our-technology/
Framework of the Day posts:
I’m still hard at work on writing up Conway’s Law, so sharing something I wrote a few months ago that I haven’t posted yet. If you are following along, I’m working on a book about the frameworks we all use to understand the world and these are some drafts of the work. I appreciate any feedback and hope you’ll subscribe by email if you haven’t. Thanks for reading.
Most people know the Pareto principle by its more common name, “the 80/20 rule.” Its story starts in the late 1800s with the Italian economist Vilfredo Pareto. Responsible for a number of economic breakthroughs, Pareto became particularly interested in the distribution of income. After collecting wealth and tax data from a variety of countries, he noticed a consistent pattern in the distributions. As he originally outlined in his first major work, Cours d’Économie Politique, across countries roughly 20 percent of the population seemed to control around 80 percent of the income.
Source: “The Curve of the Distribution of Wealth.” History of Economic Ideas 17.1 (Translation: 2009)
Although he had uncovered the phenomenon, Pareto wasn’t sure why it existed:
It is not easy to understand a priori how and why this should happen. As I said in my Cours, it seems to me probable that the income curve is in some way dependent on the law of the distribution of the mental and physiological qualities of a certain number of individuals. If such is really the case, we can catch a glimpse of the reason why approximately the same law is to be found in the most varied manifestations of human activity. But, instead of seeing those phenomena only in dim outlines, we would like to perceive them clearly and precisely, and up till now I have not succeeded in doing so.
The specifics of 80 and 20 aren’t critical; the point is that a small portion of a given population tends to account for a large portion of some other resource. As time has gone on, we’ve found evidence for Pareto’s discovery in more and more systems: Just a few scientific papers grab most of the citations, a small portion of a company’s customers tends to be responsible for a large share of its profits, a tiny number of users tends to generate the vast majority of customer service requests, and a “vital few” factory defects account for the bulk of production issues.
It’s that last one about factories that we have to thank for the popularity of the Pareto principle. Quality control pioneer (and catchy name-coiner) Joseph Juran explains:
It was during the late 1940s, when I was preparing the manuscript for Quality Control Handbook, First Edition, that I was faced squarely with the need for giving a short name to the universal. In the resulting write-up under the heading “Maldistribution of Quality Losses,” I listed numerous instances of such maldistribution as a basis for generalization. I also noted that Pareto had found wealth to be maldistributed. In addition, I showed examples of the now familiar cumulative curves, one for maldistribution of wealth and the other for maldistribution of quality losses. The caption under these curves reads “Pareto’s principle of unequal distribution applied to distribution of wealth and to distribution of quality losses.”
Juran went on to become an important management thinker and the Pareto principle spread through industry and the broader world. At this point the 80/20 rule has become a basic and helpful mental model that many managers understand.
But we still haven’t answered Pareto’s original question: What is it about human nature that causes this massive imbalance to continually emerge in such a variety of systems? To answer that we turn to Albert-László Barabási and his study of networks. As the web was emerging, Barabási and his colleagues were busy analyzing the new and rich datasets it generated. Every time they dug in, the same odd pattern emerged.
In one of their studies, the team set up a crawler to look at how different web pages linked to each other. Expecting to see a bell curve, they instead spotted something very different: “the network our robot brought back from its journey had many nodes with a few links only, and a few hubs with an extraordinarily large number of links.” Barabási continues, “The biggest surprise came when we tried to fit the histogram of the node connectivity on a so-called log-log plot. The fit told us that the distribution of links on various Webpages precisely follows a mathematical expression called a power law.”
What made this discovery so important was that power laws are a signal that you’re not working with random data. If you chart random (or more precisely disconnected) data points, like the heights of people in your town or the scores of students on a test, you see a bell curve distribution. However, if you chart non-random interdependent data points you get the power curve that Barabási kept seeing:
Power laws rarely emerge in systems completely dominated by a roll of the dice. Physicists have learned that most often they signal a transition from disorder to order. Thus the power laws we spotted on the Web indicated, for the first time in precise mathematical terms, that real networks are far from random. Complex networks finally started to speak to us in a language that scientists trained in self-organization and complexity could finally understand. They spoke of order and emerging behavior. We just needed to listen carefully.
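A minimal sketch of the distinction Barabási describes, using synthetic data rather than his actual crawl (the sample sizes and parameters are illustrative assumptions): independent measurements like heights produce a bell curve, while Pareto-style samples fall along a straight line on a log-log plot, and the line’s slope recovers the power-law exponent.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Disconnected" data, like the heights of people in a town: a bell curve.
heights = rng.normal(loc=170, scale=10, size=100_000)

# Interdependent, network-like data: power-law (Pareto) samples.
degrees = rng.pareto(a=2.1, size=100_000) + 1  # classical Pareto, x_min = 1

# On a log-log plot a power law's tail (its CCDF) is a straight line;
# fit the slope with ordinary least squares on the log-transformed values.
xs = np.sort(degrees)
ccdf = 1.0 - np.arange(len(xs)) / len(xs)  # P(X >= x) at each sorted value
mask = ccdf > 1e-4                          # drop the noisiest tail points
slope, _ = np.polyfit(np.log(xs[mask]), np.log(ccdf[mask]), 1)
print(f"fitted power-law exponent ~ {-slope:.2f}")  # should land near 2.1
```

Trying the same fit on the heights data wouldn’t work: a bell curve bends sharply away on log-log axes instead of tracing a line, which is exactly the contrast Barabási’s team saw.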
So we come full circle back to Pareto, who once explained that, “The molecules in the social system are interdependent in space and in time. Their interdependence in space becomes apparent in the mutual relations that subsist between social phenomena.” The 80/20 rule is present in systems of self-organizing, interdependent parts, and it’s subject to the same cumulative advantage mechanics we saw with popular music. That’s why the pattern emerges so often in companies and markets: It means a huge number of forces are pushing and, critically, reacting to each other at the same time.
As should be reasonably obvious, the 80/20 rule has a number of important effects and implications for everyday business and life (many of which will come up in other models). First, understanding when you’re working in a system susceptible to the Pareto principle is critical. Once understood, being able to accurately isolate the 20 percent and find ways to make it less interdependent can fundamentally alter the balance of the equation. One of the simplest conclusions to be drawn from the 80/20 rule is that sometimes you need to fire a customer or an employee who is responsible for eating up the majority of your resources, as painful as that choice may be.
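As a rough illustration of isolating that 20 percent (the customer data below is simulated, not from any real business): draw per-customer revenue from a heavy-tailed distribution — a Pareto shape parameter near 1.16 gives roughly an 80/20 split in expectation — then measure what share of total revenue the top fifth of customers represent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated revenue per customer; shape a = 1.16 makes the expected
# share of the top 20% of customers come out near 80%.
revenue = rng.pareto(a=1.16, size=10_000) + 1

revenue_sorted = np.sort(revenue)[::-1]          # biggest customers first
top_fifth = revenue_sorted[: len(revenue_sorted) // 5]
share = top_fifth.sum() / revenue_sorted.sum()
print(f"Top 20% of customers account for {share:.0%} of revenue")
```

Because the distribution’s variance is infinite at this shape, any single sample will wander around the 80 percent mark, but the lopsided concentration itself shows up every time.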
- Alexander, James. “Vilfredo Pareto: Sociologist and Philosopher.” Ihr.org. n.d. Web. 17 Dec. 2017. <http://www.ihr.org/jhr/v14/v14n5p10_Alexander.html>
- Aspers, Patrik. “Crossing the boundary of economics and sociology: The case of Vilfredo Pareto.” American Journal of Economics and Sociology 60.2 (2001): 519-545.
- Bunkley, Nick. “Joseph Juran, 103, Pioneer in Quality Control, Dies.” Nytimes.com. 3 Mar. 2008. Web. 17 Dec. 2017. <https://www.nytimes.com/2008/03/03/business/03juran.html>
- Chipman, John S. “Pareto: Manuel d’Économie Politique.” English translation, available at http://www.econ.umn.edu/~jchipman/DALLOZ5.pdf, of the entry in Dictionnaire des grandes oeuvres d’économie, X. Greffe, J. Lallemant and M. De Vroey (eds), Paris: Dalloz (2002): 424-433.
- Cirillo, Renato. “Was Vilfredo Pareto Really a ‘Precursor’ of Fascism.?.” American Journal of Economics and Sociology 42.2 (1983): 235-246.
- Crawford, Walt. “Exceptional institutions: libraries and the Pareto principle.” American Libraries 32.6 (2001): 72-74.
- Edgeworth, F. Y., and Vilfredo Pareto. “Controversy Between Pareto and Edgeworth.” Giornale degli Economisti e Annali di Economia 67.3 (2008): 425-440.
- Hazlitt, Henry. “Pareto’s Picture of Society: His Monumental Work Covers an Enormous Field of Knowledge.” New York Times (May 26, 1935).
- Juran, Joseph M. “Pareto, lorenz, cournot, bernoulli, juran and others.” (1950).
- Juran, Joseph, and A. Blanton Godfrey. “Quality handbook.” Republished McGraw-Hill (1999).
- Juran, Joseph M. “The non-Pareto principle; mea culpa.” Quality Progress 8.5 (1975): 8-9.
- Juran, Joseph M. “Universals in management planning and controlling.” Management Review 43.11 (1954): 748-761.
- Koch, Richard. The 80/20 principle: the secret to achieving more with less. Crown Business, 2011.
- Lopreato, Joseph. “Notes on the work of Vilfredo Pareto.” Social Science Quarterly (1973): 451-468.
- Mandelbrot, Benoit, and Richard L. Hudson. The Misbehavior of Markets: A fractal view of financial turbulence. Basic books, 2007.
- Moore, H. L. “Cours d’Économie Politique. By VILFREDO PARETO, Professeur à l’Université de Lausanne. Vol. I. Pp. 430. 1896. Vol. II. Pp. 426. 1897. Lausanne: F. Rouge.” The ANNALS of the American Academy of Political and Social Science 9.3 (1897): 128-131.
- Pareto, Vilfredo. “Supplement to the Study of the Income Curve.” Giornale degli Economisti e Annali di Economia 67.3 (2008): 441-451.
- Pareto, Vilfredo. “The Curve of the Distribution of Wealth.” History of Economic Ideas 17.1 (2009): 132-143.
- Pareto, Vilfredo. The mind and society: Trattato di sociologia generale. AMS Press, 1935.
- Tarascio, Vincent J. “The Pareto law of income distribution.” Social Science Quarterly (1973): 525-533.
Another framework of the day. If you haven’t read the others, the links are all at the bottom. I’m working on a book of mental models and sharing some of the research and writing as I go. This post actually started in writing about Conway’s Law, which is coming soon; I felt I had to get this one out first, as I’ll need to rely on some of this research in giving the Law its due. Please let me know what you think, pass this link on, and subscribe to the email if you haven’t already. Thanks for reading.
This framework is a little different than the ones before as it doesn’t come with a nice diagram or four-box. Rather, the Parable of Two Watchmakers is just that: A story about two people putting together complicated mechanical objects. The parable comes from a paper called “The Architecture of Complexity” written by Nobel Prize-winning economist Herbert Simon (you might remember Simon from the theory of satisficing). Beyond being a brilliant economist, Simon was also a major thinker in the worlds of political science, psychology, systems, complexity, and artificial intelligence (in doing this research he climbed up the ranks of my intellectual heroes).
In that 1962 paper he laid out an argument for how complexity emerges, one largely focused on the central role of hierarchy in complex systems. To start, let’s define hierarchy so we’re all on the same page. Here’s Simon:
Etymologically, the word “hierarchy” has had a narrower meaning than I am giving it here. The term has generally been used to refer to a complex system in which each of the subsystems is subordinated by an authority relation to the system it belongs to. More exactly, in a hierarchic formal organization, each system consists of a “boss” and a set of subordinate subsystems. Each of the subsystems has a “boss” who is the immediate subordinate of the boss of the system. We shall want to consider systems in which the relations among subsystems are more complex than in the formal organizational hierarchy just described. We shall want to include systems in which there is no relation of subordination among subsystems. (In fact, even in human organizations, the formal hierarchy exists only on paper; the real flesh-and-blood organization has many inter-part relations other than the lines of formal authority.) For lack of a better term, I shall use hierarchy in the broader sense introduced in the previous paragraphs, to refer to all complex systems analyzable into successive sets of subsystems, and speak of “formal hierarchy” when I want to refer to the more specialized concept.
So it’s more or less the way we think of it, except he is drawing a distinction between the formal hierarchy we see in an org chart, where each subordinate has just one boss, and the informal hierarchy that actually exists inside organizations, where subordinates interact in a variety of ways. And he points out the many complex systems in which we find hierarchy, including biological systems: “The hierarchical structure of biological systems is a familiar fact. Taking the cell as the building block, we find cells organized into tissues, tissues into organs, organs into systems. Moving downward from the cell, well-defined subsystems — for example, nucleus, cell membrane, microsomes, mitochondria, and so on — have been identified in animal cells.”
The question is why all these systems came to be arranged this way and what we can learn from them. Here Simon turns to story:
Let me introduce the topic of evolution with a parable. There once were two watchmakers, named Hora and Tempus, who manufactured very fine watches. Both of them were highly regarded, and the phones in their workshops rang frequently — new customers were constantly calling them. However, Hora prospered, while Tempus became poorer and poorer and finally lost his shop. What was the reason?
The watches the men made consisted of about 1,000 parts each. Tempus had so constructed his that if he had one partly assembled and had to put it down — to answer the phone say— it immediately fell to pieces and had to be reassembled from the elements. The better the customers liked his watches, the more they phoned him, the more difficult it became for him to find enough uninterrupted time to finish a watch.
The watches that Hora made were no less complex than those of Tempus. But he had designed them so that he could put together subassemblies of about ten elements each. Ten of these subassemblies, again, could be put together into a larger subassembly; and a system of ten of the latter subassemblies constituted the whole watch. Hence, when Hora had to put down a partly assembled watch in order to answer the phone, he lost only a small part of his work, and he assembled his watches in only a fraction of the man-hours it took Tempus.
Whether the complexity emerges from the hierarchy or the hierarchy from the complexity, he illustrates clearly why we see this pattern all around us and articulates the value of the approach. It’s not just hierarchy, he goes on to explain, but also modularity (which he refers to as near-decomposability) that appears to be a fundamental property of complex systems. That is, each of the subsystems operates both independently and as part of the whole. As Simon puts it, “Intra-component linkages are generally stronger than intercomponent linkages” or, even more simply, “In a formal organization there will generally be more interaction, on the average, between two employees who are members of the same department than between two employees from different departments.”
Why is that? Well, for one, it’s an efficiency thing. Just as we see inside organizations, we want to use specialized resources in a specialized way. But beyond that, as Simon outlines in the parable, it’s also about resiliency: By relying on subsystems you have a defense against catastrophic failure when one piece of the whole breaks down. Just as Hora was able to quickly start building again when he put something down, any system made up of subsystems should be much more capable of dealing with changes in environment. It works in organisms, companies, and even empires, as Simon pointed out in The Sciences of the Artificial:
We have not exhausted the categories of complex systems to which the watchmaker argument can reasonably be applied. Philip assembled his Macedonian empire and gave it to his son, to be later combined with the Persian subassembly and others into Alexander’s greater system. On Alexander’s death his empire did not crumble to dust but fragmented into some of the major subsystems that had composed it.
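Simon’s arithmetic behind the parable is easy to check with a short simulation (the one-in-a-hundred interruption chance per part matches his own illustration; the rest is a simplified sketch): count the individual part-placements each watchmaker needs before a full watch survives uninterrupted.

```python
import random

def assemble(n_parts: int, p_interrupt: float, rng: random.Random) -> int:
    """Part-placements needed to finish one n_parts assembly, given that
    an interruption (the phone ringing) scraps the partial assembly."""
    ops, done = 0, 0
    while done < n_parts:
        ops += 1
        if rng.random() < p_interrupt:
            done = 0   # it falls to pieces; start this assembly over
        else:
            done += 1
    return ops

rng = random.Random(42)
p = 0.01  # one-in-a-hundred chance of interruption per part added

# Tempus: a single flat 1,000-part assembly.
tempus_ops = assemble(1000, p, rng)

# Hora: 100 ten-part subassemblies, combined into 10 larger ten-unit
# subassemblies, combined into 1 watch -> 111 small, stable assemblies.
hora_ops = sum(assemble(10, p, rng) for _ in range(111))

print(f"Tempus: {tempus_ops:,} placements; Hora: {hora_ops:,} placements")
```

The expected gap is dramatic: Tempus must survive 1,000 placements in a row (odds of roughly 1 in 23,000 per attempt), while each of Hora’s ten-part stages rarely fails — the resilience argument in miniature.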
Hopefully the application of this framework is pretty clear (and also instructive) in every day business life. Interestingly, Simon’s theories were the ultimate inspiration for a management fad we saw burn bright (and flame out) just a few years ago: Holacracy, the fluid organizational structure made up of self-organizing teams. Invented by Brian Robertson and made famous by Tony Hsieh and Zappos, the method (it’s a registered trademark) is based on ideas about “holons” from Hungarian author and journalist Arthur Koestler. In his 1967 book The Ghost in the Machine, Koestler repeats Simon’s story of Tempus and Hora and then goes on to theorize that holons (a name he coined “from the Greek holos—whole, with the suffix on (cf. neutron, proton) suggesting a particle or part”) are “meant to supply the missing link between atomism and holism, and to supplant the dualistic way of thinking in terms of ‘parts’ and ‘wholes,’ which is so deeply engrained in our mental habits, by a multi-levelled, stratified approach. A hierarchically-organized whole cannot be “reduced” to its elementary parts; but it can be ‘dissected’ into its constituent branches of holons, represented by the nodes of the tree-diagram, while the lines connecting the holons stand for channels of communication, control or transportation, as the case may be.”
Holacracy aside, there’s a ton of goodness in the parable and the architecture of modularity that it posits as critical. It’s not an accident that every company is built this way and as we think about those companies designing systems, it’s also not surprising many of those should also follow suit (a good lead-in for Conway’s Law, which is up next). Although I’m pretty out of words at this point, Simon also applies the same hierarchy/modularity concept to problem solving and there’s a pretty good argument to be made that the “latticework of models” Charlie Munger described in his 1994 USC Business School commencement address would fit the framework.
- Egidi, Massimo, and Luigi Marengo. “Cognition, institutions, near decomposability: rethinking Herbert Simon’s contribution.” (2002).
- Egidi, Massimo. “Organizational learning, problem solving and the division of labour.” Economics, bounded rationality and the cognitive revolution. Aldershot: Edward Elgar (1992): 148-73.
- Koestler, Arthur, and John R. Smythies. Beyond Reductionism, New Perspectives in the Life Sciences [Proceedings of] the Alpbach Symposium . (1972).
- Koestler, Arthur. “The ghost in the machine.” (1967).
- Radner, Roy. “Hierarchy: The economics of managing.” Journal of economic literature 30.3 (1992): 1382-1415.
- Simon, Herbert A. “Near decomposability and the speed of evolution.” Industrial and corporate change 11.3 (2002): 587-599.
- Simon, Herbert A. “The Architecture of Complexity.” Proceedings of the American Philosophical Society 106.6 (1962): 467-482.
- Simon, Herbert A. “The science of design: Creating the artificial.” Design Issues (1988): 67-82.
- Simon, Herbert A. The sciences of the artificial. MIT press, 1996.
As some of you may know I’ve been collecting mental models and working on a book for a little while now (it’s been going pretty slow since my daughter was born in January). This is more notes than chapter, but I still thought it was worth sharing. If you like this I’m happy to do more in the future (I wrote about the pace layers framework in my last post). Oh, and if you haven’t already, sign up to get my new blog posts by email, it’s the best way to keep up.
By all accounts Donald Rumsfeld was a man who didn’t suffer from a shortage of self-confidence. Whether it was Meet the Press, Errol Morris’s documentary The Unknown Known (it’s also worth reading the four-part series Morris wrote on Rumsfeld and the documentary for the New York Times), or a grilling from Jon Stewart on the Daily Show, he always seemed supremely satisfied with his own certainty. Which must have made the public response to what’s become his most famous comment all the more vexing. At a Department of Defense briefing in February 2002, then Secretary of Defense Rumsfeld was asked about evidence to support claims of Iraq helping to supply terrorist organizations with weapons of mass destruction. “Because,” the questioner explained, “there are reports that there is no evidence of a direct link between Baghdad and some of these terrorist organizations.”
Rumsfeld famously replied:
Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.
While it’s a mouthful and the context shouldn’t be lost, there’s a useful framework buried in Rumsfeld’s dodge. It looks something like this:
(Give the whole article from the Project Management Institute on how to apply known unknowns to project management a read.)
Rumsfeld went on to title his memoir Known and Unknown, and explained his perspective on its meaning early in the book:
At first glance, the logic may seem obscure. But behind the enigmatic language is a simple truth about knowledge: There are many things of which we are completely unaware—in fact, there are things of which we are so unaware, we don’t even know we are unaware of them. Known knowns are facts, rules, and laws that we know with certainty. We know, for example, that gravity is what makes an object fall to the ground. Known unknowns are gaps in our knowledge, but they are gaps that we know exist. We know, for example, that we don’t know the exact extent of Iran’s nuclear weapons program. If we ask the right questions we can potentially fill this gap in our knowledge, eventually making it a known known. The category of unknown unknowns is the most difficult to grasp. They are gaps in our knowledge, but gaps that we don’t know exist. Genuine surprises tend to arise out of this category. Nineteen hijackers using commercial airliners as guided missiles to incinerate three thousand men, women, and children was perhaps the most horrific single unknown unknown America has experienced.
Rumsfeld was obsessed with Pearl Harbor. In his memoir he quotes a foreword written by game theorist/nuclear strategist Thomas Schelling that introduced a book about the attack by Roberta Wohlstetter. Schelling wrote (emphasis mine):
If we think of the entire U.S. government and its far-flung military and diplomatic establishment, it is not true that we were caught napping at the time of Pearl Harbor. Rarely has a government been more expectant. We just expected wrong. And it was not our warning that was most at fault, but our strategic analysis. We were so busy thinking through some “obvious” Japanese moves that we neglected to hedge against the choice that they actually made.
And it was an “improbable” choice; had we escaped surprise, we might still have been mildly astonished. (Had we not provided the target, though, the attack would have been called off.) But it was not all that improbable. If Pearl Harbor was a long shot for the Japanese, so was war with the United States; assuming the decision on war, the attack hardly appears reckless. There is a tendency in our planning to confuse the unfamiliar with the improbable. The contingency we have not considered seriously looks strange; what looks strange is thought improbable; what is improbable need not be considered seriously.
In other words, unknown unknowns.
Outside of politics, the framework is a useful way to categorize risk/uncertainty in life or business. I got interested and dug around a bit to find the historical context for the idea, which led me in a few different directions.
Rumsfeld credits William R. Graham at NASA with first introducing him to the concept in the late ’90s, though it turns out to go back a lot further than that. The oldest reference I could find comes from a 1968 issue of Armed Forces Journal International, in an article titled “The ‘Known Unknowns’ And The ‘Unknown Unknowns'” about the procurement of new weapons. The article opens like this:
Cheyenne was the first major Army weapon to be developed under DoD’s sometimes controversial contract definition procedures. General Bunker put the process in perspective by pointing out that no procedural system can entirely eliminate “surprises” from happening during development of a complex weapons system, and that contract definition wasn’t expected to. “But,” he pointed out, “there are two kinds of technical problems: there are the known unknowns, and the unknown unknowns. Contract definition has helped eliminate the known unknowns. It cannot eliminate completely potential cost overruns, because these are due largely to the unknown unknowns.”
The term pops up throughout the ’70s in relation to military procurement. Somewhere in there, folks also started using the term “unk-unks” to refer to the most dangerous of the four boxes. Here it is in context, from a 1982 New Yorker piece on the airplane industry:
The excitement of this business lies in the sweep of the uncertainties. Matters as basic as the cost of the product — the airplane — and its break-even point are obscure because so much else is uncertain or unclear. The fragility of the airline industry does, of course, create uncertainties about the size and the reliability of the market for a new airplane or a new variant of an existing airplane. Then, there is a wide range of unknowns, for which an arbitrarily fixed amount of money must be set aside in the development budget. Some of these are so-called known unknowns; others are thought of as unknown unknowns and are called “unk-unks.” The assumption is that normal improvements in an airplane program or an engine program will create problems of a familiar kind that add to the costs; these are the known unknowns. The term “unk-unks” is used to cover less predictable contingencies; the assumption is that any new airplane or engine intended to advance the state of the art will harbor surprises in the form of problems that are wholly unforeseen, and perhaps even novel, and these must be taken account of in the budget.
Some are even trying to use it as a kind of code word for breakthrough innovations.
Finally, although it’s not clear they’re connected, there’s a very similar framework from psychologists Joseph Luft and Harrington Ingham from 1955 called the Johari Window. The model attempts to visualize the effects of our knowledge of self and how that works in relation to the knowledge of others:
Quadrant I, the area of free activity, refers to behavior and motivation known to self and known to others.
Quadrant II, the blind area, where others can see things in ourselves of which we are unaware.
Quadrant III, the avoided or hidden area, represents things we know but do not reveal to others (e.g., a hidden agenda or matters about which we have sensitive feelings).
Quadrant IV, the area of unknown activity, where neither the individual nor others are aware of certain behaviors or motives. Yet we can assume their existence, because eventually some of these things become known, and we then realize that these unknown behaviors and motives were influencing the relationship all along.
Despite the context for the original quote, the idea is a useful way to think about strategy and understand the various risks you might face.
- Andrews, Walter. (1968). The “Known Unknowns” And The “Unknown Unknowns”. Armed Forces Journal, p. 14-15.
- BBC NEWS | Magazine | What we know about ‘unknown unknowns’. (2018). News.bbc.co.uk. Retrieved 28 September 2018, from http://news.bbc.co.uk/2/hi/uk_news/magazine/7121136.stm
- Defense.gov Transcript: DoD News Briefing – Secretary Rumsfeld and Gen. Myers . (2002). Archive.defense.gov. Retrieved 28 September 2018, from http://archive.defense.gov/Transcripts/Transcript.aspx?TranscriptID=2636
- Graham, D. (2014). Rumsfeld’s Knowns and Unknowns: The Intellectual History of a Quip. The Atlantic. Retrieved 28 September 2018, from https://www.theatlantic.com/politics/archive/2014/03/rumsfelds-knowns-and-unknowns-the-intellectual-history-of-a-quip/359719/
- Hevesi, D. (2007). Roberta Wohlstetter, 94, Military Policy Analyst, Dies. Nytimes.com. Retrieved 28 September 2018, from https://www.nytimes.com/2007/01/11/obituaries/11wohlstetter.html
- Kim, S. D. (2012). Characterizing unknown unknowns. Paper presented at PMI® Global Congress 2012—North America, Vancouver, British Columbia, Canada. Newtown Square, PA: Project Management Institute.
- Kirkpatrick, L.B. Book review of Pearl Harbor: Warning and Decision by Roberta Wohlstetter — Central Intelligence Agency. (1993). Cia.gov. Retrieved 28 September 2018, from https://www.cia.gov/library/center-for-the-study-of-intelligence/kent-csi/vol7no3/html/v07i3a13p_0001.htm
- Grimes, W. (2016). Thomas C. Schelling, Master Theorist of Nuclear Strategy, Dies at 95. Nytimes.com. Retrieved 28 September 2018, from https://www.nytimes.com/2016/12/13/business/economy/thomas-schelling-dead-nobel-laureate.html
- Luft, Joseph, and Harry Ingham. “The johari window.” Human Relations Training News 5.1 (1961): 6-7.
- Morris, E. (2014). The Certainty of Donald Rumsfeld (Part 1). Opinionator. Retrieved 28 September 2018, from https://opinionator.blogs.nytimes.com/2014/03/25/the-certainty-of-donald-rumsfeld-part-1/
- Morris, E. (2014). The Certainty of Donald Rumsfeld (Part 2). Opinionator. Retrieved 28 September 2018, from https://opinionator.blogs.nytimes.com/2014/03/26/the-certainty-of-donald-rumsfeld-part-2/
- Morris, E. (2014). The Certainty of Donald Rumsfeld (Part 3). Opinionator. Retrieved 28 September 2018, from https://opinionator.blogs.nytimes.com/2014/03/27/the-certainty-of-donald-rumsfeld-part-3/
- Morris, E. (2014). The Certainty of Donald Rumsfeld (Part 4). Opinionator. Retrieved 28 September 2018, from https://opinionator.blogs.nytimes.com/2014/03/28/the-certainty-of-donald-rumsfeld-part-4/
- Mullins, J. W. (2007). Discovering “Unk-Unks”. MIT Sloan Management Review, 48(4), 17.
- Newhouse, J. (1982). A Sporty Game: I. Betting the Company. The New Yorker, 48-105.
- Ramasesh, R. V., & Browning, T. R. (2014). A conceptual framework for tackling knowable unknown unknowns in project management. Journal of Operations Management, 32(4), 190-204.
- Rumsfeld, D. (2011). Known and unknown: a memoir. Penguin.
- Schelling, Thomas C. “Meteors, Mischief, and War.” Bulletin of the Atomic Scientists 16.7 (1960): 292-300.
- Steyn, M. (2003). Rummy speaks the truth, not gobbledygook. Telegraph.co.uk. Retrieved 28 September 2018, from https://www.telegraph.co.uk/comment/personal-view/3599959/Rummy-speaks-the-truth-not-gobbledygook.html
- Wilson, George C. (1969). The Washington Post, p. A1.
- Wohlstetter, Roberta. Pearl Harbor: warning and decision. Stanford University Press, 1962.
- Wright, Robert A. (1970). Lockheed’s Illness Is Contagious. New York Times.
As some of you may know, I’ve been collecting mental models and working on a book for a little while now (it’s been going pretty slowly since my daughter was born in January). This is more notes than chapter, but I still thought it was worth sharing. If you like this I’m happy to do more in the future. Oh, and if you haven’t already, sign up to get my new blog posts by email; it’s the best way to keep up.
This one comes from Stewart Brand and is a way to explain the different speeds at which various layers of society move. The outer layer, fashion, is the quickest, while the innermost layer, nature, moves most slowly. The layers interact with one another as inventions and ideas get digested. As Brand explains:
The job of fashion and art is to be froth—quick, irrelevant, engaging, self-preoccupied, and cruel. Try this! No, no, try this! It is culture cut free to experiment as creatively and irresponsibly as the society can bear. From all that variety comes driving energy for commerce (the annual model change in automobiles) and the occasional good idea or practice that sifts down to improve deeper levels, such as governance becoming responsive to opinion polls, or culture gradually accepting “multiculturalism” as structure instead of just entertainment.
Brand’s inspiration for the framework came from an architect named Frank Duffy who encouraged builders not to think of a building as a single entity, but as a set of layers operating at different timescales. Duffy included four timescales: Shell, services, scenery, and sets (represented below).
Brand picked up on Duffy’s work and adapted it to a kind of proto-pace layer framework in his 1994 book How Buildings Learn: What Happens After They’re Built, expanding it to six S’s and including this handy diagram:
Brand eventually adapted that into the pace layer framework at the top in his 1999 book The Clock of the Long Now: Time and Responsibility (the chapter on pace layers was edited and republished in MIT’s Journal of Design and Science). If you want more, here’s a great writeup from Eric Nehrlich on a conversation about pace layers between Brand and Paul Saffo. Nehrlich calls out this slide from the presentation, which is quite helpful for understanding how the layers work:
(The whole talk is posted at the Long Now Blog if you’re so inclined.)
The framework has been picked up and adapted by many, but one of the more notable versions for me comes from Gartner as a way to think about your enterprise software strategy. They break enterprise software into three “layers”:
- Systems of Record — Established packaged applications or legacy homegrown systems that support core transaction processing and manage the organization’s critical master data. The rate of change is low, because the processes are well-established and common to most organizations, and often are subject to regulatory requirements.
- Systems of Differentiation — Applications that enable unique company processes or industry-specific capabilities. They have a medium life cycle (one to three years), but need to be reconfigured frequently to accommodate changing business practices or customer requirements.
- Systems of Innovation — New applications that are built on an ad hoc basis to address new business requirements or opportunities. These are typically short life cycle projects (zero to 12 months) using departmental or outside resources and consumer-grade technologies.
Each layer has its own pace of change, lifetime, planning horizon, governance model, and many other unique differentiators:
All in all, the overarching shearing/pace layers framework (many layers which interact with each other and operate at different speeds) is something I’ve found useful in various spheres in addition to the society, architecture, and enterprise software examples above. Inside a company, for instance, you conduct various activities that exist in a similar set of layers ranging from long-term planning and brand building to quarterly goals or roadmaps to two week sprints to weekly exec meetings and then the daily work. It’s a useful way to spot where you’re overloaded with meetings (too many weekly check-ins, not enough monthly lookbacks) or understand where you’re falling down (not doing a good enough job translating the medium term to the long term).
- Brand, Stewart. The clock of the long now: Time and responsibility. Basic Books, 2008.
- Brand, Stewart. How buildings learn: What happens after they’re built. Penguin, 1995.
- Brand, S. (2018). Pace Layering: How Complex Systems Learn and Keep Learning. Journal of Design and Science. https://doi.org/10.21428/7f2e5f08
- Duffy, Francis. “Measuring building performance.” Facilities 8.5 (1990): 17-20.
- Gartner.com. (2012). Gartner Says Adopting a Pace-Layered Application Strategy Can Accelerate Innovation. [online] Available at: https://www.gartner.com/newsroom/id/1923014 [Accessed 28 Sep. 2018].
- Mesaglio, Mary & Matthew Hotle. “Pace-Layered Application Strategy and IT Organizational Design: How to Structure the Application Team for Success.” Gartner, 2016.
- Nehrlich, E. (2015). Stewart Brand and Paul Saffo at the Interval. [online] Nehrlich.com. Available at: http://www.nehrlich.com/blog/2015/02/11/stewart-brand-and-paul-saffo-at-the-interval/ [Accessed 28 Sep. 2018].
Been a while since I got one of these Remainders posts out. For the uninitiated, it’s a chance to share some of what I’ve been reading/seeing. You can find past versions filed under Remainders. Also, if you want to actually find out when things are published here (on the rare occasion they are), please sign up for the email.
Alright, let’s start with books. Since last time I’ve read:
- China’s Economy: What Everyone Needs to Know (Arthur Kroeber): Long and probably way more detail than I needed, but offered an interesting glimpse into how China became the country it is. Definitely start with this podcast before you decide to read the book.
- Free-Range Chickens (Simon Rich): Simon Rich is funny and I needed a break after the China book. This is an hour or two of reading. You can also just start by checking out his humor writing in the New Yorker (go with Sell Out first).
- Jennifer Government & Lexicon (Max Barry): Two sci-fi(ish) novels by Max Barry. Jennifer Government is about warring loyalty programs and Lexicon is about mind-controlling words. The latter is better. Fun and easy.
- Men Explain Things to Me (Rebecca Solnit): Given everything that’s happened with #MeToo over the last year, it’s fascinating to go back and read this as it foretells a lot of what we’ve seen. Also, Rebecca Solnit has become a must-read for me and I’m looking forward to digging through more of her work.
- Born Standing Up (Steve Martin): Steve Martin talking about his life as a comedian (I did the audiobook for this one, which he narrates).
- E=mc2: A Biography of the World’s Most Famous Equation (David Bodanis): This was probably my favorite of the bunch. Sounds dry, but it’s a fascinating account of an equation I didn’t really understand. Takes you through in a step-by-step manner (it literally starts with “e” and then “=” and so on).
- Thinking in Systems (Donella Meadows): I’ve read most of this once before, but I thought I could use a refresher. This is a foundational text in systems thinking and is actually easier to read than it first seems.
- Bad Blood (John Carreyrou): The story of Theranos. Couldn’t put this down once I started.
Now onto the links.
Speaking of Donella Meadows and systems thinking, you can find lots of her work at the Academy for Systems Change site. Check out her writing on leverage points especially. Also, here’s her iceberg model:
This New Yorker story by Patrick Radden Keefe on a Dutch woman who testified against her mobster brother is amazing. Her book, which was a bestseller in the Netherlands, just came out in English this week (I thought it was coming out later this month … guess I know what I’m reading next).
Despite its $120+ billion market cap, Adobe is mentioned shockingly infrequently amongst the top software companies in the world. This piece by Blair Reeves does a lot to tell the story of how the company has achieved what it has.
For a few years now I’ve had a personal policy of giving something to people asking for money on the street if I’ve got it. This article makes a good case that it’s worth doing:
Much more research exists on giving cash to the poor in developing countries. Jeremy Shapiro examines the effects of giving money to people in need through his work as a co-founder of GiveDirectly and as a researcher with the Busara Center for Behavioral Economics. At GiveDirectly—a nonprofit that, as its name suggests, offers cash with no strings attached—he worked on a study in Kenya; between 2011 and 2013, the researchers determined, the program improved people’s food security, allowed them to buy other crucial goods (from soap to school supplies), and was beneficial to their psychological well being. Counter to my childhood lesson, recipients didn’t spend any more than they had in the past on so-called temptation goods like alcohol and tobacco. “The takeaway is surprisingly unsurprising—when you give money to poor people good things happen,” Shapiro said. “People eat more, they invest in businesses; you see people reporting being happier and less stressed out.”
Ray Lewis was inducted into the NFL Hall of Fame this week. Eighteen years ago he had some involvement in a murder. How he’s avoided talking about it and come to be revered is a story in and of itself.
- The origin of inches and centimeters from E=mc2: “The conversion factors seem arbitrary, but that’s because they link measurement systems that evolved separately. Inches, for example, began in medieval England, and were based on the size of the human thumb. Thumbs are excellent portable measuring tools, since even the poorest individuals could count on regularly carrying them along to market. Centimeters, however, were popularized centuries later, during the French Revolution, and are defined as one billionth of the distance from the equator to the North Pole, passing by Paris. It’s no wonder the two systems don’t fit together smoothly.”
- Surfers in cold water can develop surfer’s ear, a condition in which additional bone grows in the ear canal, blocking hearing and making them more susceptible to ear infections.
- The cells that helped find a polio vaccine (amongst many other things) were taken from an African-American woman in 1951, and her family is only now getting some control over the widespread use of their genomic data.
- A phantom kangaroo is a reported sighting of a kangaroo or wallaby in a place where none live.
This is a good visualization of roster turnover in this year’s NBA offseason:
Really amazing piece by Guardian writer Hannah Jane Parkinson on her struggle with bipolar disorder.
This story about a fungus that drugs host insects with psilocybin has everything. This bit is my favorite:
And at some point during this work, it dawned on Kasson that he was working with illicit substances. Psilocybin, in particular, is a Schedule I drug, and researchers who study it need a permit from the Drug Enforcement Administration. “I thought: Oh, crap,” he says. “Then I thought: OH CRAP. The DEA is going to come in here, tase me, and confiscate my flying saltshakers.”
The article that eventually led to Elon Musk calling one of the Thai cave rescuers a pedo is worth reading. It makes a case for specialization that’s interesting:
The Silicon Valley model for doing things is a mix of can-do optimism, a faith that expertise in one domain can be transferred seamlessly to another and a preference for rapid, flashy, high-profile action. But what got the kids and their coach out of the cave was a different model: a slower, more methodical, more narrowly specialized approach to problems, one that has turned many risky enterprises into safe endeavors — commercial airline travel, for example, or rock climbing, both of which have extensive protocols and safety procedures that have taken years to develop.
I love these pieces by the artist 1010. Here’s one:
Last, but not least, who doesn’t want to read a profile of sports-mouth Stephen A. Smith? Also, if you haven’t already, go and read The Awl on Stephen A., which includes the canonical Stephen A. Smith parody tweet (for those who haven’t heard him before, he has an uncanny ability to work himself into a frenzy about anything and a willingness to always take the other side):
Thanks for reading. Please let me know if I missed anything, feel free to share with others, and subscribe to the email if you haven’t already. Thanks!
Source: Your Politics Are Indicative Of Which Sports You Like, Business Insider
On July 1, 2016 DeMar DeRozan signed a 5-year contract with the NBA’s Toronto Raptors for somewhere in the range of $145 million. Last week he was traded for Kawhi Leonard, the NBA’s best perimeter defender and one of the five best players in the league when he’s healthy (which he wasn’t all of last year).
DeRozan was pretty upset about it. He may or may not have been told he wouldn’t be traded and, either way, it seems clear he wanted to stay in Toronto. They drafted him when he was 20 years old and he had hopes of retiring with the team, ideally becoming the greatest Raptor in franchise history (a feat made easier by the fact that nearly every great Raptor has escaped Toronto at the first opportunity).
So far this is all just NBA news you either don’t care about or already know. So why write about it? Because I can’t deal with the “this guy shouldn’t complain, he’s making almost $30 million a year” conversation. Of course there were variations, but the gist is that because you’re an athlete and you (deservedly) get paid lots of money, you a) shouldn’t have, or b) shouldn’t voice, regular human feelings (this is, in case you haven’t noticed, a stance also shared by our president).
Since I read Alan Jacobs’s book How to Think, I’ve had his answer to this question (or a version of it, at least) rattling around in my head:
But that’s because Gladwell [in his Revisionist History podcast episode asking why Wilt Chamberlain didn’t shoot underhand free throws], like many of us, seems to have unwittingly internalized the idea that when professional athletes do the thing they’re paid to do, they’re not acting according to the workaday necessity (like the rest of us) but rather are expressing with grace and energy their inmost competitive instincts, and doing so in a way that gives them delight. We need to believe that because much of our delight in watching them derives from our belief in their delight. (In much the same way we enjoy watching the flight of birds, especially big birds of prey, associating such flying with freedom even though birds actually fly from necessity: they need to eat. And yet we have no interest in watching members of our own species drive to McDonald’s.)
That’s nearly perfectly expressed. We need to believe in their delight because of our delight, and we can’t stand to think we care more about winning or losing than they do. Or, as my friend Jeff at DaBearsBlog put it, “fans think it should be [an] honor to play pro sports because they all wish they could.” The thing is, it’s still just a job. If we zoom out for a minute and replace athlete with employee and professional sports league with desk job, we start to see things more clearly. If you hold a senior role at a company, there are many in the organization who feel the same way about you as you feel about athletes: that you have it easy and that if they could just be in your position everything would be right in the world. Of course you don’t, and it wouldn’t.
What’s more, while some of us may have found a way to practice our passion at work (I feel pretty lucky in that regard much of the time), it’s only natural to have moments where we don’t feel like doing the things required of us or can’t find the excitement we know is there somewhere. It’s a normal part of doing the same thing every day, which is what it means to be a professional at anything. (An interesting analog is the surprising number of startup founders I’ve met who are completely clinical about the industry in which they start their companies. They don’t need to care about the space as long as it offers the right market conditions.)
Getting back to the start, there are two main things people say about professional athletes that get under my skin: They’re rich so they should get over it and they should have known when they signed a contract. Let’s take these one at a time.
They’re rich so they should get over it. While it’s true they make an unbelievable amount of money, that can create a whole new set of things to deal with that many of us can’t imagine. There’s plenty of research suggesting that after a certain point money stops making you happier. What’s more, they surely will get over it eventually, but sometimes that takes time (try to remember how effective it was when someone told you “you’d get over it” after what felt like a momentous breakup). DeMarcus Cousins, a superstar NBA player who has earned around $80 million in his career but was forced to take a low-money, short-term contract this season after getting hurt, put this perfectly recently. When asked whether he was nervous through the free agency process, Cousins answered, “Have you ever been unemployed? Were you nervous then? Alright, that answers the question.”
They should have known when they signed a contract. Here, again, it’s easy to turn back to our regular job experience. Most of us in America are at-will employees, meaning we can be fired at any time for essentially any reason. Despite the fact that we all sign an at-will employment agreement, people are frequently shocked when they’re fired, whether there is good reason or not. Do we wonder why they’re so surprised? Of course not. Sure, DeRozan wasn’t fired, but he can still be surprised and sad and frustrated that it happened.
Last, but not least, there’s a much bigger story here about professional sports, money, power, and race. The NBA is a very progressive league, but even there you can’t get away from fan loyalty sitting with teams instead of players. And I’m not arguing it should; that’s the fun of watching sports: you live and die with your squad (there’s a famous Seinfeld joke about rooting for laundry). However, we can enjoy the game and our teams without questioning the humanity of the athletes who make the whole thing possible. The NBA has made huge strides in becoming a player-centric league, but fan conversations are still lagging behind.