I’ve been experimenting with a daily email with Colin Nagy called Why Is This Interesting? This is from today’s edition. If you’re interested in checking it out, drop me a line (I’ll post something here when we launch it publicly).
Because this is something we’ve been worried about forever (literally). In Phaedrus, Plato worried about roughly the same thing as it related to writing: “If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder.”
The reality is that all technology affects culture in expected and unexpected ways. “We shape our tools and thereafter our tools shape us” is one of my favorite aphorisms (misattributed to McLuhan). The irony, of course, is that the complaints in this article are perfectly expected. We come to rely on automation because it’s mostly better. In fact, the strangest part of the whole piece is the way the evidence of backup camera safety is presented. “Between 2008 and 2011,” the author writes, “the percentage of new cars sold with backup cameras doubled, but the backup fatality rate declined by less than a third while backup injuries dropped only 8 percent.” I think the implication is that those numbers aren’t all that impressive, but a 20 or 30 percent drop in backup fatalities seems pretty excellent to me.
The Times piece is effectively an exploration of McLuhan’s four effects. The backup camera enhances our senses by giving us eyes in the back of our heads, obsolescing the car’s mirrors, and retrieving a time when cars were smaller, but, as the article points out, when pushed to its extreme it reverses our own role as driver, giving control entirely over to the tech. While the points are valid, we should be less surprised that this keeps happening and try to keep things in perspective.
If you’ve spent any time working in the age of e-mail (never mind Slack) you’ve encountered this challenge. One of the things I shared with everyone who started at Percolate for a long time was this post from Y Combinator founder Paul Graham about the schedule a “maker” keeps vs. a manager. The point is that the manager has their days broken into tiny bits (30-minute or one-hour meetings), while the maker needs long uninterrupted focus time to do their work. When the manager forces the maker into their schedule, they are surprised that the work can’t get done.
One way to think about this divide is as something computer scientists call exploration vs. exploitation. The manager is an explorer, looking at information across many different areas, while the maker is an exploiter, using that information to go deep in just one. It’s a little like the story from the Greek poet Archilochus about the fox and the hedgehog: “the fox knows many things, but the hedgehog knows one big thing” (if you’re not familiar with the parable, here’s a good primer from NPR).
As Brian Christian points out in his book Algorithms to Live By: The Computer Science of Human Decisions, there’s actually a ton of interesting stuff that lives in this tension. Do you go to the restaurant you like or try the new one that just opened? Should you load up an old favorite on Spotify or see what they’ve chosen for you this week? The answer, as we have all figured out and computer science has proven, is it depends. To figure out the best approach you’ve also got to know the time limit. In simplified terms, if you have lots of time left, exploration makes sense; if you’re approaching a deadline, exploitation is optimal.
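That time-limit intuition is easy to sketch as a toy multi-armed bandit: explore with a probability that shrinks as the deadline approaches, and exploit the best-known option otherwise. (This is just an illustration of the general idea, not one of the specific algorithms Christian covers; the function names and payoff numbers below are all made up.)

```python
import random

def choose(estimates, steps_left, horizon):
    """Explore with probability proportional to the time remaining;
    otherwise exploit the best option found so far."""
    explore_prob = steps_left / horizon  # lots of time left -> explore more
    if random.random() < explore_prob:
        return random.randrange(len(estimates))  # try anything
    return max(range(len(estimates)), key=lambda i: estimates[i])

def run(true_means, horizon, seed=0):
    """Play `horizon` rounds against options with the given true payoffs."""
    random.seed(seed)
    estimates = [0.0] * len(true_means)  # running average payoff per option
    counts = [0] * len(true_means)
    total = 0.0
    for t in range(horizon):
        arm = choose(estimates, horizon - t, horizon)
        reward = random.gauss(true_means[arm], 1.0)  # noisy payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total, estimates

# Three "restaurants" with true average payoffs 0.2, 0.5, and 0.9.
total, est = run([0.2, 0.5, 0.9], horizon=1000)
```

Early on, when `steps_left / horizon` is near 1, the sketch samples everything; by the final rounds it almost always picks the option with the best running estimate, which is the “exploit near the deadline” half of the argument.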
If you’re not a regular reader of my site, please do me a favor and subscribe to the email (it comes infrequently — whenever I add a post here). I write about business, technology, history, mental models (a lot of those), and all the random interesting stuff I’m reading about. It’s a hodge-podge and I hope you’ll enjoy it.
At the beginning of last year I decided I was going to spend more time reading books in 2018. I set myself a goal of 30 on Goodreads and blew past that by year end. While it felt good and is something I’m looking to reproduce in 2019, it left me wondering whether I’d actually read enough articles to put together my favorite longform list.
As I was wondering this, a series of plagues befell my house and knocked me off my feet (and computer) for just about two full weeks (it was not fun). When I was finally feeling better this week I thought I’d at least see how many articles I favorited in Instapaper and try to figure out whether a list was feasible. Sixty-something links later I realized I’d read enough to put it together, so here it is. Some usual caveats apply:
This is a list of my favorites. It’s not meant to be conclusive and I know I missed lots of great stuff (especially this year with my focus on books).
You’ll notice a concentration of articles from The New Yorker and New York Times Magazine. That’s because I get both of those delivered. Again, this isn’t meant to be the definitive list of best articles of the year (if you want that I’d head over to longform.org).
The way I put this together is simple: First I list out all the articles I favorited in Instapaper. You can find those at the bottom of this post and also follow them (and favorites from YouTube) at my @heyitsinstafavs automated Twitter account. After I list them all out, I look over the list again and pull out the ones I specifically remember, with the thought that those were the ones that had the biggest impact on me.
Everything is categorized and my picks are in bold throughout. The full list of favorites is at the bottom.
Finally, if you get through this whole list and want more, here are my past versions:
If you’ve ever read one of these lists before you’ll know that I love David Grann. Basically anything he writes automatically makes the cut. In 2012 that was his story about an American who fought in the Cuban revolution titled “The Yankee Comandante” and in 2011 it was “A Murder Foretold”. After a few years off to publish his excellent book Killers of the Flower Moon, he was back in the New Yorker in February with “White Darkness”, an amazing story of a solitary journey across Antarctica. (It’s also out as a book now, though I think it’s just the article with some more pictures.) What’s amazing about Grann’s writing obviously starts with the stories he finds, but as you read you realize it’s more about the characters that make up those stories. Somehow he always seems to discover people who both live amazing adventures and are also poets, or something close to it. (As an aside, if you haven’t read Grann’s book of essays The Devil and Sherlock Holmes do yourself a favor and get on that or at least pick a few from his New Yorker profile page.)
I’ve got three entrants for this one and they’re all pretty different.
The first is all about parenting. I would have sworn “The diabolical genius of the baby advice industry” came out last year, but apparently it was from the beginning of this one. It makes sense I wouldn’t have much sense of time, though, since it came out right around when my second daughter was born. The article plainly spells out how big of an industry parenting advice is and how much its foundation is built on bullshit. The bit I remember best is this aside about how statistically insignificant most baby advice really is:
“(Parenting experts who are childless, such as the “queen of routine” Gina Ford, author of the unavoidable Contented Little Baby series, attract a lot of sharp words for it, but this seems unfair. Where Ford has direct experience of parenting none of the 130 million babies born on Earth each year, most gurus only have direct experience of parenting two or three babies, which isn’t much better as a sample size. The assumption that whatever worked for you will probably work for everyone, which is endemic in the self-help world, reaches an extreme in the pages of baby books.)”
The other two are a lot more serious. First is an amazing piece from Guardian writer Hannah Jane Parkinson about her own struggle with mental illness (and specifically bipolar disorder). Parkinson is a very good writer and there’s something about reading an accomplished journalist using her work to explain why her work is so hard that’s particularly impactful. The article’s title “It’s nothing like a broken leg” comes from this passage:
In the last few years I have lost count of the times mental illness has been compared to a broken leg. Mental illness is nothing like a broken leg.
In fairness, I have never broken my leg. Maybe having a broken leg does cause you to lash out at friends, undergo a sudden, terrifying shift in politics and personality, or lead to time slipping away like a Dali clock. Maybe a broken leg makes you doubt what you see in the mirror, or makes you high enough to mistake car bonnets for stepping stones (difficult, with a broken leg) and a thousand other things.
Finally, my pick for this category comes from a New York Times Magazine story titled “Why America’s Black Mothers and Babies Are in a Life-or-Death Crisis.” The article shocked and saddened me as it spelled out just how inadequate the care pregnant black mothers receive. Education and income, as the article explains, don’t explain it. “In fact, a black woman with an advanced degree is more likely to lose her baby than a white woman with less than an eighth-grade education.” Most shocking to me was this story about how the institutional racism embedded in the system manifests itself in obviously bad science amongst doctors:
In 2016, a study by researchers at the University of Virginia examined why African-American patients receive inadequate treatment for pain not only compared with white patients but also relative to World Health Organization guidelines. The study found that white medical students and residents often believed incorrect and sometimes “fantastical” biological fallacies about racial differences in patients. For example, many thought, falsely, that blacks have less-sensitive nerve endings than whites, that black people’s blood coagulates more quickly and that black skin is thicker than white. For these assumptions, researchers blamed not individual prejudice but deeply ingrained unconscious stereotypes about people of color, as well as physicians’ difficulty in empathizing with patients whose experiences differ from their own. In specific research regarding childbirth, the Listening to Mothers Survey III found that one in five black and Hispanic women reported poor treatment from hospital staff because of race, ethnicity, cultural background or language, compared with 8 percent of white mothers.
Go read the whole thing and when you’re done please donate to the Birthmark Doula Collective who are trying to help change the care black mothers receive.
As you may or may not know I’m a big NBA fan. The league is as good as it’s ever been and has been fundamentally transforming the way the game is played for the last ten years or so. I’ve spent a fair amount of time trying to explain this to friends and now, thanks to Kevin Arnovitz and Kevin Pelton, I can just send them the article “How the NBA got its groove back.” In the piece, the Kevins spell out how much faster the league got over the last decade, starting with the D’Antoni/Nash Suns. In fact, as the article explains, those 2004-05 Suns wouldn’t even be considered a fast team by today’s standards. “Back then, Phoenix’s 98.6 pace was more than a possession per game faster than that of any other NBA team. In 2017-18, the average team had 99.6 possessions per 48 minutes, and the Suns’ 2004-05 pace would have ranked 19th in the league.” It’s a fun time to be an NBA fan. (Honorable mention sports story has to go to The Ringer’s insane “The Curious Case of Bryan Colangelo and the Secret Twitter Account”.)
Society isn’t a perfect title for this category, but it gets at it. I’ve got two pieces here. The first comes from psychology professor Alison Gopnik (who shows up in the podcast list as well). Although I don’t think Pinker is terribly wrong, her review of his Enlightenment Now offered a really interesting rebuttal to the macro “everything is getting better” story. Gopnik opens the piece by stating her credentials: She’s a scientist, professor, and “card-carrying true believer in liberal Enlightenment values.” But she doesn’t think we can, or should, push aside local needs and values for the global:
The weakness of the book is that it doesn’t seriously consider the second part of the conversation—the human values that the young woman from the small town talks about. Our local, particular connections to just one specific family, community, place, or tradition can seem irrational. Why stay in one town instead of chasing better opportunities? Why feel compelled to sacrifice your own well-being to care for your profoundly disabled child or fragile, dying grandparent, when you would never do the same for a stranger? And yet, psychologically and philosophically, those attachments are as central to human life as the individualist, rationalist, universalist values of classic Enlightenment utilitarianism. If the case for reason, science, humanism, and progress is really going to be convincing—if it’s going to amount to more than preaching to the choir—it will have to speak to a wider spectrum of listeners, a more inclusive conception of flourishing, a broader palette of values.
In some ways there’s a similar theme in my pick for this category. “Pay the Homeless” is all about the local realities. It’s an argument against the idea that giving money to someone asking for it is somehow not good for the system as a whole or that individual specifically:
Yet on the whole, all the evidence, from the statistical to the spiritual, points in one direction: if you can give, you should give. It won’t solve the problems of mass homelessness or impoverishment. But it will improve someone’s life ever so slightly and briefly. “People are in dire straits and raising money for bare necessities,” Jerry Jones, policy director at the Inner City Law Center, told me. They might be trying to collect enough to pay for a room for the night. They might need bus fare or gas to get to an appointment.
There aren’t that many pieces that stood out in the world of politics for me this year. I suspect that’s because I actively avoided them (as opposed to last year). With that said, one writer and two pieces stood out for me this year.
My pick for politics goes to Robert Draper’s New York Times Magazine profile of House leader Nancy Pelosi. We’re about to hear A LOT about Pelosi as she battles Trump and the Republicans over the next two years and, for me at least, my knowledge and understanding of her was surface-level at best. The profile is pretty unvarnished and paints Pelosi as a pure politician who knows how to operate as well or better than anyone out there. I suspect we’re going to see a lot of stereotypical framing of Pelosi because she’s a woman and this felt like a good foundation to build understanding. I thought this bit about how much of her perception, even amongst Democrats I’d argue, has been shaped by the Republicans was particularly interesting:
Still, Pelosi’s foremost liability is the effectiveness of the attacks against her. In 2010, Republicans spent $65 million attacking Pelosi in ads; the Republican National Committee hung a banner from its headquarters that read FIRE PELOSI. The attacks have often borne more than a tinge of sexism; in 2012, when Pelosi, as minority leader, wielded less power than the Senate’s Democratic majority leader, Harry Reid, Republicans’ negative television ads were seven times as likely to mention Pelosi as Reid, according to the Wesleyan Media Project, which tracks political advertising. The 2010 onslaught took its toll on Pelosi’s public standing — her favorable rating dropped into the 20s — but otherwise did not faze her. She made clear to her caucus members that they should do whatever it took to win, even if it meant publicly distancing themselves from her. “I don’t know anyone in the world with thicker skin, or anyone about whom more callous things have been said, and she just truly doesn’t care,” a former Pelosi staff member told me. “There’s a small constituency she cares about: her members.”
I listened to fewer podcasts this year thanks to the introduction of audiobooks into my media diet. With that said, there were a few that stood out. Rather than specific episodes, though, this year my favorites felt more like shows in their entirety. This might be because I explored fewer new podcasts this year or just because there were a few exceptional short series that came out in 2018.
First off is Reply All. As far as week-after-week quality goes, it’s hard to beat these guys. Two (really three) episodes in particular stood out for me:
“Invcel”: “How a shy, queer Canadian woman accidentally invented one of the internet’s most toxic male communities.”
“The Crime Machine, Part 1” & “The Crime Machine, Part 2”: “New York City cops are in a fight against their own police department. They say it’s under the control of a broken computer system that punishes cops who refuse to engage in racist, corrupt policing. The story of their fight, and the story of the grouchy idealist who originally built the machine they’re fighting.”
Next up is American Fiasco, Roger Bennett’s ten-part series on the disaster that was America’s 1998 World Cup campaign.
Finally, and my real pick, is Rukmini Callimachi’s ten-part series Caliphate. The podcast follows Callimachi as she reports on the Islamic State and the fall of Mosul. It’s an extraordinary piece of reporting with the kinds of twists and turns that we’ve come to expect in great podcasts these days. Again, I can’t say enough about Callimachi’s work this year between Caliphate and The Isis Files.
Finally, because I can’t resist, I read a bunch of longform that was amazing and didn’t come out this year. Although it doesn’t officially fit my rules, I’m going to include a few picks as a way to wrap things up.
I think I read “Promethea Unbound” just after I put together last year’s list; otherwise I have to assume it would have made the cut. It’s the extraordinary (sorry, I’m running out of superlatives) story of a child genius and her mom and how they got through life together.
Finally, my pick for “Not This Year” maybe shouldn’t officially even count as longform, but I’m making the rules and I say short stories are allowed. “The Ones Who Walk Away from Omelas” is a short story by Ursula Le Guin that feels as appropriate today (if not more) than it must have when it was published in 1973. It’s about the costs we’re willing to take on to live happily. I’ll leave it at that so you can enjoy.
I started and stopped this post four times as I tried to find the right way to open. Eventually I got tired of searching and figured it was easiest to just jump off the note I wrote to myself in Google Keep after the idea popped into my head:
That might not make so much sense (yet), but like any good note it captured enough of the concept that I remembered what I was thinking when I wrote it. I jotted it down as I was prepping for a webinar I did last week offering up some predictions for marketing in 2019. I was getting worked up (as I’m wont to do) about how much it bugs me when everyone in marketing talks about AI as if they have any idea what it really means or the implications.1 Someone asked why it bothered me so much and my answer, which kind of just poured out, was that once everyone starts agreeing about something (and saying it endlessly) it becomes less and less meaningful. This is not just some soft definition of the word meaning, though; it literally carries less information.
A few months ago I wrote about Claude Shannon and information theory. Shannon wrote a seminal paper in 1948 called “A Mathematical Theory of Communication“. In it he defined the measure of information as, effectively, its unexpectedness (he called it entropy). The more random, the more information. This is precisely what bits measure (you can think of it as the number of yes/no questions it would take to get to the answer). What happens when you compress a photo? You take away the randomness. That’s why otherwise complex surfaces like sky or skin might come to look a bit pixelated: The compression algorithm is constraining the number of hues available in order to bring down the entropy (and therefore the file size) of the whole photo.
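Shannon’s measure is easy to play with directly. Here’s a minimal sketch of the pixelation example (the `entropy` helper and the toy pixel values are mine, not anything from Shannon’s paper): a patch of “sky” with lots of slightly different hues carries more bits per pixel than the same patch after its hues have been quantized down.

```python
from collections import Counter
from math import log2

def entropy(symbols):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A "noisy sky": eight pixels with many slightly different hues.
noisy = [100, 101, 99, 102, 98, 103, 100, 97]

# The same pixels quantized to fewer hues, the way lossy compression
# reduces randomness (and therefore file size).
quantized = [round(v / 4) * 4 for v in noisy]

print(entropy(noisy), entropy(quantized))  # 2.75 bits vs. 1.5 bits
```

Quantizing collapses the eight distinct-ish values down to three, so each pixel is more predictable and takes fewer yes/no questions to pin down, which is exactly the entropy drop the compression is after.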
What does that mean for marketing buzzwords?
Well, as everyone starts to say the same thing and continue to offer little behind it, it becomes more and more expected and, therefore, starts to carry less and less information. When people layer on top of those buzzwords with real examples or alternative ideas, they return some randomness (and therefore information) to the concept. At their best, marketing contrarians are attempting to breathe some life into words and ideas that have otherwise lost their information content.
I don’t really like to think of myself as a contrarian because I think that often carries with it some notion of being different for the sake of being different (and trolling). Rather, I think if everyone is following one strategy or idea, the value of being the next person to jump on board is incrementally less (especially when that idea is poorly defined/understood). In a way it’s like an anti-network effect.
Back to Hinkie’s letter. It was leaked and provided an amazing view into the psyche of someone who was willing to be a pariah. In it he paints an interesting picture of the connection between contrarianism and traditionalism.
Here he is on contrarianism:
To develop truly contrarian views will require a never-ending thirst for better, more diverse inputs. What player do you think is most undervalued? Get him for your team. What basketball axiom is most likely to be untrue? Take it on and do the opposite. What is the biggest, least valuable time sink for the organization? Stop doing it. Otherwise, it’s a big game of pitty pat, and you’re stuck just hoping for good things to happen, rather than developing a strategy for how to make them happen.
And on traditionalism:
While contrarian views are absolutely necessary to truly deliver, conventional wisdom is still wise. It is generally accepted as the conventional view because it is considered the best we have. Get back on defense. Share the ball. Box out. Run the lanes. Contest a shot. These things are real and have been measured, precisely or not, by thousands of men over decades of trial and error. Hank Iba. Dean Smith. Red Auerbach. Gregg Popovich. The single best place to start is often wherever they left off.
Let’s bring it back to buzzwords.
So basically Hinkie’s argument is that the most appropriate way to be a contrarian is to also be a traditionalist: To be a respectful student of the underlying principles while also constantly probing and questioning whether they still make sense. One of the things that surprises me about the marketing industry is how often people miss this tradeoff. In an attempt to play the contrarian they shun traditional wisdom, but at the same time they repeat empty phrases and approaches at every conference that will let them on stage.
I actually think one of the reasons Byron Sharp’s book How Brands Grow has picked up as much steam as it has is because it strikes a good balance between these things. It’s a contrarian take (loyalty shouldn’t be a goal because it’s an outcome) but at the same time it’s deeply rooted in some traditional marketing ideas (marketshare, reach, and creativity to name three). This is a tough balance to strike, but when someone hits the spot it has the opportunity to really resonate.
Unfortunately, most of the time the industry misses the mark by a lot. What we end up with is a bunch of anti-historical/anti-intellectual slogans that get repeated ad infinitum. It’s lots of words and little information.
Here’s the notes I had for the question: “Let me start by saying that I predict in 2019 marketers will continue to talk about AI and ML interchangeably with no idea what the words mean. (I’m particularly salty about this.) I would broadly see we will continue to see ML become more available as different kinds of wrappers are made available that enables folks to use it in more of their everyday work. This seems to be some of what Microsoft and Google are doing with smart integrations into their work suites. In general, my take on AI/ML is it’s a classic case of Amara’s law, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” In the short term, these things aren’t going to be writing copy and, anyway, that’s not that big a deal. In the long term, the promise of ML is data modeling and coding written by computers, not people. That’s definitely not a 2019 prediction, but it’s the road we’re going down.”
I recognize that the word/idea transformation belongs in the buzzword bucket, but if you read about Hinkie and what he did I think it’s a fair use of the word with real meaning. He was a heretic who questioned the most fundamental law of professional sports (“you play every game to win”) and rewrote the path to building a championship contender.
This being Giving Tuesday, I thought it was appropriate to write something up about a non-profit I think is worth supporting: Your local library. As always, if you enjoy these posts please sign up for my email to not miss any posts and always feel free to share with friends. Thanks.
At the beginning of the year I decided I was going to try to read more books this year. I don’t remember if it was a resolution or what, but I set a goal of thirty and set out on my way, tracking everything on Goodreads (which has legitimately become one of my favorite networks over the last eleven months). This post isn’t about my book list, though (I’ll wrap that up at the end of the year), but rather about the library. Most of the books I’ve read this year have been borrowed using Libby, the app offered by Overdrive, which manages e-book borrowing for most libraries (including the New York Public and Brooklyn Public, where I belong).
When I’ve told people about my library habits I’ve gotten two reactions: There’s a group who is amazed that you can borrow Kindle books and promises to immediately go out and get themselves a card, and there’s another who tells me they tried it, but the borrowing just didn’t work for them. They can never find books they want, they explain, and when they do finally find a good ebook to borrow, they are always on hold. I can’t say much about not finding books you want except that I’ve managed to find lots of books this year that were both well worth reading and immediately available. But that’s not the point of this post. I want to talk about book holds and how they’re better thought of as a feature of the library, not a bug. Let me explain.
Back in 2006 I vividly remember reading about behavioral economics for the first time. I had somehow run across an article about it from Harvard Magazine and there was one bit in particular, about how pre-committing to something can help us work against our instinct to take the easy way out, that fascinated me at the time and still rattles around in my brain to this day. The basic idea, now commonly understood with the rising prominence of behavioral economics, is that humans do a very bad job of valuing things in the future. As a result, we are constantly doing things that give us pleasure in the short term and not the long term. In other words, we promise tomorrow’s self it will read that important book, watch that critically acclaimed film, or finally hit the gym while today’s self enjoys that trashy novel, watches another dumb sitcom episode, and drinks a few beers with friends instead of exercising.
But there’s a trick to dealing with our irrationality and it’s called pre-committing. The article offered up an analogy by way of Homer:
The goddess Circe informs Odysseus that his ship will pass the island of the Sirens, whose irresistible singing can lure sailors to steer toward them and onto rocks. The Sirens are a marvelous metaphor for human appetite, both in its seductions and its pitfalls. Circe advises Odysseus to prepare for temptations to come: he must order his crew to stopper their ears with wax, so they cannot hear the Sirens’ songs, but he may hear the Sirens’ beautiful voices without risk if he has his sailors lash him to a mast, and commands them to ignore his pleas for release until they have passed beyond danger. “Odysseus pre-commits himself by doing this,” Laibson explains.
Sometimes, as the analogy goes, we’ve got to bind ourselves to the mast of what’s good for us to actually make it happen. Back in 2006 my favorite example of pre-commitment was Netflix. If you can remember back to its days before life as a streaming service, you put a DVD at the bottom of your queue as you chose it and it slowly moved up the list as you watched and returned movies. The beauty of the system was that it disconnected what you wanted to watch from what you actually watched by splitting them up as two different functions (largely the result of needing to mail out DVDs). What it meant for me was that I watched a bunch of great films I’d always wanted to see because they showed up in my mailbox and I didn’t have another good choice. Instead of just watching another mindless procedural crime drama (not that there’s anything wrong with that), I finally got around to watching films from Alfred Hitchcock, Woody Allen, Orson Welles, and a bunch of other filmmakers that had been permanently relegated to the deep depths of my mental movie queue.
So back to the library. If you haven’t borrowed an e-book before it works just like borrowing a physical one: The library has a set number of digital copies and if they’re all out at the moment then you get put on a waitlist. The longest you can borrow a book is 21 days and there are no renewals. That means holds often come through at inopportune times. Sometimes that means skipping the book altogether, but more often I’ve found it was just the push I needed to read something I wanted to read in the past but wouldn’t have necessarily made time for in the present.
Finally, because I can’t resist, if you appreciate the library it’s worth giving a donation if you can afford it. If you think of all the money you spend on Netflix and the like, it’s hopefully not too much of a hardship to offer your local library a few dollars a month. They’d surely appreciate it.
It’s been a while since I did a Remainders post so I figured I’d throw one together. In theory it’s all the other stuff I didn’t get a chance to blog about. In reality, it’s pretty much everything I’ve been reading that isn’t about mental models/frameworks (and even some of that). You can find previous versions filed under Remainders and, as always, if you enjoy the writing, please subscribe by email and pass it around.
Let’s start with some books. Here’s what I’ve read in the last three months (in order of when they were read):
Countdown to Zero Day (Kim Zetter): As far as I know this is the definitive book on Stuxnet, the digital weapon that targeted the Iranian nuclear facility at Natanz.
Complexity: A Guided Tour (Melanie Mitchell): Easily one of my favorite books of the year. I’ve read lots about complexity theory, but nothing that pulled all the various strings together so well. (This also helped send me down a deep physics rabbit hole that I’ve yet to emerge from.)
A Brief History of Time (Stephen Hawking): If you find yourself in a physics rabbit hole, this seems like something worth reading …
Dreamtigers (Jorge Luis Borges): I read about this in the Borges interview book. He basically explained that his publisher asked for a book, so he collected a bunch of poems and stories that were sitting around his house and hadn’t been published and stuck them together.
Okay, onto some other reading, etc. …
This Wired piece about the possibility of a coming “AI cold war” has two particularly interesting strings in it: One is a fundamental question about the nature of technology and its relationship with democracy (put simply: is the internet better structured to support or defeat democratic ideals?) and the other is about how China (and the US) will use 5G as a power play (“If you are a poor country that lacks the capacity to build your own data network, you’re going to feel loyalty to whoever helps lay the pipes at low cost. It will all seem uncomfortably close to the arms and security pacts that defined the Cold War.”).
Benoît Mandelbrot (of fractal fame) is apparently responsible (at least in part) for the introduction of passwords at IBM. From When Einstein Walked with Gödel (which I’m reading now), “When his son’s high school teacher sought help for a computer class, Mandelbrot obliged, only to find that soon students all over Westchester County were tapping into IBM’s computers by using his name. ‘At that point, the computing center staff had to assign passwords,’ he says. ‘So I can boast-if that’s the right term-of having been at the origin of the police intrusion that this change represented.'”
Also from the same book, the glyphs for the low numerals originally depicted the number of things they represent. Since that summary makes no sense on its own, here’s the quote from the book: “Even Arabic numerals follow this logic: 1 is a single vertical bar; 2 and 3 began as two and three horizontal bars tied together for ease of writing.”
A Rochester garbage plate “is your choice of cheeseburger, hamburger, Italian sausages, steak, chicken, white or red hots*, served on top of any combination of home fries, french fries, baked beans, and/or macaroni salad.”
Rahimi believes contemporary machine learning models’ successes — which are mostly based on empirical methods — are plagued with the same issues as alchemy. The inner mechanisms of machine learning models are so complex and opaque that researchers often don’t understand why a machine learning model can output a particular response from a set of data inputs, aka the black box problem. Rahimi believes the lack of theoretical understanding or technical interpretability of machine learning models is cause for concern, especially if AI takes responsibility for critical decision-making.
Uber’s business plan, like that of so many other digital unicorns, is based on extracting all the value from the markets it enters. This ultimately means squeezing employees, customers, and suppliers alike in the name of continued growth. When people eventually become too poor to continue working as drivers or paying for rides, UBI supplies the required cash infusion for the business to keep operating.
If you haven’t read any of these yet, the gist is that I’m writing a book about mental models and writing these notes up as I go. You can find links at the bottom to the other frameworks I’ve written. If you haven’t already, please subscribe to the email and share these posts with anyone you think might enjoy them. I really appreciate it.
The vast majority of the models I’ve written about were ones that I discovered at one time or another and have adopted for my own knowledge portfolio. The Variance Spectrum, on the other hand, is one I came up with myself. Its origin was in trying to answer a question about why there wasn’t a centralized “system of record” for marketing in the same way you would find one in finance (ERP) or sales (CRM). My best answer was that the output of marketing made it particularly difficult to design a system that could satisfy the needs of all its users. Specifically, I felt as though the variance of marketing’s output, the fact that each campaign and piece of content is meant to be different from the one that came before it, made for an environment that at first seemed opposed to the basics of systemization that the rest of a company had come to accept.
To illustrate the idea I plotted a spectrum. The left side represented zero variance, the realm of manufacturing and Six Sigma, and the right was 100 percent variance, where R&D and innovation reign supreme.
While the poles of the spectrum help explain it, it’s what you place in the middle that makes it powerful. For example, we could plot the rest of the departments in a company by the average variance of their output (finance is particularly low since so much of the department’s output is “governed” — quite literally the government sets GAAP accounting standards and mandates specific tax forms). Sales is somewhere in the middle: A pretty good mix of process and methodology plus the “art of the deal”. Marketing, meanwhile, sits off to the right, just behind R&D.
But that’s just the first layer. Like so many parts of an organization (and as described in my essays on both The Parable of Two Watchmakers and Conway’s Law), companies are hierarchical, and at any point in the spectrum you can drill in and find a whole new spectrum of activities that range from low variance to high variance. That is, while finance may be “low variance” on average thanks to government standards, forecasting and modeling is most certainly a high variance function: Something that must be imagined in original ways depending on a number of variables, including the company, its products, and its markets (to name a few). Zooming in on marketing we find a whole new set of processes that can themselves be plotted based on the variance of their output, with governance far to the low variance side and creative development clearly on the other pole. Another way to articulate these differences is that the low variance side represents the routine processes and the right the creative ones.
While I haven’t seen anyone else plot things quite this way, this idea, that there are fundamentally different kinds of tasks within a company, is not new. Organizational theorists Richard Cyert, Herbert Simon, and Donald Trow also noted this duality in a paper from 1956 called “Observation of a Business Decision”:1
At one extreme we have repetitive, well-defined problems (e.g., quality control or production lot-size problems) involving tangible considerations, to which the economic models that call for finding the best among a set of pre-established alternatives can be applied rather literally. In contrast to these highly programmed and usually rather detailed decisions are problems of a non-repetitive sort, often involving basic long-range questions about the whole strategy of the firm or some part of it, arising initially in a highly unstructured form and requiring a great deal of the kinds of search processes listed above. In this whole continuum, from great specificity and repetition to extreme vagueness and uniqueness, we will call decisions that lie toward the former extreme programmed, and those lying toward the latter end non-programmed. This simple dichotomy is just a shorthand for the range of possibilities we have indicated.
This also introduces an interesting additional way to think about the spectrum: The left side is representative of those ideas where you have the most clarity about the final goal (in manufacturing you know exactly what you want the output to look like when it’s done) and the right the most ambiguity (the goal of R&D is to make something new). For that reason, high variance tasks should also fail far more often than their low variance counterparts: Nine out of ten new product ideas failing might be a good batting average, but if you are throwing away 90 percent of your manufactured output you’ve massively failed.
It is difficult to deal with the uncertainty of the future, as one must to relate an organization to others in the industry and to events in the economy that may affect it. One must look ahead to determine what forces are at work and to examine the ways in which they will affect the organization. These activities are less structured and more ambiguous than dealing with concrete problems and, therefore, the CEO may have trouble focusing on them. Many experiments show that structured activity drives out unstructured. For example, it is much easier to answer one’s mail than to develop a plan to change the culture of the organization. The implications of change are uncertain and the planning is unstructured. One tends to avoid uncertainty and to concentrate on structured problems for which one can correctly predict the solutions and implications.2
Going a level deeper, another way to cut the left and right sides of the spectrum is based on the most appropriate way to solve the problem. For the routine tasks you want to have a single way of doing things in an attempt to push down the variance of the output while on the high variance side you have much more freedom to try different approaches. In software terms this can be expressed as automation and collaboration respectively.
While this is primarily a framework for thinking about process, there’s a more personal way to think about the variance spectrum as it relates to giving feedback to others. It’s a common occurrence that employees over- or misinterpret the feedback of more senior members of the team. I experienced this many times myself in my role as CEO. Because words are often taken literally from the leader of a company, an aside about something like color choice in a design comp can be easily misconstrued as an order to change when it wasn’t meant that way. The variance spectrum in that context can be used to make explicit where the feedback falls: Is it a low variance order you expect to be acted on or a high variance comment that is simply your two cents? I found this could help avoid ambiguity and also make it more clear I respected their expertise.
This paper is kind of amazing to read. It feels revolutionary to actually look at how specific decisions come to be made within a company.↑
There’s a whole other really interesting area to explore here that I’m mostly skipping over about using the variance spectrum to help decide types of problems and the mix of work. Although I don’t have a specific model (hence why this is a footnote), the idea that you should decide on your portfolio of activities based on having a good diversity of work across the spectrum is fascinating and seems like a good idea. It’s also in line with a point Herbert Simon makes at the very beginning of his book Administrative Behavior: “Although any practical activity involves both ‘deciding’ and ‘doing,’ it has not commonly been recognized that a theory of administration should be concerned with the processes of decision as well as with the processes of action. This neglect perhaps stems from the notion that decision-making is confined to the formulation of over-all policy. On the contrary, the process of decision does not come to an end when the general purpose of an organization has been determined. The task of ‘deciding’ pervades the entire administrative organization quite as much as does the task of ‘doing’- indeed, it is integrally tied up with the latter. A general theory of administration must include principles of organization that will insure correct decision-making, just as it must include principles that will insure effective action.”↑
Cyert, R. M., Simon, H. A., & Trow, D. B. (1956). Observation of a business decision. The Journal of Business, 29(4), 237-248.
Cyert, R. M. (1994). Positioning the organization. Interfaces, 24(2), 101-104.
Dong, J., March, J. G., & Workiewicz, M. (2017). On organizing: an interview with James G. March. Journal of Organization Design, 6(1), 14.
Thanks again for reading and for all the positive feedback. Please keep it coming.
Credit: Organizational Charts by Manu Cornet
I first ran into Conway’s Law while helping a brand redesign their website. The client, a large consumer electronics company, was insistent that the navigation must offer three options: Shop, Learn, and Support. I valiantly tried to convince them that nobody shopping on the web, or anywhere else, thought about the distinction between shopping and learning, but they remained steadfast in their insistence. What I eventually came to understand is that their stance wasn’t born out of customer need or insight, but rather their own organizational chart, which, shockingly, included a sales department, a marketing department, and a support department.
“Organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.” That’s the way computer scientist and software engineer Melvin Conway put it in a 1968 paper titled “How Do Committees Invent?” His point was that the choices we make before we start designing any system often fundamentally shape the final output.1 Or, as he put it, “the very act of organizing a design team means that certain design decisions have already been made.”
Parkinson aside (and he did so mostly in jest), very few have the chutzpah to actually name a law after themselves, and Conway wasn’t responsible for the law’s coining. That came a few months after the “Committees” article was published, from a fan and fellow computer scientist, George Mealy. In his paper for the July 1968 National Symposium on Modular Programming (which I seem to be one of the very few people to have actually tracked down), Mealy examined four bits of “conventional wisdom” that surrounded the development of software systems at the time. Number four came directly from Conway: “Systems resemble the organizations that produced them.” The naming comes three pages in:
Our third aphorism-“if one programmer can do it in one year, two programmers can do it in two years”-is merely a reflection of the great difficulty of communication in a large organization. The crux of the problem of giganticism [sic] and system fiasco really lies in the fourth dogma. This — “systems resemble the organizations that produced them” — has been noticed by some of us previously, but it appears not to have received public expression prior to the appearance of Dr. Melvin E. Conway’s penetrating article in the April 1968 issue of Datamation. The article was entitled “How Do Committees Invent?”. I propose to call my preceding paraphrase of the gist of Conway’s paper “Conway’s Law”.
While most, including Conway on his own website, credit Fred Brooks’ 1975 Mythical Man Month with naming the law, it seems that Mealy deserves the credit (though Brooks’ book is surely the reason so many know about Conway’s important concept).3
Back to the questions at hand: Why does this happen, where does it happen, and what can we do about it?
Let’s start with the why. This seems like it should be easy to answer, but it’s actually not. The answer starts with some basics of hierarchy and modularity that Herbert Simon offered up in his Parable of Two Watchmakers: Mainly, breaking a system down into sets of modular subsystems seems to be the most efficient design approach in both nature and organizations. For that reason we tend to see companies made up of teams which are then made up of more teams and so on. But that still doesn’t answer the question of why they tend to design systems in their image. To answer that we turn to some of the more recent research around the “mirroring hypothesis,” which (in simplified terms) is an attempt to prove out Conway’s Law. Carliss Baldwin, a professor at Harvard Business School, seems to be spearheading much of this work and has been an author on two of the key papers on the subject. Most recently, “The mirroring hypothesis: theory, evidence, and exceptions” is a treasure trove of information and citations. Her theory as to why mirroring occurs is essentially that it makes life easier for everyone who works at the company:
The mirroring of technical dependencies and organizational ties can be explained as an approach to organizational problem-solving that conserves scarce cognitive resources. People charged with implementing complex projects or processes are inevitably faced with interdependencies that create technical problems and conflicts in real time. They must arrive at solutions that take account of the technical constraints; hence, they must communicate with one another and cooperate to solve their problems. Communication channels, collocation, and employment relations are organizational ties that support communication and cooperation between individuals, and thus, we should expect to see a very close relationship—technically a homomorphism—between a network graph of technical dependencies within a complex system and network graphs of organizational ties showing communication channels, collocation, and employment relations.
It’s all still a bit circular, but the argument that in most cases a mirrored product is both reasonably optimal from a design perspective (since organizations are structured with hierarchy and modularity) and also cuts down the cognitive load by making it easy for everyone to understand (because it works like an org they already understand) seems like a reasonable one.4 The paper then goes on to survey the research to understand in which kinds of industries mirroring is most likely to occur, and the answer seems to be everywhere. They found evidence in expected places like software and semiconductors, but also automotive, defense, sports, and even banking and construction. For what it’s worth, I’ve also seen it across industries in marketing projects throughout my own career.
That’s the why and the where, which only leaves us with the question of what an organization can do about it. Here there seem to be a few different approaches. The first one is to do nothing. After all, it may well be the best way to design a system for that organization/problem. The second is to find an appropriate balance. If you buy the idea that some part of mirroring/Conway’s Law is simply about making it easier to understand and maintain systems, then it’s probably good to keep some mirroring. But it doesn’t need to be all or nothing. In the aforementioned paper, Baldwin and her co-authors have a nice little framework for thinking about different approaches to mirroring depending on the kind of business:
As you see at the bottom of the framework you have option three: “Strategic mirror-breaking.” This is also sometimes called an “inverse Conway maneuver” in software engineering circles: An approach where you actually adjust your organizational model in order to change the way your systems are architected.5 Basically you attempt to outline the type of system design you want (most of the time it’s about more modularity) and you back into an org structure that looks like that.
Dominant organisations are prone to stumble when the new technology requires a new organisational structure. An innovation might be radical but, if it fits the structure that already existed, an incumbent firm has a good chance of carrying its lead from the old world to the new.
A case study co-authored by Henderson describes the PC division as “smothered by support from the parent company”. Eventually, the IBM PC business was sold off to a Chinese company, Lenovo. What had flummoxed IBM was not the pace of technological change — it had long coped with that — but the fact that its old organisational structures had ceased to be an advantage. Rather than talk of radical or disruptive innovations, Henderson and Clark used the term “architectural innovation”.
Like I said before, it’s all quite circular. It’s a bit like the famous quote “We shape our tools and thereafter our tools shape us.” Companies organize themselves and in turn design systems that mirror those organizations which in turn further solidify the organizational structure that was first put in place. Conway’s Law is more guiding principle than physical property, but it’s a good model to keep in your head as you’re designing organizations or systems (or trying to disentangle them).
He was writing mostly about software systems, but as you’ll see it’s much more broadly applicable.↑
As an aside, it’s hard not to think that Mealy’s third point about what one programmer can do versus two sounds a lot like Fred Brooks’ “mythical man month” concept. Mealy worked with Brooks on OS/360 and in the book Computer Pioneers by J.A.N. Lee it’s mentioned that Mealy’s Law was also named at the 1968 symposium: “There is an incremental programmer who, when added to a project, consumes more resources than are made available.” Sounds pretty similar to me.↑
There’s a very interesting point about the role of “information hiding” in pushing companies into Conway’s Law. Essentially the idea is that companies naturally hide information within teams or departments for the sake of simplicity across the rest of the company. It would only make things more complicated, for instance, if the finance team exposed the detailed rules of GAAP accounting instead of just distributing a monthly GAAP accounting report. “Information hiding as a means of controlling complexity is a fundamental principle underlying the mirroring hypothesis. With information hiding, each module in a technical system is informationally isolated from other modules within a framework of system design rules. This means that independent individuals, teams, or firms can work separately on different modules, yet the modules will work together as a whole (Baldwin and Clark, 2000).”↑
Henderson, R. M., & Clark, K. B. (1990). Architectural innovation: The reconfiguration of existing product technologies and the failure of established firms. Administrative science quarterly, 9-30.
Hvatum, L. B., & Kelly, A. (2005). What do I think about Conway’s Law now?. In EuroPLoP (pp. 735-750).
Lee, J. A. (1995). International biographical dictionary of computer pioneers. Taylor & Francis.
MacCormack, A., Baldwin, C., & Rusnak, J. (2012). Exploring the duality between product and organizational architectures: A test of the “mirroring” hypothesis. Research Policy, 41(8), 1309-1324.
MacDuffie, J. P. (2013). Modularity‐as‐property, modularization‐as‐process, and ‘modularity’‐as‐frame: Lessons from product architecture initiatives in the global automotive industry. Global Strategy Journal, 3(1), 8-40.
Mealy, George, “How to Design Modular (Software) Systems,” Proc. Nat’l. Symp. Modular Programming, Information & Systems Institute, July 1968.
I’m still hard at work on writing up Conway’s Law, so sharing something I wrote a few months ago that I haven’t posted yet. If you are following along, I’m working on a book about the frameworks we all use to understand the world and these are some drafts of the work. I appreciate any feedback and hope you’ll subscribe by email if you haven’t. Thanks for reading.
Most people know the Pareto principle by its more common name, “the 80/20 rule.” Its story starts in the late 1800s with the Italian economist Vilfredo Pareto. Responsible for a number of economic breakthroughs, Pareto became particularly interested in the distribution of income. After collecting wealth and tax data from a variety of countries, he noticed a consistent pattern in the distribution. Originally outlined in his first major work, Cours d’Économie Politique1, Pareto had discovered that across countries 20 percent of the population seemed to control around 80 percent of the income.
Source: “The Curve of the Distribution of Wealth.” History of Economic Ideas 17.1 (Translation: 2009)
Although he had uncovered the phenomenon, Pareto wasn’t sure why it existed:2
It is not easy to understand a priori how and why this should happen. As I said in my Cours, it seems to me probable that the income curve is in some way dependent on the law of the distribution of the mental and physiological qualities of a certain number of individuals. If such is really the case, we can catch a glimpse of the reason why approximately the same law is to be found in the most varied manifestations of human activity. But, instead of seeing those phenomena only in dim outlines, we would like to perceive them clearly and precisely, and up till now I have not succeeded in doing so.
The specifics of 80 and 20 aren’t critical; the point is that a small portion of a specific population tends to account for a large portion of some other resource. As time has gone on we’ve found evidence for Pareto’s discovery in more and more systems: Just a few scientific papers grab most of the citations, a small portion of a company’s customers tends to be responsible for a large percentage of its profits, a tiny number of users tends to make up the vast majority of the customer service requests, and a “vital few” factory defects account for the bulk of the production issues.
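You can see the mechanic in a few lines of code. This is purely a toy sketch (the sample size, the random seed, and the tail exponent of 1.16, which theoretically produces roughly an 80/20 split for a Pareto distribution, are all my own choices for illustration):

```python
import random

random.seed(42)

# Draw "incomes" from a Pareto distribution. A tail exponent near
# 1.16 gives a theoretical split close to 80/20; the sample share
# wanders a bit from run to run because the tail is so heavy.
alpha = 1.16
incomes = sorted((random.paretovariate(alpha) for _ in range(100_000)),
                 reverse=True)

# What share of the total does the top fifth of the population hold?
top_fifth = incomes[: len(incomes) // 5]
share = sum(top_fifth) / sum(incomes)
print(f"top 20% of earners hold {share:.0%} of total income")
```

The striking part is that nothing in the setup says “make the top 20 percent rich”; the imbalance falls straight out of the shape of the distribution.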
It’s that last one about factories that we have to thank for the popularity of the Pareto principle. Quality control pioneer (and catchy name-coiner) Joseph Juran explains:
It was during the late 1940s, when I was preparing the manuscript for Quality Control Handbook, First Edition, that I was faced squarely with the need for giving a short name to the universal. In the resulting write-up under the heading “Maldistribution of Quality Losses,” I listed numerous instances of such maldistribution as a basis for generalization. I also noted that Pareto had found wealth to be maldistributed. In addition, I showed examples of the now familiar cumulative curves, one for maldistribution of wealth and the other for maldistribution of quality losses. The caption under these curves reads “Pareto’s principle of unequal distribution applied to distribution of wealth and to distribution of quality losses.”
Juran went on to become an important management thinker and the Pareto principle spread through industry and the broader world.3 At this point the 80/20 rule has become a basic and helpful mental model that many managers understand.
But we still haven’t answered Pareto’s original question: What is it about human nature that causes this massive imbalance to continually emerge in such a variety of systems? To answer that we turn to Albert-László Barabási and his study of networks. As the web was emerging, Barabási and his colleagues were busy analyzing the new and rich datasets it generated. Every time they dug in, the same odd pattern emerged.
In one of their studies, the team set up a crawler to look at how different web pages linked to each other. Expecting to see a bell curve, they instead spotted something very different: “the network our robot brought back from its journey had many nodes with a few links only, and a few hubs with an extraordinarily large number of links.” Barabási continues, “The biggest surprise came when we tried to fit the histogram of the node connectivity on a so-called log-log plot. The fit told us that the distribution of links on various Webpages precisely follows a mathematical expression called a power law.”
What made this discovery so important was that power laws are a signal that you’re not working with random data. If you chart random (or more precisely disconnected) data points, like the heights of people in your town or the scores of students on a test, you see a bell curve distribution. However, if you chart non-random interdependent data points you get the power curve that Barabási kept seeing:
Power laws rarely emerge in systems completely dominated by a roll of the dice. Physicists have learned that most often they signal a transition from disorder to order. Thus the power laws we spotted on the Web indicated, for the first time in precise mathematical terms, that real networks are far from random. Complex networks finally started to speak to us in a language that scientists trained in self-organization and complexity could finally understand. They spoke of order and emerging behavior. We just needed to listen carefully.
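The contrast Barabási describes can be sketched numerically. Everything below is my own toy setup, not his crawl data: the “link counts” come from a Pareto distribution with a tail exponent of 2, and the bell-curve comparison uses made-up height-like numbers. The straight line on his log-log plot corresponds to a power-law exponent, which (for samples bounded below by 1) has a simple maximum-likelihood estimate:

```python
import math
import random

random.seed(0)
n = 50_000

# Simulated "link counts" from a power law (tail exponent 2, x_min = 1),
# standing in for the degree data a web crawler would bring back.
degrees = [random.paretovariate(2.0) for _ in range(n)]

# The log-log straight line implies a power-law exponent; with
# x_min = 1 its maximum-likelihood estimate is n / sum(ln x_i).
alpha_hat = n / sum(math.log(x) for x in degrees)
print(f"estimated exponent: {alpha_hat:.2f}")  # recovers roughly 2

# Contrast with a bell curve: "hubs" far above the median are routine
# under a power law and essentially impossible under a normal.
heights = [random.gauss(100, 15) for _ in range(n)]
med_d = sorted(degrees)[n // 2]
med_h = sorted(heights)[n // 2]
hub_rate = sum(x > 3 * med_d for x in degrees) / n    # a few percent
tall_rate = sum(x > 3 * med_h for x in heights) / n   # effectively zero
print(hub_rate, tall_rate)
```

That last pair of numbers is the whole story of the histogram Barabási’s team saw: many nodes with a few links, and a meaningful minority of hubs that a bell curve simply cannot produce.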
So we come full circle back to Pareto, who once explained that, “The molecules in the social system are interdependent in space and in time. Their interdependence in space becomes apparent in the mutual relations that subsist between social phenomena.” The 80/20 rule is present in systems where there are self-organizing interdependent parts, and it’s subject to the same cumulative advantage mechanics we saw with popular music. That’s why the pattern emerges so often in companies and markets: It means a huge number of forces are pushing and, critically, reacting to each other at the same time.
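The cumulative advantage mechanic itself can be sketched with a toy simulation (the song counts and listener numbers here are invented for illustration; this is a simplified stand-in, not the actual music experiment): each new listener picks a song with probability proportional to its current play count, so small early leads compound into a skewed outcome.

```python
import random

random.seed(7)

# Toy "rich get richer" simulation: every song starts equal, and each
# new listener picks one in proportion to its current play count.
n_songs = 50
plays = [1] * n_songs  # one seed play per song
for _ in range(100_000):
    pick = random.choices(range(n_songs), weights=plays)[0]
    plays[pick] += 1

# Despite identical starting conditions, a minority of songs ends up
# with a disproportionate share of the plays.
plays.sort(reverse=True)
top_share = sum(plays[: n_songs // 5]) / sum(plays)
print(f"top 20% of songs captured {top_share:.0%} of plays")
```

No song is “better” than any other in this model; the imbalance comes entirely from the parts reacting to each other, which is exactly the interdependence Pareto was pointing at.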
As should be reasonably obvious, the 80/20 rule has a number of important effects and implications for everyday business and life (many of which will come up in other models). First, understanding when you’re working in a system susceptible to the Pareto principle is critical. Once understood, being able to accurately isolate the 20 percent and find ways to make it less interdependent can fundamentally alter the balance of the equation. One of the simplest conclusions to be drawn from the 80/20 rule is that sometimes you need to fire a customer or an employee who is responsible for eating up the majority of your resources, as painful as that choice may be.
I had a shockingly difficult time finding translations of Pareto’s work. This seems to have to do with a few different things. One (and this is purely speculation), I wonder if his decision to focus more attention on sociology hurt his economics credentials. Second, and this seems much more established, the fact that he was recognized by the Italian fascists before he died seems to have sullied his reputation and potentially slowed down the translation of his work.↑
As an aside, this seems to be a big part of why he went into sociology. As he discovered the 80/20 rule he wondered what it was about human nature that makes this happen. His work in sociology seems like, at least from the reading I did, trying to answer that question in one way or another. Now I’m definitely no Pareto expert and this might be a vast overread.↑
Interestingly, Juran also recognized that the Pareto principle wasn’t well named: “Although the accompanying text makes clear that Pareto’s contributions specialized in the study of wealth, the caption implies that he had generalized the principle of unequal distribution into a universal. This implication is erroneous. The Pareto principle as a universal was not original with Pareto.”↑
Chipman, John S. “Pareto: Manuel of Political Economy.” English translation, available at http://www.econ.umn.edu/~jchipman/DALLOZ5.pdf, of “Pareto: Manuel d’Économie Politique” in Dictionnaire des grandes oeuvres d’économie, X. Greffe, J. Lallemant and M. De Vroey (eds), Paris: Dalloz (2002): 424-433.
Cirillo, Renato. “Was Vilfredo Pareto Really a ‘Precursor’ of Fascism?” American Journal of Economics and Sociology 42.2 (1983): 235-246.
Crawford, Walt. “Exceptional institutions: libraries and the Pareto principle.” American Libraries 32.6 (2001): 72-74.
Edgeworth, F. Y., and Vilfredo Pareto. “Controversy Between Pareto and Edgeworth.” Giornale degli Economisti e Annali di Economia 67.3 (2008): 425-440.
Hazlitt, Henry. “Pareto’s Picture of Society: His Monumental Work Covers an Enormous Field of Knowledge.” New York Times (May 26, 1935).
Juran, Joseph M. “Pareto, Lorenz, Cournot, Bernoulli, Juran and Others.” (1950).
Juran, Joseph, and A. Blanton Godfrey. “Quality Handbook.” Republished McGraw-Hill (1999).
Juran, Joseph M. “The non-Pareto principle; mea culpa.” Quality Progress 8.5 (1975): 8-9.
Juran, Joseph M. “Universals in management planning and controlling.” Management Review 43.11 (1954): 748-761.
Koch, Richard. The 80/20 principle: the secret to achieving more with less. Crown Business, 2011.
Lopreato, Joseph. “Notes on the work of Vilfredo Pareto.” Social Science Quarterly (1973): 451-468.
Mandelbrot, Benoit, and Richard L. Hudson. The Misbehavior of Markets: A Fractal View of Financial Turbulence. Basic Books, 2007.
Moore, H. L. “Cours d’Économie Politique. By VILFREDO PARETO, Professeur à l’Université de Lausanne. Vol. I. Pp. 430. 1896. Vol. II. Pp. 426. 1897. Lausanne: F. Rouge.” The ANNALS of the American Academy of Political and Social Science 9.3 (1897): 128-131.
Pareto, Vilfredo. “Supplement to the Study of the Income Curve.” Giornale degli Economisti e Annali di Economia 67.3 (2008): 441-451.
Pareto, Vilfredo. “The Curve of the Distribution of Wealth.” History of Economic Ideas 17.1 (2009): 132-143.
Pareto, Vilfredo. The mind and society: Trattato di sociologia generale. AMS Press, 1935.
Tarascio, Vincent J. “The Pareto law of income distribution.” Social Science Quarterly (1973): 525-533.
Another framework of the day. If you haven’t read the others, the links are all at the bottom. I’m working on a book of mental models and sharing some of the research and writing as I go. This post actually started as writing about Conway’s Law, which is coming soon; I felt I had to get this one out first, since giving that Law its due relies on some of the same research. Please let me know what you think, pass this link on, and subscribe to the email if you haven’t already. Thanks for reading.
This framework is a little different from the ones before, as it doesn’t come with a nice diagram or four-box. Rather, the Parable of the Two Watchmakers is just that: a story about two people putting together complicated mechanical objects. The parable comes from a paper called “The Architecture of Complexity” by Nobel Prize-winning economist Herbert Simon (you might remember Simon from the theory of satisficing). Beyond being a brilliant economist, Simon was also a major thinker in the worlds of political science, psychology, systems, complexity, and artificial intelligence (in doing this research he climbed the ranks of my intellectual heroes).
In his 1962 paper he laid out an argument for how complexity emerges, one largely focused on the central role of hierarchy in complex systems. To start, let’s define hierarchy so we’re all on the same page. Here’s Simon:
Etymologically, the word “hierarchy” has had a narrower meaning than I am giving it here. The term has generally been used to refer to a complex system in which each of the subsystems is subordinated by an authority relation to the system it belongs to. More exactly, in a hierarchic formal organization, each system consists of a “boss” and a set of subordinate subsystems. Each of the subsystems has a “boss” who is the immediate subordinate of the boss of the system. We shall want to consider systems in which the relations among subsystems are more complex than in the formal organizational hierarchy just described. We shall want to include systems in which there is no relation of subordination among subsystems. (In fact, even in human organizations, the formal hierarchy exists only on paper; the real flesh-and-blood organization has many inter-part relations other than the lines of formal authority.) For lack of a better term, I shall use hierarchy in the broader sense introduced in the previous paragraphs, to refer to all complex systems analyzable into successive sets of subsystems, and speak of “formal hierarchy” when I want to refer to the more specialized concept.
So it’s more or less the way we think of it, except he draws a distinction between the formal hierarchy we see in an org chart, where each subordinate has just one boss, and the informal hierarchy that actually exists inside organizations, where subordinates interact in a variety of ways. He also points out the many complex systems in which we find hierarchy, including biological ones: “The hierarchical structure of biological systems is a familiar fact. Taking the cell as the building block, we find cells organized into tissues, tissues into organs, organs into systems. Moving downward from the cell, well-defined subsystems — for example, nucleus, cell membrane, microsomes, mitochondria, and so on — have been identified in animal cells.”
The question is why all these systems came to be arranged this way and what we can learn from them. Here Simon turns to story:
Let me introduce the topic of evolution with a parable. There once were two watchmakers, named Hora and Tempus, who manufactured very fine watches. Both of them were highly regarded, and the phones in their workshops rang frequently — new customers were constantly calling them. However, Hora prospered, while Tempus became poorer and poorer and finally lost his shop. What was the reason?
The watches the men made consisted of about 1,000 parts each. Tempus had so constructed his that if he had one partly assembled and had to put it down — to answer the phone say— it immediately fell to pieces and had to be reassembled from the elements. The better the customers liked his watches, the more they phoned him, the more difficult it became for him to find enough uninterrupted time to finish a watch.
The watches that Hora made were no less complex than those of Tempus. But he had designed them so that he could put together subassemblies of about ten elements each. Ten of these subassemblies, again, could be put together into a larger subassembly; and a system of ten of the latter subassemblies constituted the whole watch. Hence, when Hora had to put down a partly assembled watch in order to answer the phone, he lost only a small part of his work, and he assembled his watches in only a fraction of the man-hours it took Tempus.
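The parable’s arithmetic can be sketched in a few lines. This is a back-of-the-envelope model, not Simon’s published calculation: the interruption probability `p = 0.01` and the decomposition of Hora’s watch into 111 ten-element assemblies (100 subassemblies, 10 modules, 1 final join) are illustrative assumptions that follow the structure of the story.

```python
def expected_work(n_steps: int, p: float) -> float:
    """Expected total steps to finish one assembly of n_steps parts when
    each step is interrupted with probability p, and an interruption
    loses the whole assembly in progress (finished assemblies survive)."""
    q = 1.0 - p
    survive = q ** n_steps                    # chance one attempt runs uninterrupted
    steps_per_attempt = (1.0 - survive) / p   # expected steps spent per attempt
    return steps_per_attempt / survive        # attempts needed are geometric

p = 0.01  # illustrative: a phone call arrives during 1% of steps

# Tempus: one monolithic 1,000-part assembly
tempus = expected_work(1000, p)

# Hora: 100 ten-part subassemblies + 10 modules of ten subassemblies
# + 1 watch of ten modules = 111 ten-element assemblies in total
hora = 111 * expected_work(10, p)

print(f"Tempus ~ {tempus:,.0f} steps; Hora ~ {hora:,.0f} steps; "
      f"ratio ~ {tempus / hora:,.0f}x")
```

Even at a 1% interruption rate the difference is three orders of magnitude, which is why Tempus goes broke: nearly all of his work is thrown away mid-assembly.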
Whether the complexity emerges from the hierarchy or the hierarchy from the complexity, he illustrates clearly why we see this pattern all around us and articulates the value of the approach. It’s not just hierarchy, he goes on to explain, but also modularity (which he refers to as near-decomposability) that appears to be a fundamental property of complex systems. That is, each of the subsystems operates both independently and as part of the whole. As Simon puts it, “Intra-component linkages are generally stronger than intercomponent linkages” or, even more simply, “In a formal organization there will generally be more interaction, on the average, between two employees who are members of the same department than between two employees from different departments.”
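Near-decomposability has a concrete payoff: when cross-module links are weak, each module can be analyzed on its own and the error is on the order of the coupling. Here is a toy sketch of that idea; the two-team interaction matrix and the coupling strength `eps` are invented for illustration, not taken from Simon.

```python
import numpy as np

# Hypothetical interaction-strength matrix for two 3-member teams:
# strong links inside each team, weak links (eps) across teams.
eps = 0.01
block = np.array([[0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0],
                  [1.0, 1.0, 0.0]])
W_full = np.block([[block, np.full((3, 3), eps)],
                   [np.full((3, 3), eps), block]])
W_decoupled = np.block([[block, np.zeros((3, 3))],
                        [np.zeros((3, 3)), block]])

# Analyzing each team in isolation (the decoupled matrix) gives nearly
# the same spectrum as the true, weakly coupled system.
full_eigs = np.sort(np.linalg.eigvalsh(W_full))
decoupled_eigs = np.sort(np.linalg.eigvalsh(W_decoupled))
print(np.max(np.abs(full_eigs - decoupled_eigs)))  # small, on the order of eps
```

That is the "nearly" in nearly decomposable: in the short run the subsystems behave as if independent, and the weak inter-component links only matter in the aggregate, long-run behavior.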
Why is that? Well, for one, it’s an efficiency thing. Just as we see inside organizations, we want to use specialized resources in a specialized way. But beyond that, as Simon outlines in the parable, it’s also about resiliency: By relying on subsystems you have a defense against catastrophic failure when one piece of the whole breaks down. Just as Hora was able to quickly start building again when he put something down, any system made up of subsystems should be much more capable of dealing with changes in environment. It works in organisms, companies, and even empires, as Simon pointed out in The Sciences of the Artificial:
We have not exhausted the categories of complex systems to which the watchmaker argument can reasonably be applied. Philip assembled his Macedonian empire and gave it to his son, to be later combined with the Persian subassembly and others into Alexander’s greater system. On Alexander’s death his empire did not crumble to dust but fragmented into some of the major subsystems that had composed it.
Hopefully the application of this framework is pretty clear (and also instructive) in everyday business life. Interestingly, Simon’s theories were the ultimate inspiration for a management fad we saw burn bright (and flame out) just a few years ago: Holacracy, the fluid organizational structure made up of self-organizing teams. Invented by Brian Robertson and made famous by Tony Hsieh and Zappos, the method (it’s a registered trademark) is based on ideas about “holons” from Hungarian author and journalist Arthur Koestler. In his 1967 book The Ghost in the Machine, Koestler repeats Simon’s story of Tempus and Hora and then goes on to theorize that holons (a name he coined “from the Greek holos—whole, with the suffix on (cf. neutron, proton) suggesting a particle or part”) are “meant to supply the missing link between atomism and holism, and to supplant the dualistic way of thinking in terms of ‘parts’ and ‘wholes,’ which is so deeply engrained in our mental habits, by a multi-levelled, stratified approach. A hierarchically-organized whole cannot be ‘reduced’ to its elementary parts; but it can be ‘dissected’ into its constituent branches of holons, represented by the nodes of the tree-diagram, while the lines connecting the holons stand for channels of communication, control or transportation, as the case may be.”
Holacracy aside, there’s a ton of goodness in the parable and in the architecture of modularity it posits as critical. It’s not an accident that every company is built this way, and as those companies design systems, it’s not surprising that the systems follow suit (a good lead-in for Conway’s Law, which is up next). Although I’m pretty much out of words at this point, Simon also applies the same hierarchy/modularity concept to problem solving, and there’s a pretty good argument to be made that the “latticework of models” Charlie Munger described in his 1994 USC Business School commencement address would fit the framework.
Egidi, Massimo, and Luigi Marengo. “Cognition, institutions, near decomposability: rethinking Herbert Simon’s contribution.” (2002).
Egidi, Massimo. “Organizational learning, problem solving and the division of labour.” Economics, bounded rationality and the cognitive revolution. Aldershot: Edward Elgar (1992): 148-73.