Welcome to the bloggy home of Noah Brier. I'm the co-founder of Percolate and general internet tinkerer. This site is about media, culture, technology, and randomness. It's been around since 2004 (I'm pretty sure). Feel free to get in touch.

You can subscribe to this site via RSS (the humanity!) or by email.

Variance Spectrum [Framework of the Day]

If you haven’t read any of these yet, the gist is that I’m writing a book about mental models and writing these notes up as I go. You can find links at the bottom to the other frameworks I’ve written. If you haven’t already, please subscribe to the email and share these posts with anyone you think might enjoy them. I really appreciate it.

The vast majority of the models I’ve written about were ones that I discovered at one time or another and have adopted for my own knowledge portfolio. The Variance Spectrum, on the other hand, I came up with. Its origin was in trying to answer a question about why there wasn’t a centralized “system of record” for marketing in the same way you would find one in finance (ERP) or sales (CRM). My best answer was that the output of marketing made it particularly difficult to design a system that could satisfy the needs of all its users. Specifically, I felt as though the variance of marketing’s output, the fact that each campaign and piece of content is meant to be different than the one that came before it, made for an environment that at first seemed opposed to the basics of systemization that the rest of a company had come to accept.

To illustrate the idea I plotted a spectrum. The left side represented zero variance, the realm of manufacturing and Six Sigma, and the right was 100 percent variance, where R&D and innovation reign supreme.

While the poles of the spectrum help explain it, it’s what you place in the middle that makes it powerful. For example, we could plot the rest of the departments in a company by the average variance of their output (finance is particularly low since so much of the department’s output is “governed” — quite literally the government sets GAAP accounting standards and mandates specific tax forms). Sales is somewhere in the middle: A pretty good mix of process and methodology plus the “art of the deal”. Marketing, meanwhile, sits off to the right, just behind R&D.

But that’s just the first layer. Like so many parts of an organization (and as described in my essays on both The Parable of Two Watchmakers and Conway’s Law), companies are hierarchical, and at any point in the spectrum you can drill in and find a whole new spectrum of activities that range from low variance to high variance. That is, while finance may be “low variance” on average thanks to government standards, forecasting and modeling is most certainly a high variance function: Something that must be imagined in original ways depending on a number of variables, including the company and its products and markets (to name a few). Zooming in on marketing we find a whole new set of processes that can themselves be plotted based on the variance of their output, with governance far to the low variance side and creative development clearly on the other pole. Another way to articulate these differences is that the left side represents the routine processes and the right the creative ones.

While I haven’t seen anyone else plot things quite this way, this idea, that there are fundamentally different kinds of tasks within a company, is not new. Organizational theorists Richard Cyert, Herbert Simon, and Donald Trow also noted this duality in a 1956 paper called “Observation of a Business Decision”:1

At one extreme we have repetitive, well-defined problems (e.g., quality control or production lot-size problems) involving tangible considerations, to which the economic models that call for finding the best among a set of pre-established alternatives can be applied rather literally. In contrast to these highly programmed and usually rather detailed decisions are problems of a non-repetitive sort, often involving basic long-range questions about the whole strategy of the firm or some part of it, arising initially in a highly unstructured form and requiring a great deal of the kinds of search processes listed above. In this whole continuum, from great specificity and repetition to extreme vagueness and uniqueness, we will call decisions that lie toward the former extreme programmed, and those lying toward the latter end non-programmed. This simple dichotomy is just a shorthand for the range of possibilities we have indicated.

This also introduces an interesting additional way to think about the spectrum: The left side represents the ideas where you have the most clarity about the final goal (in manufacturing you know exactly what you want the output to look like when it’s done) and the right the most ambiguity (the goal of R&D is to make something new). For that reason, high variance tasks should also fail far more often than their low variance counterparts: Nine failures out of ten new product ideas might be a good batting average, but if you are throwing away 90 percent of your manufactured output you’ve massively failed.

Even though it may be tempting, that’s not a reason to focus purely on the well-structured, low-variance problems, as Richard Cyert laid out in a 1994 paper titled “Positioning the Organization”:

It is difficult to deal with the uncertainty of the future, as one must to relate an organization to others in the industry and to events in the economy that may affect it. One must look ahead to determine what forces are at work and to examine the ways in which they will affect the organization. These activities are less structured and more ambiguous than dealing with concrete problems and, therefore, the CEO may have trouble focusing on them. Many experiments show that structured activity drives out unstructured. For example, it is much easier to answer one’s mail than to develop a plan to change the culture of the organization. The implications of change are uncertain and the planning is unstructured. One tends to avoid uncertainty and to concentrate on structured problems for which one can correctly predict the solutions and implications.2

Going a level deeper, another way to cut the left and right sides of the spectrum is based on the most appropriate way to solve the problem. For the routine tasks you want a single way of doing things, in an attempt to push down the variance of the output, while on the high variance side you have much more freedom to try different approaches. In software terms this can be expressed as automation and collaboration, respectively.
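To make that split concrete, here’s a minimal sketch in Python (the process names, the variance scores, and the 0.5 cutoff are all invented for illustration, not part of the framework itself):

    from dataclasses import dataclass

    @dataclass
    class Process:
        name: str
        variance: float  # 0.0 = fully routine output, 1.0 = every output is novel

    def recommended_tooling(process: Process) -> str:
        """Route a process to tooling based on where its output sits on the spectrum."""
        return "automation" if process.variance < 0.5 else "collaboration"

    marketing = [
        Process("governance/approvals", 0.1),
        Process("budget reporting", 0.2),
        Process("campaign planning", 0.6),
        Process("creative development", 0.9),
    ]

    for p in sorted(marketing, key=lambda p: p.variance):
        print(f"{p.name:<25} -> {recommended_tooling(p)}")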

While this is primarily a framework for thinking about process, there’s a more personal way to think about the variance spectrum as it relates to giving feedback to others. It’s a common occurrence that employees over- or misinterpret the feedback of more senior members of the team. I experienced this many times myself in my role as CEO. Because words from the leader of a company are often taken literally, an aside about something like color choice in a design comp can easily be misconstrued as an order to change it when it wasn’t meant that way. The variance spectrum in that context can be used to make explicit where the feedback falls: Is it a low variance order you expect to be acted on or a high variance comment that is simply your two cents? I found this could help avoid ambiguity and also make it clearer that I respected their expertise.

Footnotes:

  1. This paper is kind of amazing to read. It feels revolutionary to actually look at how specific decisions come to be made within a company.
  2. There’s a whole other really interesting area to explore here that I’m mostly skipping over about using the variance spectrum to help decide types of problems and the mix of work. Although I don’t have a specific model (hence why this is a footnote), the idea that you should decide on your portfolio of activities based on having a good diversity of work across the spectrum is fascinating and seems right. It’s also in line with a point Herbert Simon makes at the very beginning of his book Administrative Behavior: “Although any practical activity involves both ‘deciding’ and ‘doing,’ it has not commonly been recognized that a theory of administration should be concerned with the processes of decision as well as with the processes of action. This neglect perhaps stems from the notion that decision-making is confined to the formulation of over-all policy. On the contrary, the process of decision does not come to an end when the general purpose of an organization has been determined. The task of ‘deciding’ pervades the entire administrative organization quite as much as does the task of ‘doing’ — indeed, it is integrally tied up with the latter. A general theory of administration must include principles of organization that will insure correct decision-making, just as it must include principles that will insure effective action.”

Bibliography

  • Cyert, R. M., Simon, H. A., & Trow, D. B. (1956). Observation of a business decision. The Journal of Business, 29(4), 237-248.
  • Cyert, R. M. (1994). Positioning the organization. Interfaces, 24(2), 101-104.
  • Dong, J., March, J. G., & Workiewicz, M. (2017). On organizing: an interview with James G. March. Journal of Organization Design, 6(1), 14.
  • March, J. G. (2010). The ambiguities of experience. Cornell University Press.
  • Simon, H. A. (2013). Administrative behavior. Simon and Schuster.
  • Stene, E. O. (1940). An approach to a science of administration. American Political Science Review, 34(6), 1124-1137.

November 5, 2018

Conway’s Law [Framework of the Day]

Thanks again for reading and for all the positive feedback. Please keep it coming. If you haven’t read any of these yet, the gist is that I’m writing a book about mental models and writing these notes up as I go. You can find links at the bottom to the other frameworks I’ve written. If you haven’t already, please subscribe to the email and share these posts with anyone you think might enjoy them. I really appreciate it.

Credit: Organizational Charts by Manu Cornet

I first ran into Conway’s Law while helping a brand redesign their website. The client, a large consumer electronics company, was insistent that the navigation must offer three options: Shop, Learn, and Support. I valiantly tried to convince them that nobody shopping on the web, or anywhere else, thought about the distinction between shopping and learning, but they remained steadfast. What I eventually came to understand is that their stance wasn’t born out of customer need or insight, but rather their own organizational chart, which shockingly included a sales department, a marketing department, and a support department.

“Organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.” That’s the way computer scientist and software engineer Melvin Conway put it in a 1968 paper titled “How Do Committees Invent?” His point was that the choices we make before we start designing any system most often fundamentally shape the final output.1 Or, as he put it, “the very act of organizing a design team means that certain design decisions have already been made.”

Why does this happen, where does it happen, and what can we do about it? That’s the goal of this essay, but before I get there we’ve got to take a short sojourn into the history of the concept. As I mentioned, the idea in its current form came from Melvin Conway in April 1968. In the article he cited a few key sources as inspiration, including economist John Kenneth Galbraith and historian C. Northcote Parkinson, whose 1957 book Parkinson’s Law and Other Studies in Administration was particularly influential in spelling out the ever-increasing complexity that any bureaucratic organization will create.2 Finally, judging by the focus on modularity in Conway’s writing, it seems clear he was also inspired by Herbert Simon’s work, in particular his “Architecture of Complexity” paper and the Parable of Two Watchmakers (which I wrote about earlier).

Parkinson aside (he named his law mostly in jest), very few have the chutzpah to actually name a law after themselves, and in any case Conway wasn’t responsible for the law’s coining. That came a few months after the “Committees” article was published, from a fan and fellow computer scientist named George Mealy. In his paper for the July 1968 National Symposium on Modular Programming (which I seem to be one of the very few people to have actually tracked down), Mealy examined four bits of “conventional wisdom” that surrounded the development of software systems at the time. Number four came directly from Conway: “Systems resemble the organizations that produced them.” The naming comes three pages in:

Our third aphorism-“if one programmer can do it in one year, two programmers can do it in two years”-is merely a reflection of the great difficulty of communication in a large organization. The crux of the problem of giganticism [sic] and system fiasco really lies in the fourth dogma. This — “systems resemble the organizations that produced them” — has been noticed by some of us previously, but it appears not to have received public expression prior to the appearance of Dr. Melvin E. Conway’s penetrating article in the April 1968 issue of Datamation. The article was entitled “How Do Committees Invent?”. I propose to call my preceding paraphrase of the gist of Conway’s paper “Conway’s Law”.

While most, including Conway on his own website, credit Fred Brooks’ 1975 The Mythical Man-Month with naming the law, it seems that Mealy deserves the credit (though Brooks’ book is surely the reason so many know about Conway’s important concept).3

Back to the questions at hand: Why does this happen, where does it happen, and what can we do about it?

Let’s start with the why. This seems like it should be easy to answer, but it’s actually not. The answer starts with some basics of hierarchy and modularity that Herbert Simon offered up in his Parable of Two Watchmakers: Mainly, breaking a system down into sets of modular subsystems seems to be the most efficient design approach in both nature and organizations. For that reason we tend to see companies made up of teams which are then made up of more teams and so on. But that still doesn’t answer the question of why they tend to design systems in their image. To answer that we turn to some of the more recent research around the “mirroring hypothesis,” which (in simplified terms) is an attempt to prove out Conway’s Law. Carliss Baldwin, a professor at Harvard Business School, seems to be spearheading much of this work and has been an author on two of the key papers on the subject. The most recent, “The mirroring hypothesis: theory, evidence, and exceptions,” is a treasure trove of information and citations. Her theory as to why mirroring occurs is essentially that it makes life easier for everyone who works at the company:

The mirroring of technical dependencies and organizational ties can be explained as an approach to organizational problem-solving that conserves scarce cognitive resources. People charged with implementing complex projects or processes are inevitably faced with interdependencies that create technical problems and conflicts in real time. They must arrive at solutions that take account of the technical constraints; hence, they must communicate with one another and cooperate to solve their problems. Communication channels, collocation, and employment relations are organizational ties that support communication and cooperation between individuals, and thus, we should expect to see a very close relationship—technically a homomorphism—between a network graph of technical dependencies within a complex system and network graphs of organizational ties showing communication channels, collocation, and employment relations.

It’s all still a bit circular, but the argument that in most cases a mirrored product is both reasonably optimal from a design perspective (since organizations are structured with hierarchy and modularity) and also cuts down the cognitive load by making it easy for everyone to understand (because it works like an org they already understand) seems like a reasonable one.4 The paper then goes on to survey the research to understand in what kinds of industries mirroring is most likely to occur, and the answer seems to be everywhere. They found evidence across expected places like software and semiconductors, but also automotive, defense, sports, and even banking and construction. For what it’s worth, I’ve also seen it across industries in marketing projects throughout my own career.
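As a rough sketch of how you might check for mirroring yourself, here’s one way to score it in Python: the share of technical dependencies that have a matching organizational tie. Both graphs below are made up, and the actual papers use far more sophisticated network measures than this.

    # Hypothetical graphs: an edge (a, b) means the two modules depend on
    # each other (tech_deps) or the two teams talk to each other (org_ties).
    tech_deps = {("web", "api"), ("api", "db"), ("api", "billing"), ("web", "billing")}
    org_ties = {("web", "api"), ("api", "db"), ("api", "billing")}

    def normalize(edges):
        """Treat edges as undirected by sorting each pair."""
        return {tuple(sorted(edge)) for edge in edges}

    def mirroring_score(deps, ties):
        """Fraction of technical dependencies mirrored by an organizational tie.
        1.0 means perfect mirroring: every dependency has a communication channel."""
        deps, ties = normalize(deps), normalize(ties)
        return len(deps & ties) / len(deps)

    print(mirroring_score(tech_deps, org_ties))  # 0.75: web<->billing is unmirrored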

That’s the why and the where, which only leaves us with the question of what an organization can do about it. Here there seem to be a few different approaches. The first one is to do nothing. After all, it may well be the best way to design a system for that organization/problem. The second is to find an appropriate balance. If you buy the idea that some part of mirroring/Conway’s Law is simply about making it easier to understand and maintain systems, then it’s probably good to keep some mirroring. But it doesn’t need to be all or nothing. In the aforementioned paper, Baldwin and her co-authors have a nice little framework for thinking about different approaches to mirroring depending on the kind of business:

As you see at the bottom of the framework you have option three: “Strategic mirror-breaking.” This is also sometimes called an “inverse Conway maneuver” in software engineering circles: An approach where you actually adjust your organizational model in order to change the way your systems are architected.5 Basically you attempt to outline the type of system design you want (most of the time it’s about more modularity) and you back into an org structure that looks like that.

In case it seems like all this might be academic, the architecture of organizations has been shown to have a fundamental impact on a company’s ability to innovate. Tim Harford recently wrote a piece for the Financial Times that heavily quotes a 1990 paper by an economist named Rebecca Henderson titled “Architectural Innovation: The Reconfiguration of Existing Product Technologies and the Failure of Established Firms.” The paper outlines how the organizational structure of companies can prevent them from innovating in specific ways. Most specifically, the paper describes the kind of innovation that keeps the shape of the previous generation’s product but completely rewires it: Think film cameras to digital, or the Walkman to MP3 players. Here’s Harford describing the idea:

Dominant organisations are prone to stumble when the new technology requires a new organisational structure. An innovation might be radical but, if it fits the structure that already existed, an incumbent firm has a good chance of carrying its lead from the old world to the new.

A case study co-authored by Henderson describes the PC division as “smothered by support from the parent company”. Eventually, the IBM PC business was sold off to a Chinese company, Lenovo. What had flummoxed IBM was not the pace of technological change — it had long coped with that — but the fact that its old organisational structures had ceased to be an advantage. Rather than talk of radical or disruptive innovations, Henderson and Clark used the term “architectural innovation”.

Like I said before, it’s all quite circular. It’s a bit like the famous quote “We shape our tools and thereafter our tools shape us.” Companies organize themselves and in turn design systems that mirror those organizations which in turn further solidify the organizational structure that was first put in place. Conway’s Law is more guiding principle than physical property, but it’s a good model to keep in your head as you’re designing organizations or systems (or trying to disentangle them).

Footnotes:

  1. He was writing mostly about software systems, but as you’ll see it’s much more broadly applicable.
  2. Here’s how Conway explains Parkinson’s complexity concept: “As each new brand is created it justifies itself by challenging the established order. Thus, after a while, the organization is fully occupied in internal political warfare.”
  3. As an aside, it’s hard not to think that Mealy’s third point about what one programmer can do versus two sounds a lot like Fred Brooks’ “mythical man month” concept. Mealy worked with Brooks on OS/360 and in the book Computer Pioneers by J.A.N. Lee it’s mentioned that Mealy’s Law was also named at the 1968 symposium: “There is an incremental programmer who, when added to a project, consumes more resources than are made available.” Sounds pretty similar to me.
  4. There’s a very interesting point about the role of “information hiding” in pushing companies into Conway’s Law. Essentially the idea is that companies naturally hide information within teams or departments for the sake of simplicity across the rest of the company. It would only make things more complicated, for instance, if the finance team exposed the detailed rules of GAAP accounting instead of just distributing a monthly GAAP accounting report. “Information hiding as a means of controlling complexity is a fundamental principle underlying the mirroring hypothesis. With information hiding, each module in a technical system is informationally isolated from other modules within a framework of system design rules. This means that independent individuals, teams, or firms can work separately on different modules, yet the modules will work together as a whole (Baldwin and Clark, 2000).”
  5. If you’re interested in the idea you should check out the episode of Software Engineering Radio with engineering leader Kevin Goldsmith.

Bibliography:

  • Arrow, K. J. (1985). Informational structure of the firm. The American Economic Review, 75(2), 303-307.
  • Brunton-Spall, Michael (2 Nov. 2015). The Inverse Conway Manoeuvre and Security. Medium. Retrieved from https://medium.com/@bruntonspall/the-inverse-conway-manoeuvre-and-security-55ee11e8c3a9
  • Colfer, L. J., & Baldwin, C. Y. (2016). The mirroring hypothesis: theory, evidence, and exceptions. Industrial and Corporate Change, 25(5), 709-738.
  • Conway, Melvin E. “How do committees invent.” Datamation 14.4 (1968): 28-31.
  • Conway, Melvin E. “The Tower of Babel and the Fighter Plane.” Retrieved from http://melconway.com/keynote/Presentation.pdf
  • Evans, Benedict (31 Aug. 2018). Tesla, software and disruption. Benedict Evans. Retrieved from https://www.ben-evans.com/benedictevans/2018/8/29/tesla-software-and-disruption
  • Galbraith, J. K. (2001). The essential galbraith. HMH.
  • Harford, Tim (6 Sept. 2018). Why big companies squander good ideas. Financial Times. Retrieved from https://www.ft.com/content/3c1ab748-b09b-11e8-8d14-6f049d06439c
  • Henderson, R. M., & Clark, K. B. (1990). Architectural innovation: The reconfiguration of existing product technologies and the failure of established firms. Administrative Science Quarterly, 35(1), 9-30.
  • Hvatum, L. B., & Kelly, A. (2005). What do I think about Conway’s Law now?. In EuroPLoP (pp. 735-750).
  • Lee, J. A. (1995). International biographical dictionary of computer pioneers. Taylor & Francis.
  • MacCormack, A., Baldwin, C., & Rusnak, J. (2012). Exploring the duality between product and organizational architectures: A test of the “mirroring” hypothesis. Research Policy, 41(8), 1309-1324.
  • MacDuffie, J. P. (2013). Modularity‐as‐property, modularization‐as‐process, and ‘modularity’‐as‐frame: Lessons from product architecture initiatives in the global automotive industry. Global Strategy Journal, 3(1), 8-40.
  • Mealy, George, “How to Design Modular (Software) Systems,” Proc. Nat’l. Symp. Modular Programming, Information & Systems Institute, July 1968.
  • Newman, Sam (30 Jun. 2014). Demystifying Conway’s Law. ThoughtWorks. Retrieved from https://www.thoughtworks.com/insights/blog/demystifying-conways-law
  • Parnas, D. L. (1972). On the criteria to be used in decomposing systems into modules. Communications of the ACM, 15(12), 1053-1058.
  • Software Engineering Radio. Kevin Goldsmith on Architecture and Organizational Design : Software Engineering Radio. Se-radio.net. Retrieved from http://www.se-radio.net/2018/07/se-radio-episode-331-kevin-goldsmith-on-architecture-and-organizational-design/
  • Van Dusen, Matthew (19 May 2016). A principle called “Conway’s Law” reveals a glaring, biased flaw in our technology. Quartz. Retrieved from https://qz.com/687457/a-principle-called-conways-law-reveals-a-glaring-biased-flaw-in-our-technology/

October 9, 2018

Pareto Principle (aka 80/20 Rule) [Framework of the Day]

I’m still hard at work on writing up Conway’s Law, so sharing something I wrote a few months ago that I haven’t posted yet. If you are following along, I’m working on a book about the frameworks we all use to understand the world and these are some drafts of the work. I appreciate any feedback and hope you’ll subscribe by email if you haven’t. Thanks for reading.

Most people know the Pareto principle by its more common name, “the 80/20 rule.” Its story starts in the late-1800s with the Italian economist Vilfredo Pareto. Responsible for a number of economic breakthroughs, Pareto became particularly interested in the distribution of income. After collecting wealth and tax data from a variety of countries, he noticed a consistent pattern in the distribution. As originally outlined in his first major work, Cours d’Économie Politique,1 Pareto had discovered that across countries 20 percent of the population seemed to control around 80 percent of the income.

Source: “The Curve of the Distribution of Wealth.” History of Economic Ideas 17.1 (Translation: 2009)

Although he had uncovered the phenomenon, Pareto wasn’t sure why it existed:2

It is not easy to understand a priori how and why this should happen. As I said in my Cours, it seems to me probable that the income curve is in some way dependent on the law of the distribution of the mental and physiological qualities of a certain number of individuals. If such is really the case, we can catch a glimpse of the reason why approximately the same law is to be found in the most varied manifestations of human activity. But, instead of seeing those phenomena only in dim outlines, we would like to perceive them clearly and precisely, and up till now I have not succeeded in doing so.

The specifics of 80 and 20 aren’t critical; the point is that a small portion of a specific population tends to account for a large portion of some other resource. As time has gone on we’ve found evidence for Pareto’s discovery in more and more systems: Just a few scientific papers grab most of the citations, a small portion of a company’s customers tends to be responsible for a large percentage of its profits, a tiny number of users tends to make up the vast majority of the customer service requests, and a “vital few” factory defects account for the bulk of the production issues.
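One way to see this in your own data is to sort contributions from largest to smallest and ask how small a slice of the population covers 80 percent of the total. A quick sketch in Python (the revenue numbers are invented):

    def pareto_split(values, share=0.8):
        """Return the fraction of items needed to cover `share` of the total."""
        ordered = sorted(values, reverse=True)
        total, running = sum(ordered), 0.0
        for count, value in enumerate(ordered, start=1):
            running += value
            if running >= share * total:
                return count / len(ordered)
        return 1.0

    # Hypothetical revenue per customer
    revenue = [5000, 3200, 900, 400, 300, 250, 200, 150, 100, 100]
    print(f"{pareto_split(revenue):.0%} of customers drive 80% of revenue")  # 30%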

It’s that last one about factories that we have to thank for the popularity of the Pareto principle. Quality control pioneer (and catchy name-coiner) Joseph Juran explains:

It was during the late 1940s, when I was preparing the manuscript for Quality Control Handbook, First Edition, that I was faced squarely with the need for giving a short name to the universal. In the resulting write-up under the heading “Maldistribution of Quality Losses,” I listed numerous instances of such maldistribution as a basis for generalization. I also noted that Pareto had found wealth to be maldistributed. In addition, I showed examples of the now familiar cumulative curves, one for maldistribution of wealth and the other for maldistribution of quality losses. The caption under these curves reads “Pareto’s principle of unequal distribution applied to distribution of wealth and to distribution of quality losses.”

Juran went on to become an important management thinker and the Pareto principle spread through industry and the broader world.3 At this point the 80/20 rule has become a basic and helpful mental model that many managers understand.

But we still haven’t answered Pareto’s original question: What is it about human nature that causes this massive imbalance to continually emerge in such a variety of systems? To answer that we turn to Albert-László Barabási and his study of networks. As the web was emerging, Barabási and his colleagues were busy analyzing the new and rich datasets it generated. Every time they dug in, the same odd pattern emerged.

In one of their studies, the team set up a crawler to look at how different web pages linked to each other. Expecting to see a bell curve, they instead spotted something very different: “the network our robot brought back from its journey had many nodes with a few links only, and a few hubs with an extraordinarily large number of links.” Barabási continues, “The biggest surprise came when we tried to fit the histogram of the node connectivity on a so-called log-log plot. The fit told us that the distribution of links on various Webpages precisely follows a mathematical expression called a power law.”

What made this discovery so important was that power laws are a signal that you’re not working with random data. If you chart random (or more precisely disconnected) data points, like the heights of people in your town or the scores of students on a test, you see a bell curve distribution. However, if you chart non-random interdependent data points you get the power curve that Barabási kept seeing:

Power laws rarely emerge in systems completely dominated by a roll of the dice. Physicists have learned that most often they signal a transition from disorder to order. Thus the power laws we spotted on the Web indicated, for the first time in precise mathematical terms, that real networks are far from random. Complex networks finally started to speak to us in a language that scientists trained in self-organization and complexity could finally understand. They spoke of order and emerging behavior. We just needed to listen carefully.
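Here’s a minimal sketch of the log-log check Barabási describes: data drawn from a power law falls on a straight line in log-log space, and the slope recovers the exponent. The data below is synthetic, and fitting a line to a binned histogram is a crude estimator compared to the maximum-likelihood methods researchers actually use.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "links per page" data following p(k) ~ k^-2.5,
    # sampled by inverse-transform from a Pareto distribution (k >= 1).
    alpha = 2.5
    k = (1 - rng.random(100_000)) ** (-1 / (alpha - 1))

    # Histogram with logarithmic bins, then fit a line in log-log space.
    bins = np.logspace(0, np.log10(k.max()), 30)
    density, edges = np.histogram(k, bins=bins, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers
    nonzero = density > 0
    slope, _ = np.polyfit(np.log10(centers[nonzero]), np.log10(density[nonzero]), 1)

    print(f"fitted exponent: {-slope:.2f} (true value: {alpha})")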

So we come full-circle back to Pareto, who once explained that, “The molecules in the social system are interdependent in space and in time. Their interdependence in space becomes apparent in the mutual relations that subsist between social phenomena.” The 80/20 rule is present in systems where there are self-organizing interdependent parts, and it’s subject to the same cumulative advantage mechanics we saw with popular music. That’s why the pattern emerges so often in companies and markets: It means a huge number of forces are pushing and, critically, reacting to each other at the same time.

As should be reasonably obvious, the 80/20 rule has a number of important effects and implications for everyday business and life (many of which will come up in other models). First, understanding when you’re working in a system susceptible to the Pareto principle is critical. Once understood, being able to accurately isolate the 20 percent and find ways to make it less interdependent can fundamentally alter the balance of the equation. One of the simplest conclusions to be drawn from the 80/20 rule is that sometimes you need to fire a customer or an employee who is responsible for eating up the majority of your resources, as painful as that choice may be.

Footnotes

  1. I had a shockingly difficult time finding translations of Pareto’s work. This seems to have to do with a few different things. One (and this is purely speculation), I wonder if his decision to focus more attention on sociology hurt his economics credentials. Second, and this seems much more established, the fact that he was recognized by the Italian fascists before he died seems to have sullied his reputation and potentially slowed down the translation of his work.
  2. As an aside, this seems to be a big part of why he went into sociology. As he discovered the 80/20 rule he wondered what it was about human nature that makes this happen. His work in sociology seems, at least from the reading I did, like an attempt to answer that question in one way or another. Now I’m definitely no Pareto expert and this might be a vast overread.
  3. Interestingly, Juran also recognized that the Pareto principle wasn’t well named: “Although the accompanying text makes clear that Pareto’s contributions specialized in the study of wealth, the caption implies that he had generalized the principle of unequal distribution into a universal. This implication is erroneous. The Pareto principle as a universal was not original with Pareto.”

Bibliography

  • Alexander, James. “Vilfredo Pareto: Sociologist and Philosopher.” Ihr.org. n.d. Web. 17 Dec. 2017. <http://www.ihr.org/jhr/v14/v14n5p10_Alexander.html>
  • Aspers, Patrik. “Crossing the boundary of economics and sociology: The case of Vilfredo Pareto.” American Journal of Economics and Sociology 60.2 (2001): 519-545.
  • Bunkley, Nick. “Joseph Juran, 103, Pioneer in Quality Control, Dies.” Nytimes.com. 3 Mar. 2008. Web. 17 Dec. 2017. <https://www.nytimes.com/2008/03/03/business/03juran.html>
  • Chipman, John S. “Pareto: Manuel of Political Economy.” English translation, available at http://www.econ.umn.edu/~jchipman/DALLOZ5.pdf, of “Pareto: Manuel d’Économie Politique” in Dictionnaire des grandes oeuvres d’économie, X. Greffe, J. Lallemant and M. De Vroey (eds), Paris: Dalloz (2002): 424-433.
  • Cirillo, Renato. “Was Vilfredo Pareto Really a ‘Precursor’ of Fascism?” American Journal of Economics and Sociology 42.2 (1983): 235-246.
  • Crawford, Walt. “Exceptional institutions: libraries and the Pareto principle.” American Libraries 32.6 (2001): 72-74.
  • Edgeworth, F. Y., and Vilfredo Pareto. “Controversy Between Pareto and Edgeworth.” Giornale degli Economisti e Annali di Economia 67.3 (2008): 425-440.
  • Hazlitt, Henry. “Pareto’s Picture of Society: His Monumental Work Covers an Enormous Field of Knowledge.” New York Times (May 26, 1935).
  • Juran, Joseph M. “Pareto, Lorenz, Cournot, Bernoulli, Juran and Others.” (1950).
  • Juran, Joseph, and A. Blanton Godfrey. “Quality handbook.” Republished McGraw-Hill (1999).
  • Juran, Joseph M. “The non-Pareto principle; mea culpa.” Quality Progress 8.5 (1975): 8-9.
  • Juran, Joseph M. “Universals in management planning and controlling.” Management Review 43.11 (1954): 748-761.
  • Koch, Richard. The 80/20 principle: the secret to achieving more with less. Crown Business, 2011.
  • Lopreato, Joseph. “Notes on the work of Vilfredo Pareto.” Social Science Quarterly (1973): 451-468.
  • Mandelbrot, Benoit, and Richard L. Hudson. The Misbehavior of Markets: A fractal view of financial turbulence. Basic books, 2007.
  • Moore, H. L. “Cours d’Économie Politique. By VILFREDO PARETO, Professeur à l’Université de Lausanne. Vol. I. Pp. 430. 1896. Vol. II. Pp. 426. 1897. Lausanne: F. Rouge.” The ANNALS of the American Academy of Political and Social Science 9.3 (1897): 128-131.
  • Pareto, Vilfredo. “Supplement to the Study of the Income Curve.” Giornale degli Economisti e Annali di Economia 67.3 (2008): 441-451.
  • Pareto, Vilfredo. “The Curve of the Distribution of Wealth.” History of Economic Ideas 17.1 (2009): 132-143.
  • Pareto, Vilfredo. The mind and society: Trattato di sociologia generale. AMS Press, 1935.
  • Tarascio, Vincent J. “The Pareto law of income distribution.” Social Science Quarterly (1973): 525-533.

October 1, 2018

Parable of Two Watchmakers [Framework of the Day]

Another framework of the day. If you haven’t read the others, the links are all at the bottom. I’m working on a book of mental models and sharing some of the research and writing as I go. This post actually started in writing about Conway’s Law, which is coming soon. I felt like I had to get this out first, as I would need to rely on some of the research in giving the Law its due. Thanks for reading and please let me know what you think, pass this link on, and subscribe to the email if you haven’t done it already. Thanks for reading.

This framework is a little different from the ones before, as it doesn’t come with a nice diagram or four box. Rather, the Parable of Two Watchmakers is just that: A story about two people putting together complicated mechanical objects. The parable comes from a paper called “The Architecture of Complexity” written by Nobel Prize-winning economist Herbert Simon (you might remember Simon from the theory of satisficing). Beyond being a brilliant economist, Simon was also a major thinker in the worlds of political science, psychology, systems, complexity, and artificial intelligence (in doing this research he climbed up the ranks of my intellectual heroes).

In his 1962 paper he laid out an argument for how complexity emerges, one largely focused on the central role of hierarchy in complex systems. To start, let’s define hierarchy so we’re all on the same page. Here’s Simon:

Etymologically, the word “hierarchy” has had a narrower meaning than I am giving it here. The term has generally been used to refer to a complex system in which each of the subsystems is subordinated by an authority relation to the system it belongs to. More exactly, in a hierarchic formal organization, each system consists of a “boss” and a set of subordinate subsystems. Each of the subsystems has a “boss” who is the immediate subordinate of the boss of the system. We shall want to consider systems in which the relations among subsystems are more complex than in the formal organizational hierarchy just described. We shall want to include systems in which there is no relation of subordination among subsystems. (In fact, even in human organizations, the formal hierarchy exists only on paper; the real flesh-and-blood organization has many inter-part relations other than the lines of formal authority.) For lack of a better term, I shall use hierarchy in the broader sense introduced in the previous paragraphs, to refer to all complex systems analyzable into successive sets of subsystems, and speak of “formal hierarchy” when I want to refer to the more specialized concept.

So it’s more or less the way we think of it, except he is drawing a distinction between the formal hierarchy we see in an org chart, where each subordinate has just one boss, and the informal hierarchy that actually exists inside organizations, where subordinates interact in a variety of ways. And he points out the many complex systems in which we find hierarchy, including biological ones: “The hierarchical structure of biological systems is a familiar fact. Taking the cell as the building block, we find cells organized into tissues, tissues into organs, organs into systems. Moving downward from the cell, well-defined subsystems — for example, nucleus, cell membrane, microsomes, mitochondria, and so on — have been identified in animal cells.”

The question is why all these systems came to be arranged this way and what we can learn from them. Here Simon turns to a story:

Let me introduce the topic of evolution with a parable. There once were two watchmakers, named Hora and Tempus, who manufactured very fine watches. Both of them were highly regarded, and the phones in their workshops rang frequently — new customers were constantly calling them. However, Hora prospered, while Tempus became poorer and poorer and finally lost his shop. What was the reason?

The watches the men made consisted of about 1,000 parts each. Tempus had so constructed his that if he had one partly assembled and had to put it down — to answer the phone say— it immediately fell to pieces and had to be reassembled from the elements. The better the customers liked his watches, the more they phoned him, the more difficult it became for him to find enough uninterrupted time to finish a watch.

The watches that Hora made were no less complex than those of Tempus. But he had designed them so that he could put together subassemblies of about ten elements each. Ten of these subassemblies, again, could be put together into a larger subassembly; and a system of ten of the latter subassemblies constituted the whole watch. Hence, when Hora had to put down a partly assembled watch in order to answer the phone, he lost only a small part of his work, and he assembled his watches in only a fraction of the man-hours it took Tempus.
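Simon backs the parable with arithmetic: if every part added carries some small chance of an interruption that scraps the current assembly, the flat design becomes catastrophically slower. Here’s a quick simulation of that logic (p = 0.01 is the figure Simon uses; the simulation itself is just a sketch of his arithmetic, and any single run is noisy):

    import random

    def steps_to_finish(units, size, p, rng):
        """Total steps to complete `units` assemblies of `size` parts each,
        where every step risks an interruption (probability p) that makes
        the current, unfinished assembly fall to pieces."""
        steps = 0
        for _ in range(units):
            assembled = 0
            while assembled < size:
                steps += 1
                if rng.random() < p:
                    assembled = 0  # the phone rings; this assembly is lost
                else:
                    assembled += 1
        return steps

    rng = random.Random(42)
    p = 0.01  # chance of a phone call per part added

    # Tempus: one watch is a single flat assembly of 1,000 parts.
    tempus = steps_to_finish(units=1, size=1000, p=p, rng=rng)

    # Hora: 100 subassemblies of 10, ten of those, then one final = 111 units.
    hora = steps_to_finish(units=111, size=10, p=p, rng=rng)

    print(f"Tempus: {tempus:,} steps; Hora: {hora:,} steps")

Run it and Hora finishes in a bit over a thousand steps while Tempus needs orders of magnitude more; by Simon’s own arithmetic the gap works out to thousands of times longer, because each interruption costs Tempus an average of a hundred parts while Hora loses at most a handful.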

Whether the complexity emerges from the hierarchy or the hierarchy from the complexity, he illustrates clearly why we see this pattern all around us and articulates the value of the approach. It’s not just hierarchy, he goes on to explain, but also modularity (which he refers to as near-decomposability) that appears to be a fundamental property of complex systems. That is, each of the subsystems operates both independently and as part of the whole. As Simon puts it, “Intra-component linkages are generally stronger than intercomponent linkages” or, even more simply, “In a formal organization there will generally be more interaction, on the average, between two employees who are members of the same department than between two employees from different departments.”

Why is that? Well, for one, it’s an efficiency thing. Just as we see inside organizations, we want to use specialized resources in a specialized way. But beyond that, as Simon outlines in the parable, it’s also about resiliency: By relying on subsystems you have a defense against catastrophic failure when one piece of the whole breaks down. Just as Hora was able to quickly start building again when he put something down, any system made up of subsystems should be much more capable of dealing with changes in environment. It works in organisms, companies, and even empires, as Simon pointed out in The Sciences of the Artificial:

We have not exhausted the categories of complex systems to which the watchmaker argument can reasonably be applied. Philip assembled his Macedonian empire and gave it to his son, to be later combined with the Persian subassembly and others into Alexander’s greater system. On Alexander’s death his empire did not crumble to dust but fragmented into some of the major subsystems that had composed it.

Hopefully the application of this framework is pretty clear (and also instructive) in every day business life. Interestingly, Simon’s theories were the ultimate inspiration for a management fad we saw burn bright (and flame out) just a few years ago: Holacracy, the fluid organizational structure made up of self-organizing teams. Invented by Brian Robertson and made famous by Tony Hsieh and Zappos, the method (it’s a registered trademark) is based on ideas about “holons” from Hungarian author and journalist Arthur Koestler. In his 1967 book The Ghost in the Machine, Koestler repeats Simon’s story of Tempus and Hora and then goes on to theorize that holons (a name he coined “from the Greek holos—whole, with the suffix on (cf. neutron, proton) suggesting a particle or part”) are “meant to supply the missing link between atomism and holism, and to supplant the dualistic way of thinking in terms of ‘parts’ and ‘wholes,’ which is so deeply engrained in our mental habits, by a multi-levelled, stratified approach. A hierarchically-organized whole cannot be “reduced” to its elementary parts; but it can be ‘dissected’ into its constituent branches of holons, represented by the nodes of the tree-diagram, while the lines connecting the holons stand for channels of communication, control or transportation, as the case may be.”

Holacracy aside, there’s a ton of goodness in the parable and the architecture of modularity that it posits as critical. It’s not an accident that every company is built this way, and as we think about those companies designing systems, it’s also not surprising that many of those systems follow suit (a good lead-in for Conway’s Law, which is up next). Although I’m pretty out of words at this point, Simon also applies the same hierarchy/modularity concept to problem solving, and there’s a pretty good argument to be made that the “latticework of models” Charlie Munger described in his 1994 USC Business School Commencement Address would fit the framework.

Bibliography:

  • Egidi, Massimo, and Luigi Marengo. “Cognition, institutions, near decomposability: rethinking Herbert Simon’s contribution.” (2002).
  • Egidi, Massimo. “Organizational learning, problem solving and the division of labour.” Economics, bounded rationality and the cognitive revolution. Aldershot: Edward Elgar (1992): 148-73.
  • Koestler, Arthur, and John R. Smythies. Beyond Reductionism, New Perspectives in the Life Sciences [Proceedings of] the Alpbach Symposium [1968]. (1972).
  • Koestler, Arthur. “The ghost in the machine.” (1967).
  • Radner, Roy. “Hierarchy: The economics of managing.” Journal of economic literature 30.3 (1992): 1382-1415.
  • Simon, Herbert A. “Near decomposability and the speed of evolution.” Industrial and corporate change 11.3 (2002): 587-599.
  • Simon, Herbert A. “The Architecture of Complexity.” Proceedings of the American Philosophical Society 106.6 (1962): 467-482.
  • Simon, Herbert A. “The science of design: Creating the artificial.” Design Issues (1988): 67-82.
  • Simon, Herbert A. The sciences of the artificial. MIT press, 1996.

September 18, 2018

Known Unknowns [Framework of the Day]

As some of you may know I’ve been collecting mental models and working on a book for a little while now (it’s been going pretty slow since my daughter was born in January). This is more notes than chapter, but I still thought it was worth sharing. If you like this I’m happy to do more in the future (I wrote about the pace layers framework in my last post). Oh, and if you haven’t already, sign up to get my new blog posts by email, it’s the best way to keep up.

By all accounts Donald Rumsfeld was a man who didn’t suffer from a shortage of self-confidence. Whether it was Meet the Press, Errol Morris’s documentary The Unknown Known (it’s also worth reading the four-part series Morris wrote on Rumsfeld and the documentary for the New York Times), or a grilling from Jon Stewart on The Daily Show, he always seemed supremely satisfied with his own certainty. Which must have made the public response to what’s become his most famous comment all the more vexing. At a Department of Defense briefing in February 2002, then Secretary of Defense Rumsfeld was asked about evidence to support claims of Iraq helping to supply terrorist organizations with weapons of mass destruction. “Because,” the questioner explained, “there are reports that there is no evidence of a direct link between Baghdad and some of these terrorist organizations.”

Rumsfeld famously replied:

Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.

While it’s a mouthful and the context shouldn’t be lost, there’s a useful framework buried in Rumsfeld’s dodge. It looks something like this:
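    |                   | Things we know | Things we don’t know |
    | We’re aware of it | Known knowns   | Known unknowns       |
    | We’re unaware     | Unknown knowns | Unknown unknowns     |

(The bottom-left quadrant, the unknown knowns, the things we don’t realize we know, is the one Errol Morris later seized on for the title of his documentary.)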

(Give the whole article from the Project Management Institute on how to apply known unknowns to project management a read.)

Rumsfeld went on to title his memoir Known and Unknown, and explained his perspective on its meaning early in the book:

At first glance, the logic may seem obscure. But behind the enigmatic language is a simple truth about knowledge: There are many things of which we are completely unaware—in fact, there are things of which we are so unaware, we don’t even know we are unaware of them. Known knowns are facts, rules, and laws that we know with certainty. We know, for example, that gravity is what makes an object fall to the ground. Known unknowns are gaps in our knowledge, but they are gaps that we know exist. We know, for example, that we don’t know the exact extent of Iran’s nuclear weapons program. If we ask the right questions we can potentially fill this gap in our knowledge, eventually making it a known known. The category of unknown unknowns is the most difficult to grasp. They are gaps in our knowledge, but gaps that we don’t know exist. Genuine surprises tend to arise out of this category. Nineteen hijackers using commercial airliners as guided missiles to incinerate three thousand men, women, and children was perhaps the most horrific single unknown unknown America has experienced.

Rumsfeld was obsessed with Pearl Harbor. In his memoir he quotes a foreword written by game theorist/nuclear strategist Thomas Schelling that introduced a book about the attack by Roberta Wohlstetter. Schelling wrote (emphasis mine):

If we think of the entire U.S. government and its far-flung military and diplomatic establishment, it is not true that we were caught napping at the time of Pearl Harbor. Rarely has a government been more expectant. We just expected wrong. And it was not our warning that was most at fault, but our strategic analysis. We were so busy thinking through some “obvious” Japanese moves that we neglected to hedge against the choice that they actually made.

And it was an “improbable” choice; had we escaped surprise, we might still have been mildly astonished. (Had we not provided the target, though, the attack would have been called off.) But it was not all that improbable. If Pearl Harbor was a long shot for the Japanese, so was war with the United States; assuming the decision on war, the attack hardly appears reckless. There is a tendency in our planning to confuse the unfamiliar with the improbable. The contingency we have not considered seriously looks strange; what looks strange is thought improbable; what is improbable need not be considered seriously.

In other words, unknown unknowns.

Outside of politics, the framework is a useful way to categorize risk/uncertainty in life or business. I got interested and dug around a bit to find the historical context for the idea, which led me in a few different directions.

Rumsfeld credits William R. Graham at NASA with first introducing him to the concept in the late-90s, though it turns out to go back a lot further than that. The oldest reference I could find comes from a 1968 issue of the Armed Forces Journal International, in an article about the procurement of new weapons titled “The ‘Known Unknowns’ And The ‘Unknown Unknowns’.” The article opens like this:

Cheyenne was the first major Army weapon to be developed under DoD’s sometimes controversial contract definition procedures. General Bunker put the process in perspective by pointing out that no procedural system can entirely eliminate “surprises” from happening during development of a complex weapons system, and that contract definition wasn’t expected to. “But,” he pointed out, “there are two kinds of technical problems: there are the known unknowns, and the unknown unknowns. Contract definition has helped eliminate the known unknowns. It cannot eliminate completely potential cost overruns, because these are due largely to the unknown unknowns.”

The term pops up throughout the 70s in relation to military procurement. Sometime in there some folks also started using the term “unk-unks” to refer to the most dangerous of the four boxes. Here it is in context, from a 1982 New Yorker piece on the airplane industry:

The excitement of this business lies in the sweep of the uncertainties. Matters as basic as the cost of the product — the airplane — and its break-even point are obscure because so much else is uncertain or unclear. The fragility of the airline industry does, of course, create uncertainties about the size and the reliability of the market for a new airplane or a new variant of an existing airplane. Then, there is a wide range of unknowns, for which an arbitrarily fixed amount of money must be set aside in the development budget. Some of these are so-called known unknowns; others are thought of as unknown unknowns and are called “unk-unks.” The assumption is that normal improvements in an airplane program or an engine program will create problems of a familiar kind that add to the costs; these are the known unknowns. The term “unk-unks” is used to cover less predictable contingencies; the assumption is that any new airplane or engine intended to advance the state of the art will harbor surprises in the form of problems that are wholly unforeseen, and perhaps even novel, and these must be taken account of in the budget.

Some are even trying to use it as a kind of code word for breakthrough innovations.

Finally, although it’s not clear they’re connected, there’s a very similar framework from psychologists Joseph Luft and Harrington Ingham from 1955 called the Johari Window. The model attempts to visualize the effects of our knowledge of self and how that works in relation to the knowledge of others:

Quadrant I, the area of free activity, refers to behavior and motivation known to self and known to others.
Quadrant II, the blind area, where others can see things in ourselves of which we are unaware.
Quadrant III, the avoided or hidden area, represents things we know but do not reveal to others (e.g., a hidden agenda or matters about which we have sensitive feelings).
Quadrant IV, the area of unknown activity. Neither the individual nor others are aware of certain behaviors or motives: Yet we can assume their existence because eventually some of these things become known, and it turns out these motives were influencing relationships all along.

Despite the context for the original quote, the idea is a useful way to think about strategy and understand the various risks you might face.

August 23, 2018

Pace Layers [Framework of the Day]

As some of you may know I’ve been collecting mental models and working on a book for a little while now (it’s been going pretty slow since my daughter was born in January). This is more notes than chapter, but I still thought it was worth sharing. If you like this I’m happy to do more in the future. Oh, and if you haven’t already, sign up to get my new blog posts by email, it’s the best way to keep up.

Pace Layers

This one comes from Stewart Brand and is a way to explain the different speeds at which the various layers of society move. The outer layer, fashion, is the quickest, while the innermost layer, nature, moves most slowly. Each layer interacts with the others as inventions and ideas get digested. As Brand explains:

The job of fashion and art is to be froth—quick, irrelevant, engaging, self-preoccupied, and cruel.  Try this!  No, no, try this!  It is culture cut free to experiment as creatively and irresponsibly as the society can bear.  From all that variety comes driving energy for commerce (the annual model change in automobiles) and the occasional good idea or practice that sifts down to improve deeper levels, such as governance becoming responsive to opinion polls, or culture gradually accepting “multiculturalism” as structure instead of just entertainment.

Brand’s inspiration for the framework came from an architect named Frank Duffy who encouraged builders not to think of a building as a single entity, but as a set of layers operating at different timescales. Duffy included four timescales: Shell, services, scenery, and sets (represented below).

Brand picked up on Duffy’s work and adapted it to a kind of proto-pace layer framework in his 1994 book How Buildings Learn: What Happens After They’re Built, expanding it to six S’s and including this handy diagram:

Brand eventually adapted that into the pace layer framework at the top in his 2008 book The Clock of the Long Now: Time and Responsibility (the chapter on pace layers was edited and republished last year in MIT’s Journal of Design and Science). If you want more, here’s a great writeup from Eric Nehrlich on a conversation about pace layers between Brand and Paul Saffo. Nehrlich calls out this slide from the presentation, which is quite helpful for understanding how the layers work:

(The whole talk is posted at the Long Now Blog if you’re so inclined.)

The framework has been picked up and adapted by many, but one of the more notable versions for me comes from Gartner as a way to think about your enterprise software strategy. They break enterprise software into three “layers”:

– Systems of Record — Established packaged applications or legacy homegrown systems that support core transaction processing and manage the organization’s critical master data. The rate of change is low, because the processes are well-established and common to most organizations, and often are subject to regulatory requirements.
– Systems of Differentiation — Applications that enable unique company processes or industry-specific capabilities. They have a medium life cycle (one to three years), but need to be reconfigured frequently to accommodate changing business practices or customer requirements.
– Systems of Innovation — New applications that are built on an ad hoc basis to address new business requirements or opportunities. These are typically short life cycle projects (zero to 12 months) using departmental or outside resources and consumer-grade technologies.

Each layer has its own pace of change, lifetime, planning horizon, governance model, and many other unique differentiators.

All in all, the overarching shearing/pace layers framework (many layers which interact with each other and operate at different speeds) is something I’ve found useful in various spheres in addition to the society, architecture, and enterprise software examples above. Inside a company, for instance, you conduct various activities that exist in a similar set of layers ranging from long-term planning and brand building to quarterly goals or roadmaps to two week sprints to weekly exec meetings and then the daily work. It’s a useful way to spot where you’re overloaded with meetings (too many weekly check-ins, not enough monthly lookbacks) or understand where you’re falling down (not doing a good enough job translating the medium term to the long term).

Bibliography

  • Brand, Stewart. The clock of the long now: Time and responsibility. Basic Books, 2008.
  • Brand, Stewart. How buildings learn: What happens after they’re built. Penguin, 1995.
  • Brand, S. (2018). Pace Layering: How Complex Systems Learn and Keep Learning. Journal of Design and Science. https://doi.org/10.21428/7f2e5f08
  • Duffy, Francis. “Measuring building performance.” Facilities 8.5 (1990): 17-20.
  • Gartner.com. (2012). Gartner Says Adopting a Pace-Layered Application Strategy Can Accelerate Innovation. [online] Available at: https://www.gartner.com/newsroom/id/1923014 [Accessed 28 Sep. 2018].
  • Mesaglio, Mary & Matthew Hotle. “Pace-Layered Application Strategy and IT Organizational Design: How to Structure the Application Team for Success.” Gartner, 2016.
  • Nehrlich, E. (2015). Stewart Brand and Paul Saffo at the Interval. [online] Nehrlich.com. Available at: http://www.nehrlich.com/blog/2015/02/11/stewart-brand-and-paul-saffo-at-the-interval/ [Accessed 28 Sep. 2018].

August 21, 2018