Welcome to the home of Noah Brier. I'm the co-founder of Variance and a general internet tinkerer. Most of my writing these days is happening over at Why is this interesting?, a daily email full of interesting stuff. This site has been around since 2004. Good places to get started are my Framework of the Day posts or my favorite books and podcasts. Feel free to get in touch.

You can subscribe to this site via RSS (the humanity!) or by email.

Remainders: From Kanye to El Paquete

Quick update before jumping in. I was in Missoula, Montana for the 11th year in a row last week to do some fishing and teaching at the University of Montana. If you find yourself in Montana and are looking for a fly fishing guide I can’t recommend Chris Stroup and his Montana Cutthroat Guide Services enough. My friend Nick and I spent two days on the river with Chris and, once again, he put us on fish with nearly every cast. On the reading side, I finished up The Master Algorithm (fascinating content, dense book) and am on to Play Bigger, about category building in marketing. In between I also read the short book Probability: A Very Short Introduction (if you’re not familiar with the Very Short Introduction series, The New Yorker had a good piece on it). Travel-wise, I’m in NYC for two whole weeks before I have to get on another airplane. Happy Mother’s Day to all the moms out there, especially my amazing wife Leila and my mom Barbara, who are hopefully both reading this.

If you don’t know the drill, this is everything I’ve read and found interesting over the last week (in this case two). Previous editions can be found filed under remainders and you can subscribe by email to all my posts. Now onto the links.

My writing this week: A post about information theory and a piece over at the Percolate blog about content bottlenecks.

Any time Ta-Nehisi Coates writes something it’s worth reading. Here he is on Kanye:

West calls his struggle the right to be a “free thinker,” and he is, indeed, championing a kind of freedom—a white freedom, freedom without consequence, freedom without criticism, freedom to be proud and ignorant; freedom to profit off a people in one moment and abandon them in the next; a Stand Your Ground freedom, freedom without responsibility, without hard memory; a Monticello without slavery, a Confederate freedom, the freedom of John C. Calhoun, not the freedom of Harriet Tubman, which calls you to risk your own; not the freedom of Nat Turner, which calls you to give even more, but a conqueror’s freedom, freedom of the strong built on antipathy or indifference to the weak, the freedom of rape buttons, pussy grabbers, and fuck you anyway, bitch; freedom of oil and invisible wars, the freedom of suburbs drawn with red lines, the white freedom of Calabasas.

Everything you wanted to know about why the US chills its eggs and most of the rest of the world doesn’t. Turns out it’s because we choose to wash the gunk (aka chicken poo) off our eggs. “Soon after eggs pop out of the chicken, American producers put them straight to a machine that shampoos them with soap and hot water. The steamy shower leaves the shells squeaky clean. But it also compromises them, by washing away a barely visible sheen that naturally envelops each egg.”

This hits close to home: Your coffee addiction, by decade. “‘No sugar,’ you declare. ‘I take it black.’ Shoot a side-eyed glance at that kid over there with his blended-ice drink—amateur hour. Sorry they don’t serve Shirley Temples, geez.”

A theory about North Korea and why it won’t give up its nukes that I’ve seen a few times, this one from Nicholas Kristof: “On my last visit to North Korea, in September, a Foreign Ministry official told me that Libya had given up its nuclear program — only to have its regime toppled. Likewise, he noted, Saddam Hussein’s Iraq lacked a nuclear deterrent — so Saddam was ousted by America. North Korea would not make the same mistake, he insisted.”

Every time I watched Ben Simmons in the playoffs I was reminded of this excellent SB Nation video about how Giannis Antetokounmpo dominates without being able to shoot. And while we’re on the NBA, the league has partnered with the video game 2K to create an eSports league and Zach Lowe got an exclusive to review the court designs.

If you have a baby and have practiced “The 5 S’s” you’ll appreciate this New York Times Mag profile of Dr. Harvey Karp.

On the podcast front, I’ve been enjoying Real Famous, which features interviews with ad people (many of whom are my friends). Paul Feldwick, author of the awesome book Anatomy of a Humbug, is an excellent listen.

I was reminded of this Atlantic article from last year on the intellectual history of computing.

An argument against multi-tasking:

Multitasking, in short, is not only not thinking, it impairs your ability to think. Thinking means concentrating on one thing long enough to develop an idea about it. Not learning other people’s ideas, or memorizing a body of information, however much those may sometimes be useful. Developing your own ideas. In short, thinking for yourself. You simply cannot do that in bursts of 20 seconds at a time, constantly interrupted by Facebook messages or Twitter tweets, or fiddling with your iPod, or watching something on YouTube.

I discovered Andrew McLuhan (Marshall’s grandson) on Medium. He’s got some good stuff (plus it makes me feel slightly better about my own struggles to understand McLuhan that his own grandson is still working through it). Here are two of his pieces: “This post is a juicy piece of meat.” and Configuring Ground (for kids!).

Read a bunch of stuff about incels after the Robin Hanson article. This n+1 piece is the best of the bunch. It spends a lot of time talking about Elliot Rodger, who was responsible for a series of killings near University of California’s Santa Barbara campus in 2014 and has since become a kind of saint to the incels (which, in case you haven’t read about them before, is a group of young men who consider themselves “involuntarily celibate” and blame women and society for that fact). Here’s one of many strong paragraphs:

You could say the trouble for Rodger started when, around puberty, he began to know—and, in writing, recite—the first and last names of every boy he considered a sexual competitor, while at the same time referring to girls almost always collectively. Girls. Pretty girls. Pretty blond girls. Only three girls (or perhaps, by this time, women) are listed by name in My Twisted World, vis-a-vis dozens of boys (I’m not including family members). By the end of his writing and life, he’s failed to distinguish between any groups of humans at all, to the point where he considers his 6-year-old brother yet another budding Romeo who, because “he will grow up enjoying the life [Rodger has] craved for,” must die. “Girls will love him,” Rodger says. “He will become one of my enemies.” Rodger begs our most individuating question—“why don’t you love me?”—by proving himself repeatedly unable to individuate another. In erotic coupling, the ego finds relief in its equal. But had Elliot Rodger ever found his equal and opposite in another human being, he would, by all indications, have been repulsed. Reading him, I kept remembering Rooney Mara’s kiss-off in The Social Network: “You are going to go through life thinking that girls don’t like you because you’re a nerd.1 [Or short. Or half-Asian. Or bad at football, or not a real ladies’ man, or somehow else disappointing to the ur-dads of America.] And I want you to know, from the bottom of my heart, that isn’t true. It’ll be because you’re an asshole.”

I re-read this excellent piece on “El Paquete,” the peer-to-peer media network that operates in Cuba. It originally came from the same friend Nick I fished with in Missoula.

He also turned me onto this Nautilus piece about learning math as an adult. This bit on chunking stood out:

Chunking was originally conceptualized in the groundbreaking work of Herbert Simon in his analysis of chess—chunks were envisioned as the varying neural counterparts of different chess patterns. Gradually, neuroscientists came to realize that experts such as chess grand masters are experts because they have stored thousands of chunks of knowledge about their area of expertise in their long-term memory. Chess masters, for example, can recall tens of thousands of different chess patterns. Whatever the discipline, experts can call up to consciousness one or several of these well-knit-together, chunked neural subroutines to analyze and react to a new learning situation. This level of true understanding, and ability to use that understanding in new situations, comes only with the kind of rigor and familiarity that repetition, memorization, and practice can foster.

I can’t get enough stories about people cheating the lottery. This one is from the New York Times Magazine. Earlier in the year Huffington Post published “The Lottery Hackers” if you’re into the genre. This nugget from the NYT Mag story about how the lottery generates a random number was pretty interesting:

The computer takes a reading from a Geiger counter that measures radiation in the surrounding air, specifically the radioactive isotope Americium-241. The reading is expressed as a long number of code; that number gives the generator its true randomness. The random number is called the seed, and the seed is plugged into the algorithm, a pseudorandom number generator called the Mersenne Twister. At the end, the computer spits out the winning lottery numbers.
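The pipeline the story describes, an external entropy reading seeding a Mersenne Twister, can be sketched in Python, whose `random` module happens to use the Mersenne Twister internally. To be clear, this is just an illustrative toy: the Geiger reading is a made-up stand-in, and real lottery systems are audited hardware, not a script.

```python
import random

def draw_lottery_numbers(geiger_reading: int, count: int = 6, top: int = 49) -> list[int]:
    """Seed a Mersenne Twister PRNG with a truly random reading,
    then draw `count` distinct numbers from 1..top."""
    rng = random.Random(geiger_reading)  # CPython's Random is a Mersenne Twister
    return sorted(rng.sample(range(1, top + 1), count))

# The same seed always yields the same draw; the true randomness lives
# entirely in the seed, exactly the property the story describes.
print(draw_lottery_numbers(geiger_reading=20180511))
```

The key design point is that the pseudorandom generator is fully deterministic; feeding it a physically random seed is what makes the output unpredictable.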

I don’t totally understand what this is, but it’s very cool.

Here’s James Gleick on quantum physics.

The New Yorker reviewed books about Hitler.

If you haven’t heard the Google Duplex calls, go have a listen. Some interesting comments from Twitter:

  • Jessi Hempel: “Reading about Google’s Duplex: Design is a series of choices, and creating voice tech designed to let humans trick other humans is a choice humans are making, not an inevitable consequence of technology’s evolution.”
  • Stewart Brand: “This sounds right. The synthetic voice of synthetic intelligence should sound synthetic. Successful spoofing of any kind destroys trust. When trust is gone, what remains becomes vicious fast.”

Before his iconic rainbow NYC subway ads, Dr. Zizmor wrote a terrible book about caring for your skin.

I never thought to look up what lorem ipsum meant, but my friend Tim did.

Last, but not least, a very good piece from n+1 on the relationship between TV & culture that takes a bunch of different turns. This bit on the Weinstein reporting was particularly interesting to me:

The New York Times’s Weinstein report was a believability project years in the making: it systematized abuse, turned it into a pattern your eye could follow. There were interviews, emails, audio recordings, legal documents; facts were double- and triple-checked. But its paradoxical consequence was to set the bar far too high for every subsequent story whose breaking it had made possible. What’s a little masturbation between friends when the king of Hollywood kingmakers had employed former agents of the Israel Defense Forces to silence his accusers? In one final act of gaslighting, Weinstein made all other abuse look not so bad and all other evidence look not so good.

That’s it for this week. As always, let me know if I missed anything and don’t forget to subscribe. Have a great weekend.

May 11, 2018

Information Transportation versus Transformation [Part 1]

Every year I take a trip out to Montana to teach at a weekend seminar series that’s part of the University of Montana’s Entertainment Management program. I’m 11 years in and I work hard to create original content each year. This time around I talked about mental models, theories of communications and information, and a bit about machine learning. I wanted to take some of the content I shared there and repurpose it. As always, you can subscribe by email here.

The article I’ve shared more than any other this year is this Aeon piece by Jimmy Soni and Rob Goodman about Claude Shannon, the father of information theory. I knew basically nothing about information theory before reading this and have since consumed just about everything I could find on the topic. I wanted to talk a bit about why information theory fascinated me and also tie it to my broader interest in communications studies generally and McLuhan specifically.

Shannon and McLuhan were two of the most important thinkers of the 20th century. Without Shannon we’d have no computers and without McLuhan we wouldn’t examine the effects of media, communications, and technology on society with the urgency we do. With that said, they’re very different in their science and approach. Shannon was fundamentally a mathematician while McLuhan was a scholar of literature. In their work Shannon examined huge questions around how communications works technically, while McLuhan examined how it works tactically. When asked, McLuhan drew the distinction as questions of “transportation” versus “transformation”:

My kind of study of communication is really a study of transformation, whereas Information Theory and all the existing theories of communication I know of are theories of transportation… Information Theory … has nothing to do with the effects these forms have on you… So mine is a transformation theory: how people are changed by the instruments they employ.

I want to take some time to go through both, as they are fascinating in their own ways.

Transportation

Of course, information existed before Shannon, just as objects had inertia before Newton. But before Shannon, there was precious little sense of information as an idea, a measurable quantity, an object fitted out for hard science. Before Shannon, information was a telegram, a photograph, a paragraph, a song. After Shannon, information was entirely abstracted into bits.

“The bit bomb: It took a polymath to pin down the true nature of ‘information’. His answer was both a revelation and a return”

The intellectual leaps Shannon made in his paper “A Mathematical Theory of Communication” were miraculous. What started off as a question about how to reduce noise in the transmission of information turned into a complete theory of information that paved the way for the computing we all rely on. At the base of the whole thing is a recognition that information is probabilistic, which he explains in a kind of beautiful way. Here’s my best attempt to take you through his logic (with some extra explanation from me).

Let’s start by thinking about English for a second. If we wanted to create a list of random letters we could put the numbers 1-27 in a hat (the alphabet plus a space), pick out numbers one by one, and write down their letter equivalents. When Shannon did this he got:

XFOML RXKHRJFFJUJ ZLPWCFWKCYJ FFJEYVKCQSGHYD QPAAMKBZAACIBZLHJQD
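This “zero-order” experiment is easy to recreate. A minimal Python sketch (the seed and output length are my choices, not Shannon’s):

```python
import random
import string

ALPHABET = string.ascii_uppercase + " "  # 26 letters plus a space: 27 symbols

def zero_order(n: int, seed: int = 0) -> str:
    """Draw n symbols uniformly at random, like picking numbered tiles from a hat."""
    rng = random.Random(seed)
    return "".join(rng.choice(ALPHABET) for _ in range(n))

print(zero_order(60))
```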

But letters aren’t random at all. If you opened a book and counted all the letters, you wouldn’t find each of the 27 symbols occurring about 3.7% of the time. On the contrary, letters occur probabilistically: “e” occurs more often than “a,” and “a” occurs more often than “g,” which in turn occurs more often than “x.” Put it all together and it looks something like this:

[Chart: relative frequencies of English letters, from “e” on down.]
So now imagine we put all our letters (and a space) in a hat. But instead of one tile each, we have 100 total tiles in the hat and they align with the chart above: 13 tiles for “e”, 4 tiles for “d”, 1 tile for “v”. Here’s what Shannon got when he did this:

OCRO HLI RGWR NMIELWIS EU LL NBNESEBYA TH EEI ALHENHTTPA OOBTTVA NAH BRL

He called this “first-order approximation” and while it still doesn’t make much sense, it’s a lot less random than the first example.
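The first-order version just weights the draw by how often each letter actually occurs. A sketch using `random.choices`; the frequency table here is rough and illustrative, not Shannon’s exact one:

```python
import random

# Rough relative frequencies for common letters plus space (illustrative only).
FREQS = {" ": 18, "E": 10, "T": 7, "A": 6, "O": 6, "I": 6, "N": 6,
         "S": 5, "H": 5, "R": 5, "D": 3, "L": 3, "U": 2, "C": 2, "M": 2}

def first_order(n: int, seed: int = 0) -> str:
    """Sample n symbols independently, each weighted by its frequency."""
    rng = random.Random(seed)
    letters = list(FREQS)
    weights = list(FREQS.values())
    return "".join(rng.choices(letters, weights=weights, k=n))

print(first_order(60))
```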

What’s wrong with that last example is that letters don’t operate independently. Let’s play a game for a second. I’m going to say a letter and you guess the next one. If I say “T” the odds are most of you are going to say “H”. That makes lots of sense since “the” is the most popular word in the English language. So instead of just picking letters at random based on probability, what Shannon did next was pick one letter and then match it with its probabilistic pair. These are called bigrams and, just like we had letter frequencies, we can chart these out.

This time Shannon took a slightly different approach. Rather than loading up a bunch of bigrams in a hat and picking them out at random, he turned to a random page in a book and chose a random letter. He then turned to another random page in the same book, found the first occurrence of that letter, and recorded the letter immediately after it. What came out starts to look a lot more like English:

ON IE ANTSOUTINYS ARE T INCTORE ST BE S DEAMY ACHIN D ILONASIVE TUCOOWE AT TEASONARE FUSO TIZIN ANDY TOBE SEACE CTISBE
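Shannon’s book-page trick is equivalent to sampling from the text’s bigram statistics: given the current letter, pick the next one with the probability it follows the current one in the source. A minimal sketch over any sample text (the sample string is my stand-in for Shannon’s book):

```python
import random
from collections import defaultdict

def second_order(text: str, n: int, seed: int = 0) -> str:
    """Generate n characters where each is drawn from the letters that
    actually followed the previous character somewhere in `text`."""
    followers = defaultdict(list)
    for a, b in zip(text, text[1:]):
        followers[a].append(b)  # duplicates preserve the observed frequencies
    rng = random.Random(seed)
    out = [rng.choice(text)]
    for _ in range(n - 1):
        # Fall back to a uniform draw if the last character never had a follower.
        out.append(rng.choice(followers[out[-1]] or list(text)))
    return "".join(out)

sample = "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG AND THE CAT SAT ON THE MAT "
print(second_order(sample, 60))
```

Extending this to trigrams just means keying the follower table on the last two characters instead of one.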

Now I’m guessing you’re starting to see the pattern here. Next Shannon looked at trigrams, sets of three letters.

For his “third-order approximation” he once again used the book, but went three letters deep:

IN NO IST LAT WHEY CRATICT FROURE BIRS GROCID PONDENOME OF DEMONSTURES OF THE REPTAGIN IS REGOACTIONA OF CRE

He could have gone on and on, getting closer and closer to English. Instead he switched to words, which also occur probabilistically.

For his “first-order approximation” he picked random words from the book. It looks a lot like a sentence because words don’t occur randomly: there’s a good chance an “and” will come after any given word because “and” is likely the third most popular word in the book. Here’s what came out:

REPRESENTING AND SPEEDILY IS AN GOOD APT OR COME CAN DIFFERENT NATURAL HERE HE THE A IN CAME THE TO OF TO EXPERT GRAY COME TO FURNISHES THE LINE MESSAGE HAD BE THESE.

Second-order approximation works just like bigrams, but instead of letters it uses pairs of words.

THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED.

As Shannon put it, “The resemblance to ordinary English text increases quite noticeably at each of the above steps.”
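The word-level versions work exactly the same way, just with words as the units instead of letters. A sketch of the second-order (word-pair) approximation, using a made-up sample string in place of Shannon’s book:

```python
import random
from collections import defaultdict

def word_second_order(text: str, n_words: int, seed: int = 0) -> str:
    """Generate n_words words, each drawn from the words that followed
    the previous word somewhere in the source text."""
    words = text.split()
    followers = defaultdict(list)
    for a, b in zip(words, words[1:]):
        followers[a].append(b)
    rng = random.Random(seed)
    out = [rng.choice(words)]
    for _ in range(n_words - 1):
        out.append(rng.choice(followers[out[-1]] or words))
    return " ".join(out)

sample = ("the head and in frontal attack on an english writer that the "
          "character of this point is therefore another method for the letters")
print(word_second_order(sample, 12))
```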

While all that’s cool, much of it was pretty well known at the time. Shannon had worked on cryptography during World War II and used many of these ideas to encrypt/decrypt messages. Where the leap came was how he used this to think about the quantity of information any message contains. He basically realized that the first example, with 27 random symbols (A-Z plus a space), carried with it much more information than his second- or third-order approximations, where subsequent letters were chosen based on their probabilities. That’s because there are fewer “choices” to be made as we introduce bigrams and trigrams, and “choices,” or the lack thereof, are the essence of information.
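That intuition has a number attached to it: entropy, the average number of bits per symbol. A uniform draw over 27 symbols maximizes it at log2(27), about 4.75 bits; weighting by letter frequencies drops it, and bigrams and trigrams drop it further. A sketch of the first step, using rough illustrative frequencies rather than a real count:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [1 / 27] * 27
print(f"uniform 27-symbol alphabet: {entropy(uniform):.2f} bits/symbol")  # log2(27), about 4.75

# Rough English letter-plus-space frequencies (illustrative, not an exact table).
weights = [18, 10, 7, 6, 6, 6, 6, 5, 5, 5, 3, 3, 2, 2, 2, 2, 2, 1, 1, 1,
           1, 1, 1, 1, 1, 1, 1]
total = sum(weights)
english = [w / total for w in weights]
print(f"frequency-weighted alphabet: {entropy(english):.2f} bits/symbol")  # strictly lower
```

Fewer effective choices per symbol means lower entropy, which is exactly why the later approximations carry less information per letter.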

Khan Academy has a great video outlining how this works:

Here’s how MIT information theorist Robert Gallager explained the breakthrough:

Until then, communication wasn’t a unified science … There was one medium for voice transmission, another medium for radio, still others for data. Claude showed that all communication was fundamentally the same, and furthermore, that you could take any source and represent it by digital data.

But Shannon didn’t stop there: he went on to show that all language has redundancy and that redundancy can be used to fight noise. The whole thing is pretty mind-blowing and, like I said, underpins all modern computing. (There’s a whole other theory about the relationship between information theory and creativity that I’ll save for another day.)

In part two I’ll dive into McLuhan and transformation … stay tuned (you can subscribe to the RSS feed or email for updates). Also, if you are an information theory expert and find I’ve misinterpreted something, please get in touch and let me know.

May 9, 2018