Welcome to the home of Noah Brier. I'm the co-founder of Variance and a general internet tinkerer. Most of my writing these days is happening over at Why is this interesting?, a daily email full of interesting stuff. This site has been around since 2004. Good places to get started are my Framework of the Day posts or my favorite books and podcasts. Feel free to get in touch.

You can subscribe to this site via RSS (the humanity!) or email.

Information Transportation versus Transformation [Part 1]

Every year I take a trip out to Montana to teach at a weekend seminar series that's part of the University of Montana's Entertainment Management program. I'm 11 years in and I work really hard to create original content each year. This time around I talked about mental models, theories of communication and information, and a bit about machine learning. I wanted to take some of the content I shared there and repurpose it here. As always, you can subscribe by email here.

The article I’ve shared more than any other this year is this Aeon piece by Jimmy Soni and Rob Goodman about Claude Shannon, the father of information theory. I knew basically nothing about information theory before reading this and have since consumed just about everything I could find on the topic. I wanted to talk a bit about why information theory fascinated me and also tie it to my broader interest in communications studies generally and McLuhan specifically.

Shannon and McLuhan were two of the most important thinkers of the 20th century. Without Shannon we'd have no computers, and without McLuhan we wouldn't examine the effects of media, communications, and technology on society with the urgency we do. With that said, they were very different in their science and approach. Shannon was fundamentally a mathematician while McLuhan was a scholar of literature. Shannon examined huge questions about how communication works technically, while McLuhan examined what it does to the people who use it. When asked, McLuhan drew the distinction as a question of "transportation" versus "transformation":

My kind of study of communication is really a study of transformation, whereas Information Theory and all the existing theories of communication I know of are theories of transportation… Information Theory … has nothing to do with the effects these forms have on you… So mine is a transformation theory: how people are changed by the instruments they employ.

I want to take some time to go through both, as they are fascinating in their own ways.

Transportation

Of course, information existed before Shannon, just as objects had inertia before Newton. But before Shannon, there was precious little sense of information as an idea, a measurable quantity, an object fitted out for hard science. Before Shannon, information was a telegram, a photograph, a paragraph, a song. After Shannon, information was entirely abstracted into bits.

“The bit bomb: It took a polymath to pin down the true nature of ‘information’. His answer was both a revelation and a return”

The intellectual leaps Shannon made in his paper "A Mathematical Theory of Communication" were miraculous. What started as a question about how to reduce noise in the transmission of information turned into a complete theory of information that paved the way for the computing we all rely on. At the base of the whole thing is a recognition that information is probabilistic, which he explains in a kind of beautiful way. Here's my best attempt to take you through his logic (with some extra explanation from me).

Let's start by thinking about English for a second. If we wanted to create a list of random letters, we could put the numbers 1-27 in a hat (the alphabet plus a space), pull them out one by one, and write down the corresponding letter. When Shannon did this he got:

XFOML RXKHRJFFJUJ ZLPWCFWKCYJ FFJEYVKCQSGHYD QPAAMKBZAACIBZLHJQD
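If you want to play along at home, here's a minimal Python sketch of the letters-from-a-hat experiment (the 70-character output length is just an arbitrary choice for illustration):

```python
import random
import string

# "Zero-order" approximation: 27 symbols (A-Z plus a space), each equally
# likely, exactly like drawing numbered tiles from a hat.
SYMBOLS = string.ascii_uppercase + " "

def zero_order(length=70):
    """Return `length` symbols drawn uniformly at random."""
    return "".join(random.choice(SYMBOLS) for _ in range(length))

print(zero_order())  # gibberish along the lines of "XFOML RXKHRJ..."
```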

But letters aren't random at all. If you opened a book and counted all the letters, you wouldn't find each of the 26 letters occurring 3.8% of the time. On the contrary, letters occur with different probabilities: "e" occurs more often than "a," and "a" occurs more often than "g," which in turn occurs more often than "x." Put it all together and it looks something like this:


So now imagine we put all our letters (and a space) in a hat. But instead of 1 tile per letter, we have 100 total tiles in the hat, and they align with the chart above: 13 tiles for "e", 4 tiles for "d", 1 tile for "v". Here's what Shannon got when he did this:

OCRO HLI RGWR NMIELWIS EU LL NBNESEBYA TH EEI ALHENHTTPA OOBTTVA NAH BRL

He called this “first-order approximation” and while it still doesn’t make much sense, it’s a lot less random than the first example.
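In code, the 100-tiles version is just weighted sampling. The frequency table below is a rough, commonly cited approximation of English letter frequencies, not Shannon's exact numbers (and I've left the space out to keep it short):

```python
import random

# Approximate English letter frequencies, in percent (illustrative values only).
FREQ = {
    "E": 12.7, "T": 9.1, "A": 8.2, "O": 7.5, "I": 7.0, "N": 6.7, "S": 6.3,
    "H": 6.1, "R": 6.0, "D": 4.3, "L": 4.0, "C": 2.8, "U": 2.8, "M": 2.4,
    "W": 2.4, "F": 2.2, "G": 2.0, "Y": 2.0, "P": 1.9, "B": 1.5, "V": 1.0,
    "K": 0.8, "J": 0.15, "X": 0.15, "Q": 0.1, "Z": 0.07,
}

def first_order(length=70):
    """Draw letters with probability proportional to their English frequency."""
    letters, weights = zip(*FREQ.items())
    return "".join(random.choices(letters, weights=weights, k=length))

print(first_order())  # still nonsense, but the Es and Ts dominate, just like English
```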

What's wrong with that last example is that letters don't operate independently. Let's play a game for a second. I'm going to say a letter and you guess the next one. If I say "T" the odds are most of you are going to say "H". That makes lots of sense since "the" is the most popular word in the English language. So instead of just picking letters at random based on their individual probabilities, what Shannon did next was pick one letter and then choose the next based on which letters tend to follow it. These pairs are called bigrams, and just like we had letter frequencies, we can chart them out.

This time Shannon took a slightly different approach. Rather than loading up a bunch of bigrams in a hat and picking them out at random, he turned to a random page in a book and chose a random letter. He then turned to another random page in the same book, found the first occurrence of that letter, and recorded the letter immediately after it. What came out starts to look a lot more like English:

ON IE ANTSOUTINYS ARE T INCTORE ST BE S DEAMY ACHIN D ILONASIVE TUCOOWE AT TEASONARE FUSO TIZIN ANDY TOBE SEACE CTISBE
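You can mimic that page-flipping procedure with any long plain-text file (the "book.txt" filename below is just a placeholder for whatever text you happen to use):

```python
import random

def bigram_by_book(text, length=70):
    """Shannon's book trick: given the last letter generated, jump to a random
    spot in the text, find that letter, and record whatever follows it."""
    text = "".join(c for c in text.upper() if c.isalpha() or c == " ")
    out = random.choice(text)
    while len(out) < length:
        i = text.find(out[-1], random.randrange(len(text) - 1))
        if i == -1 or i + 1 >= len(text):
            continue  # ran past the end of the text; pick another random spot
        out += text[i + 1]
    return out

with open("book.txt") as f:  # placeholder: any long English text will do
    print(bigram_by_book(f.read()))
```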

Now I’m guessing you’re starting to see the pattern here. Next Shannon looked at trigrams, sets of three letters.

For his "third-order approximation" he once again used the book, but went three letters deep:

IN NO IST LAT WHEY CRATICT FROURE BIRS GROCID PONDENOME OF DEMONSTURES OF THE REPTAGIN IS REGOACTIONA OF CRE
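The more conventional way to do this today is to count n-grams directly and sample from those counts. The sketch below is my generalization, not Shannon's exact procedure: it handles second- and third-order letter approximations, and works on words too if you pass in a word list instead of characters.

```python
import random
from collections import defaultdict

def build_model(symbols, order):
    """Map each (order - 1)-symbol context to the list of symbols that follow it."""
    model = defaultdict(list)
    for i in range(len(symbols) - order + 1):
        context = tuple(symbols[i:i + order - 1])
        model[context].append(symbols[i + order - 1])
    return model

def generate(symbols, order=3, length=80):
    """Sample a sequence where each symbol depends on the previous (order - 1)."""
    model = build_model(symbols, order)
    out = list(random.choice(list(model.keys())))
    for _ in range(length):
        followers = model.get(tuple(out[-(order - 1):]), symbols)
        out.append(random.choice(followers))
    return out

text = open("book.txt").read().upper()          # placeholder source text, as above
print("".join(generate(list(text), order=3)))   # third-order letter approximation
print(" ".join(generate(text.split(), order=2, length=30)))  # pairs of words
```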

He could go on and on and it would get closer and closer to English. Instead he switched to words, which also occur probabilistically.

For his "first-order approximation" with words he picked random words from the book, which automatically weights them by how often they appear. The result looks a lot more like a sentence because words themselves don't occur with equal frequency: there's a good chance an "and" follows any given word simply because "and" is likely the third most popular word in the book. Here's what came out:

REPRESENTING AND SPEEDILY IS AN GOOD APT OR COME CAN DIFFERENT NATURAL HERE HE THE A IN CAME THE TO OF TO EXPERT GRAY COME TO FURNISHES THE LINE MESSAGE HAD BE THESE.

The second-order word approximation works just like the letter bigrams, but instead of letters it uses pairs of words.

THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED.

As Shannon put it, “The resemblance to ordinary English text increases quite noticeably at each of the above steps.”

While all that's cool, much of it was pretty well known at the time. Shannon had worked on cryptography during World War II and used many of these ideas to encrypt and decrypt messages. Where the leap came was in how he used this to think about the quantity of information any message contains. He realized that the first example, with 27 random symbols (A-Z plus a space), carried much more information per symbol than his second- or third-order approximations, where subsequent letters were chosen based on their probabilities. That's because there are fewer "choices" to be made as we introduce bigrams and trigrams, and "choices", or the lack thereof, are the essence of information.
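That insight is what Shannon's entropy formula, H = -sum(p * log2(p)), makes precise: it measures the average number of bits (binary choices) per symbol. A quick sketch, reusing the rough letter frequencies from above:

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# 27 equally likely symbols: the maximum possible uncertainty per draw.
print(entropy_bits([1 / 27] * 27))  # ~4.75 bits per symbol

# The same rough English letter frequencies used earlier (as fractions).
english = [0.127, 0.091, 0.082, 0.075, 0.070, 0.067, 0.063, 0.061, 0.060,
           0.043, 0.040, 0.028, 0.028, 0.024, 0.024, 0.022, 0.020, 0.020,
           0.019, 0.015, 0.010, 0.008, 0.0015, 0.0015, 0.001, 0.0007]
print(entropy_bits(english))  # roughly 4.2 bits; bigrams and trigrams push it lower still
```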

Khan Academy has a great video outlining how this works:

Here’s how MIT information theorist Robert Gallager explained the breakthrough:

Until then, communication wasn’t a unified science … There was one medium for voice transmission, another medium for radio, still others for data. Claude showed that all communication was fundamentally the same, and furthermore, that you could take any source and represent it by digital data.

But Shannon didn't stop there: he went on to show that all language has redundancy, and that redundancy can be used to fight noise. The whole thing is pretty mind-blowing and, like I said, underpins all modern computing. (There's a whole other theory about the relationship between information theory and creativity that I'll save for another day.)
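To give a flavor of what "redundancy fights noise" means in practice (this is just the simplest possible toy example, nowhere near Shannon's actual coding theorems): repeat every bit three times and take a majority vote on the receiving end.

```python
import random

def encode(bits):
    """Add redundancy: repeat each bit three times."""
    return [b for b in bits for _ in range(3)]

def noisy_channel(bits, flip_prob=0.1):
    """Flip each bit with probability flip_prob, simulating noise."""
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(bits):
    """Majority vote over each group of three repeated bits."""
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

message = [random.randint(0, 1) for _ in range(20)]
received = decode(noisy_channel(encode(message)))
print(message == received)  # usually True: the redundancy absorbs most single flips
```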

In part two I’ll dive into McLuhan and transformation … stay tuned (you can subscribe to the RSS feed or email for updates). Also, if you are an information theory expert and find I’ve misinterpreted something, please get in touch and let me know.

May 9, 2018

The Gell-Mann Amnesia Effect, The New Yorker, and Pau Gasol

Yesterday I was reading an article about the first female assistant coach in the NBA in the latest issue of the New Yorker. The piece was moderately interesting and not particularly worth sharing here, except for one paragraph, and really just one sentence within it (emphasis mine):

Because of their success, the Spurs have not been eligible for the highest picks in the draft. Instead of relying on college superstars, they have built their team through some crafty trades and by pushing their young players to the limit. They scout top international players—like Parker, from France, and Manu Ginóbili, from Argentina—and sign N.B.A. veterans like Pau Gasol, from Spain, who is thirty-seven but can anchor a defense and move in a way that creates space on the floor; they also, as in the case of Leonard, hone the raw athletic talent of less experienced players. When the Spurs are at their best, the ball moves fluidly and freely. Duncan, who retired in 2016 and was perhaps the least flashy major star in the N.B.A., was emblematic of the team’s unselfish style. On a given night, almost anyone on the roster can be the leading scorer.

The whole thing seems relatively innocuous and is largely accurate: The Spurs' success has been driven, at least in part, by incredibly successful drafting (Manu Ginóbili, a key player in their multi-championship run, was picked in the second round and is widely considered one of the best draft picks ever). With that said, though, Pau Gasol is most definitely not a defensive anchor. He's a pretty good rebounder and he's a giant, but he's slow and famous almost entirely for his prowess on the offensive end of the court. In fact, earlier this season his coach, Gregg Popovich, said, in a needling way, that "He likes offense better than defense."1

Now obviously this is one tiny point in a giant article. But it happens to be an article about a subject I’m kind of obsessed with (the NBA) and that’s pretty rare for a magazine that covers a huge diversity of topics.

Which brings me to the title of this post: The Gell-Mann Amnesia Effect.

Named after the famous physicist Murray Gell-Mann, the Amnesia Effect was coined by Jurassic Park author Michael Crichton to describe the act of feeling skeptical as you read a magazine or newspaper article about an area in which you have expertise, and then completely forgetting that skepticism as you turn the page and read about something you know less about. If they can get it so wrong on one topic, why don't we assume they can get it just as wrong on all the others?

Here’s Michael Crichton explaining it:

Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward — reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them.

In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.

The Gasol slip-up put me in a funny state with my favorite magazine: what misportrayals, however small, do I take for granted when I read about topics like Catholicism or pharmaceuticals? What bothers me even more is that I feel some guilt even writing this. In this moment of conversations about fake news, questioning a publication that is unquestionably a beacon of extraordinary journalism (and fact-checking!) feels like adding fuel to a fire that's trying to burn down my house.

But reading with skepticism is something we should all do, not because we don’t trust the publication, but because it’s our responsibility to be media literate and develop our own points of view. The biggest problem I have with the conversation around fake news is that it makes it more difficult to legitimately critique the media, something we should all be doing more often.

In the meantime, I’m going to hope the New Yorker stays away from basketball …

1 Because I can't go through this without making it clear: the Gasol thing is an opinion, not a fact, and some might argue that simply being a center makes you a defensive anchor. I spoke to a friend who is a Spurs fanatic, who reinforced my raised-eyebrow reaction with an "Umm, hell no" to the label applied to Gasol.

April 15, 2018

Why Videogames Tend Towards Post-Apocalyptic

I’ve set a reasonably modest goal for myself of writing 10 blog posts in April. Let’s see if I can get back on this bike (since I really miss it). This is post number 3.

On Wednesday night I had the honor of presenting some very cool work as part of Columbia University's Digital Storytelling Lab's Digital Dozen event. One of the pieces I presented was the video game What Remains of Edith Finch, which tells the story of a 17-year-old girl who returns to an inherited home and uncovers the stories of her dead family members.

In preparing to present I was reminded of a super interesting article by writer, game designer, professor, and all-around smart guy Ian Bogost about Edith Finch, the art of videogames, and their obsession with out-filming film. The whole article is worth a read, but the bit about why first-person shooters tend towards the post-apocalyptic is my favorite nugget:

In retrospect, it’s easy to blame old games like Doom and Duke Nukem for stimulating the fantasy of male adolescent power. But that choice was made less deliberately at the time. Real-time 3-D worlds are harder to create than it seems, especially on the relatively low-powered computers that first ran games like Doom in the early 1990s. It helped to empty them out as much as possible, with surfaces detailed by simple textures and objects kept to a minimum. In other words, the first 3-D games were designed to be empty so that they would run.

An empty space is most easily interpreted as one in which something went terribly wrong. Add a few monsters that a powerful player-dude can vanquish, and the first-person shooter is born. The lone, soldier-hero against the Nazis, or the hell spawn, or the aliens.

A perfect case of the medium being the message.

PS – If you’re receiving this as an email and wondering why everything is looking so much fancier, I moved it over to MailChimp. If you’re not already subscribed, you can sign up here.

April 5, 2018

Unanticipated Effects

I really like this story of the unanticipated effects of the printing press from Steven Johnson:

Once people started to read, and once books were in circulation, very quickly the population of Europe realized that they were farsighted. This is interestingly a problem that hadn’t occurred to people before because they didn’t have any opportunity to look at tiny letter forms on a page, or anything else that required being able to use your vision at that micro scale. All of a sudden there is a surge in demand for spectacles. Europe is awash in people who were tinkering with lenses, and because of their experimentation, they start to say, “Hey, wait. If we took these two lenses and put them together, we could make a telescope. And if we take these two lenses and put them together, we could make a microscope.” Almost immediately there is this extraordinary scientific revolution in terms of understanding and identifying the cell, and identifying the moons of Jupiter and all these different things that Galileo does. So the Gutenberg press ended up having this very strange effect on science that wasn’t about the content of the books being published.

As I’ve established here, I’m a big McLuhan fan, and this is pretty good evidence that the effect of the medium is often much more important than the specific message. 

December 22, 2014

On Context, Imagination, and STEM vs. ART

Go read this whole interview with John Seely Brown. It's awesome. Here are a few of my favorite bits.

On context versus content:

Remember that image of the statue of Saddam Hussein being pulled down? Well, the photo was actually cropped. Those were Americans pulling the statue down, not Iraqis. But the cropped photo reinforced this notion that the Iraqis loved us. It reshaped context. Millennials are much better at understanding that context shapes content. They play with this all the time when they remix something. It’s actually an ideal property for a 21st century citizen to have.

That's as good an explanation of what McLuhan meant by "the medium is the message" as I've read.

On creativity versus imagination:

The real key is being able to imagine a new world. Once I imagine something new, then answering how to get from here to there involves steps of creativity. So I can be creative in solving today’s problems, but if I can’t imagine something new, then I’m stuck in the current situation.

I really like that distinction. Imagination is what drives vision, creativity is what drives execution. Both have huge amounts of value, but they’re different things.

On the dangers of a STEM-only world:

Right. That’s what we should be talking about. That’s one of the reasons I think what’s happening in STEM education is a tragedy. Art enables us to see the world in different ways. I’m riveted by how Picasso saw the world. How does being able to imagine and see things differently work hand-in-hand? Art education, and probably music too, are more important than most things we teach. Being great at math is not that critical for science, but being great at imagination and curiosity is critical. Yet how are we training tomorrow’s scientists? By boring the hell out of them in formulaic mathematics—and don’t forget I am trained as a theoretical mathematician.

Not to talk about McLuhan too much, but he also deeply believed in the value of art and artists as the visionaries for society. I think there's obviously a lot of room here: the reality of the focus on STEM is that we have so far to go, it's not like we're going to wake up in a world where people only learn math and science. But I think the point is that the really interesting thoughts are the ones that combine, not shockingly, the arts and the sciences.

Again, just go read the whole interview. It’s great.

January 14, 2014