Welcome to the bloggy home of Noah Brier. I'm the co-founder of Percolate and a general internet tinkerer. This site is about media, culture, technology, and randomness. It's been around since 2004 (I'm pretty sure). Feel free to get in touch.

You can subscribe to this site via RSS (the humanity!) or email.

Information Transportation versus Transformation [Part 1]

Every year I take a trip out to Montana to teach at a weekend seminar series that’s part of the University of Montana’s Entertainment Management program. I’m 11 years in and I work really hard to create original content each year. This time around I talked about mental models, theories of communications and information, and a bit about machine learning. I wanted to try to take a bit of the content I shared there and repurpose it. As always, you can subscribe by email here.

The article I’ve shared more than any other this year is this Aeon piece by Jimmy Soni and Rob Goodman about Claude Shannon, the father of information theory. I knew basically nothing about information theory before reading this and have since consumed just about everything I could find on the topic. I wanted to talk a bit about why information theory fascinated me and also tie it to my broader interest in communications studies generally and McLuhan specifically.

Shannon and McLuhan were two of the most important thinkers of the 20th century. Without Shannon we’d have no computers, and without McLuhan we wouldn’t examine the effects of media, communications, and technology on society with the urgency we do. With that said, they’re very different in their science and approach. Shannon was fundamentally a mathematician while McLuhan was a scholar of literature. In their work, Shannon examined huge questions around how communication works technically, while McLuhan examined how it works tactically. When asked, McLuhan drew the distinction as one of “transportation” versus “transformation”:

My kind of study of communication is really a study of transformation, whereas Information Theory and all the existing theories of communication I know of are theories of transportation… Information Theory … has nothing to do with the effects these forms have on you… So mine is a transformation theory: how people are changed by the instruments they employ.

I want to take some time to go through both, as they are fascinating in their own ways.

Transportation

Of course, information existed before Shannon, just as objects had inertia before Newton. But before Shannon, there was precious little sense of information as an idea, a measurable quantity, an object fitted out for hard science. Before Shannon, information was a telegram, a photograph, a paragraph, a song. After Shannon, information was entirely abstracted into bits.

“The bit bomb: It took a polymath to pin down the true nature of ‘information’. His answer was both a revelation and a return”

The intellectual leaps Shannon made in his paper “A Mathematical Theory of Communication” were miraculous. What started off as a question about how to reduce noise in the transmission of information turned into a complete theory of information that paved the way for the computing we all rely on. At the base of the whole thing is a recognition that information is probabilistic, which he explains in a kind of beautiful way. Here’s my best attempt to take you through his logic (with some extra explanation from me).

Let’s start by thinking about English for a second. If we wanted to create a list of random letters we could put the numbers 1-27 in a hat (alphabet + space) and pick out numbers one by one and then write down their letter equivalent. When Shannon did this he got:

XFOML RXKHRJFFJUJ ZLPWCFWKCYJ FFJEYVKCQSGHYD QPAAMKBZAACIBZLHJQD
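
If you want to play along at home, here’s a rough Python sketch of the hat-drawing exercise. It’s just a uniform random draw over 27 symbols, nothing more:

```python
import random
import string

# Zero-order approximation: 26 letters plus a space, all equally likely.
ALPHABET = string.ascii_uppercase + " "

def zero_order(n=70):
    return "".join(random.choice(ALPHABET) for _ in range(n))

print(zero_order())  # pure noise, much like Shannon's sample above
```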

But letters aren’t random at all. If you opened a book and counted all the letters, you wouldn’t find each of the 26 letters occurring 3.8% of the time. On the contrary, letters occur probabilistically: “e” occurs more often than “a,” and “a” occurs more often than “g,” which in turn occurs more often than “x.” Put it all together and it looks something like this:


So now imagine we put all our letters (and a space) in a hat. But instead of one tile per letter, we have 100 total tiles in the hat, and they align with the chart above: 13 tiles for “e”, 4 tiles for “d”, 1 tile for “v”, and so on. Here’s what Shannon got when he did this:

OCRO HLI RGWR NMIELWIS EU LL NBNESEBYA TH EEI ALHENHTTPA OOBTTVA NAH BRL

He called this “first-order approximation” and while it still doesn’t make much sense, it’s a lot less random than the first example.
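
The sketch barely changes, we just weight the hat. The percentages below come from a rough standard English frequency table (the space weight is my own estimate), not Shannon’s exact tile counts:

```python
import random

# First-order approximation: draw each symbol independently, weighted by
# approximate English letter frequencies (plus a space, which is very common).
FREQS = {
    "E": 12.7, "T": 9.1, "A": 8.2, "O": 7.5, "I": 7.0, "N": 6.7,
    "S": 6.3, "H": 6.1, "R": 6.0, "D": 4.3, "L": 4.0, "C": 2.8,
    "U": 2.8, "M": 2.4, "W": 2.4, "F": 2.2, "G": 2.0, "Y": 2.0,
    "P": 1.9, "B": 1.5, "V": 1.0, "K": 0.8, "J": 0.15, "X": 0.15,
    "Q": 0.1, "Z": 0.07, " ": 18.0,
}

def first_order(n=70):
    letters, weights = zip(*FREQS.items())
    return "".join(random.choices(letters, weights=weights, k=n))

print(first_order())
```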

What’s wrong with that last example is that letters don’t operate independently. Let’s play a game for a second. I’m going to say a letter and you guess the next one. If I say “T” the odds are most of you are going to say “H”. That makes lots of sense since “the” is the most popular word in the English language. So instead of just picking letters at random based on probability, what Shannon did next was pick one letter and then match it with its probabilistic pair. These are called bigrams, and just like we had letter frequencies, we can chart these out.

This time Shannon took a slightly different approach. Rather than loading up a bunch of bigrams in a hat and picking them out at random, he turned to a random page in a book and chose a random letter. He then turned to another random page in the same book, found the first occurrence of that letter, and recorded the letter immediately after it. What came out starts to look a lot more like English:

ON IE ANTSOUTINYS ARE T INCTORE ST BE S DEAMY ACHIN D ILONASIVE TUCOOWE AT TEASONARE FUSO TIZIN ANDY TOBE SEACE CTISBE
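
Shannon did this with the pages of a physical book; a few lines of Python can fake the same procedure against any plain-text file (the book.txt filename is just a placeholder):

```python
import random
from collections import defaultdict

def second_order(text, n=70):
    """Shannon's bigram trick: given the current letter, pick the next one
    with the probability it actually follows that letter in the text.
    (Key on the last two letters instead and you get the trigram version.)"""
    text = " ".join(text.upper().split())  # collapse runs of whitespace
    followers = defaultdict(list)
    for a, b in zip(text, text[1:]):
        followers[a].append(b)  # keep every occurrence, so draws are weighted
    out = [random.choice(text)]
    for _ in range(n - 1):
        nxt = followers[out[-1]] or list(followers)  # dead end? jump anywhere
        out.append(random.choice(nxt))
    return "".join(out)

# corpus = open("book.txt").read()  # any plain-text book will do
# print(second_order(corpus))
```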

Now I’m guessing you’re starting to see the pattern here. Next Shannon looked at trigrams, sets of three letters.

For his “third-order approximation” he once again used the book, but went three letters deep:

IN NO IST LAT WHEY CRATICT FROURE BIRS GROCID PONDENOME OF DEMONSTURES OF THE REPTAGIN IS REGOACTIONA OF CRE

He could have gone on and on, and the output would have gotten closer and closer to English. Instead he switched to words, which also occur probabilistically.

For his “first-order approximation” of words he picked random words from the book, weighted by how often they appear. The result looks a lot more like a sentence because words don’t occur with equal frequency: there’s a good chance an “and” will turn up, for example, because “and” is likely the third most popular word in the book. Here’s what came out:

REPRESENTING AND SPEEDILY IS AN GOOD APT OR COME CAN DIFFERENT NATURAL HERE HE THE A IN CAME THE TO OF TO EXPERT GRAY COME TO FURNISHES THE LINE MESSAGE HAD BE THESE.

Second-order approximation works just like bigrams, but instead of letters it uses pairs of words.

THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED.
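
And again, the word-level versions are the same sampling trick one level up. A quick sketch, with the same placeholder corpus:

```python
import random
from collections import defaultdict

def word_second_order(text, n=30):
    """The same sampling trick one level up: pick each next word according
    to how often it follows the current word in the source text."""
    words = text.upper().split()
    followers = defaultdict(list)
    for a, b in zip(words, words[1:]):
        followers[a].append(b)
    out = [random.choice(words[:-1])]  # any word that has a follower
    for _ in range(n - 1):
        nxt = followers[out[-1]] or words  # fall back if we hit a dead end
        out.append(random.choice(nxt))
    return " ".join(out)

# print(word_second_order(open("book.txt").read()))
```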

As Shannon put it, “The resemblance to ordinary English text increases quite noticeably at each of the above steps.”

While all that’s cool, much of it was pretty well known at the time. Shannon had worked on cryptography during World War II and used many of these ideas to encrypt/decrypt messages. Where the leap came was in how he used this to think about the quantity of information any message contains. He basically realized that the first example, with 27 random symbols (A-Z plus a space), carried much more information per symbol than his second- or third-order approximations, where subsequent letters were chosen based on their probabilities. That’s because there are fewer “choices” to be made as we introduce bigrams and trigrams, and “choices”, or the lack thereof, are the essence of information.
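
Shannon turned that intuition into a formula: entropy, H = -Σ p log2(p), the average number of bits of information per symbol. Here’s a quick back-of-the-envelope version (the frequency-weighted figure is approximate):

```python
import math

def entropy(weights):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    total = sum(weights)
    return -sum(w / total * math.log2(w / total) for w in weights if w > 0)

# 27 equally likely symbols: maximum choice, maximum information.
print(entropy([1] * 27))  # log2(27), about 4.75 bits per symbol

# Weight the symbols by English letter frequencies (the FREQS table above)
# and there's less choice per draw, so fewer bits -- around 4. Layer on
# bigram and trigram structure and the number keeps falling.
# print(entropy(FREQS.values()))
```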

Khan Academy has a great video outlining how this works:

Here’s how MIT information theorist Robert Gallager explained the breakthrough:

Until then, communication wasn’t a unified science … There was one medium for voice transmission, another medium for radio, still others for data. Claude showed that all communication was fundamentally the same, and furthermore, that you could take any source and represent it by digital data.

But Shannon didn’t stop there: he went on to show that all language has redundancy, and that redundancy can be used to fight noise. The whole thing is pretty mind-blowing and, like I said, underpins all modern computing. (There’s a whole other theory about the relationship between information theory and creativity that I’ll save for another day.)
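
To get a feel for how redundancy fights noise, here’s the crudest possible error-correcting scheme: a repetition code with majority voting. To be clear, this is an illustration, not Shannon’s construction; his theorems are about far more efficient codes:

```python
import random
from collections import Counter

def noisy_channel(bits, flip_prob=0.1):
    """Flip each bit with probability flip_prob."""
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def encode(bits, r=3):
    return [b for b in bits for _ in range(r)]  # repeat each bit r times

def decode(bits, r=3):
    chunks = [bits[i:i + r] for i in range(0, len(bits), r)]
    return [Counter(c).most_common(1)[0][0] for c in chunks]  # majority vote

message = [random.randint(0, 1) for _ in range(1000)]
received = decode(noisy_channel(encode(message)))
print(sum(m != r for m, r in zip(message, received)))  # ~28 errors expected,
# versus ~100 if we'd sent the raw bits through the same channel
```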

In part two I’ll dive into McLuhan and transformation … stay tuned (you can subscribe to the RSS feed or email for updates). Also, if you are an information theory expert and find I’ve misinterpreted something, please get in touch and let me know.

May 9, 2018

The Gell-Mann Amnesia Effect, The New Yorker, and Pau Gasol

Yesterday I was reading an article in the latest issue of the New Yorker about the first female assistant coach in the NBA. The piece was moderately interesting and not particularly worth sharing here, except for one paragraph, and really just one sentence within it (emphasis mine):

Because of their success, the Spurs have not been eligible for the highest picks in the draft. Instead of relying on college superstars, they have built their team through some crafty trades and by pushing their young players to the limit. They scout top international players—like Parker, from France, and Manu Ginóbili, from Argentina—and sign N.B.A. veterans like Pau Gasol, from Spain, who is thirty-seven but can anchor a defense and move in a way that creates space on the floor; they also, as in the case of Leonard, hone the raw athletic talent of less experienced players. When the Spurs are at their best, the ball moves fluidly and freely. Duncan, who retired in 2016 and was perhaps the least flashy major star in the N.B.A., was emblematic of the team’s unselfish style. On a given night, almost anyone on the roster can be the leading scorer.

The whole thing seems relatively innocuous and is largely accurate: The Spurs’ success has been driven, at least in part, by incredibly successful drafting (Manu Ginóbili, a key player in their multi-championship run, was picked in the second round and is widely considered one of the best draft picks ever). With that said, though, Pau Gasol is most definitely not a defensive anchor. He’s a pretty good rebounder and he’s a giant, but he’s slow and famous almost entirely for his prowess on the offensive end of the court. In fact, earlier this season his coach, Gregg Popovich, said in a needling way that, “He likes offense better than defense.”1

Now obviously this is one tiny point in a giant article. But it happens to be an article about a subject I’m kind of obsessed with (the NBA) and that’s pretty rare for a magazine that covers a huge diversity of topics.

Which brings me to the title of this post: The Gell-Mann Amnesia Effect.

Named after famous physicist Murray Gell-Mann, the Amnesia Effect was coined by Jurassic Park author Michael Crichton to describe the act of feeling skeptical as you read a magazine or newspaper article about an area in which you have expertise and then completely forgetting that skepticism as you turn the page and read about something you know less about. If they could get it so wrong for one, why don’t we assume they could get it so wrong for all?

Here’s Michael Crichton explaining it:

Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward — reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them.

In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.

The Gasol slip-up put me in a funny state with my favorite magazine: What misportrayals, however small, do I take for granted when I read about topics like Catholicism or pharmaceuticals? What bothers me even more is that I feel some guilt even writing this. In this moment of conversations about fake news, questioning a publication that is unquestionably a beacon of extraordinary journalism (and fact checking!) feels like adding fuel to a fire that’s trying to burn down my house.

But reading with skepticism is something we should all do, not because we don’t trust the publication, but because it’s our responsibility to be media literate and develop our own points of view. The biggest problem I have with the conversation around fake news is that it makes it more difficult to legitimately critique the media, something we should all be doing more often.

In the meantime I’m going to hope the New Yorker stays away from basketball …

1 Because I can’t go through this without making it clear: The Gasol thing is opinion, not fact, and some might argue that the act of being a center makes you a defensive anchor. I spoke to a friend who is a Spurs fanatic, and he reinforced my raised-eyebrow reaction with an “Umm, hell no” to the label applied to Gasol.

April 15, 2018

Why Videogames Tend Towards Post-Apocalyptic

I’ve set a reasonably modest goal for myself of writing 10 blog posts in April. Let’s see if I can get back on this bike (since I really miss it). This is post number 3.

On Wednesday night I had the honor of presenting some very cool work as part of Columbia University’s Digital Storytelling Lab’s Digital Dozen event. One of the pieces I presented was the video game What Remains of Edith Finch, which tells the story of a 17-year-old girl who returns to an inherited home and finds out the stories of her dead family.

In preparing to present I was reminded of a super interesting article by writer, game designer, professor, and all-around smart guy Ian Bogost about Edith Finch, the art of videogames generally, and their obsession with out-filming film. The whole article is worth a read, but the bit about why first-person shooters tend towards the post-apocalyptic is my favorite nugget:

In retrospect, it’s easy to blame old games like Doom and Duke Nukem for stimulating the fantasy of male adolescent power. But that choice was made less deliberately at the time. Real-time 3-D worlds are harder to create than it seems, especially on the relatively low-powered computers that first ran games like Doom in the early 1990s. It helped to empty them out as much as possible, with surfaces detailed by simple textures and objects kept to a minimum. In other words, the first 3-D games were designed to be empty so that they would run.

An empty space is most easily interpreted as one in which something went terribly wrong. Add a few monsters that a powerful player-dude can vanquish, and the first-person shooter is born. The lone, soldier-hero against the Nazis, or the hell spawn, or the aliens.

A perfect case of the medium being the message.

PS – If you’re receiving this as an email and wondering why everything is looking so much fancier, I moved it over to MailChimp. If you’re not already subscribed, you can sign up here.

April 5, 2018

Unanticipated Effects

I really like this story of the unanticipated effects of the printing press from Steven Johnson:

Once people started to read, and once books were in circulation, very quickly the population of Europe realized that they were farsighted. This is interestingly a problem that hadn’t occurred to people before because they didn’t have any opportunity to look at tiny letter forms on a page, or anything else that required being able to use your vision at that micro scale. All of a sudden there is a surge in demand for spectacles. Europe is awash in people who were tinkering with lenses, and because of their experimentation, they start to say, “Hey, wait. If we took these two lenses and put them together, we could make a telescope. And if we take these two lenses and put them together, we could make a microscope.” Almost immediately there is this extraordinary scientific revolution in terms of understanding and identifying the cell, and identifying the moons of Jupiter and all these different things that Galileo does. So the Gutenberg press ended up having this very strange effect on science that wasn’t about the content of the books being published.

As I’ve established here, I’m a big McLuhan fan, and this is pretty good evidence that the effect of the medium is often much more important than the specific message. 

December 22, 2014

On Context, Imagination, and STEM vs. ART

Go read this whole interview with John Seely Brown. It’s awesome. Here are a few of my favorite bits.

On content versus context:

Remember that image of the statue of Saddam Hussein being pulled down? Well, the photo was actually cropped. Those were Americans pulling the statue down, not Iraqis. But the cropped photo reinforced this notion that the Iraqis loved us. It reshaped context. Millennials are much better at understanding that context shapes content. They play with this all the time when they remix something. It’s actually an ideal property for a 21st century citizen to have.

That’s as good an explanation of what McLuhan meant by the medium is the message as I’ve read.

On creativity versus imagination:

The real key is being able to imagine a new world. Once I imagine something new, then answering how to get from here to there involves steps of creativity. So I can be creative in solving today’s problems, but if I can’t imagine something new, then I’m stuck in the current situation.

I really like that distinction. Imagination is what drives vision, creativity is what drives execution. Both have huge amounts of value, but they’re different things.

On the dangers of a STEM-only world:

Right. That’s what we should be talking about. That’s one of the reasons I think what’s happening in STEM education is a tragedy. Art enables us to see the world in different ways. I’m riveted by how Picasso saw the world. How does being able to imagine and see things differently work hand-in-hand? Art education, and probably music too, are more important than most things we teach. Being great at math is not that critical for science, but being great at imagination and curiosity is critical. Yet how are we training tomorrow’s scientists? By boring the hell out of them in formulaic mathematics—and don’t forget I am trained as a theoretical mathematician.

Not to talk about McLuhan too much, but he also deeply believed in the value of art and artists as the visionaries for society. There’s obviously a lot of room here: the reality of the focus on STEM is that we have so far to go that we’re not going to wake up anytime soon in a world where people only learn math and science. But I think the point is that the really interesting thoughts are the ones that combine, not shockingly, the arts and sciences.

Again, just go read the whole interview. It’s great.

January 14, 2014

Reporting Technology

Consider this part of an early New Year’s resolution to blog more (I really am going to make a run at it in 2014). Anyway, over the holiday break I, along with many others I’m sure, was having a conversation about Healthcare.gov. I mostly mentioned all the stuff I wrote a few months ago (basically that the things that ruined the project seem to be all the regular stuff — scope creep, too many players — that ruins projects), but I also talked a bit about my disappointment with the media’s reporting of the story. Specifically, the inability to do any serious technical reporting.

The New York Times had the deepest reporting I read and that didn’t come close to actually explaining what went wrong. The story included laughable (to technologists) lines like this: “By mid-November, more than six weeks after the rollout, the MarkLogic database — essentially the website’s virtual filing cabinet and index — continued to perform below expectations, according to one person who works in the command center.” While I understand not everyone is familiar with a database, to call it a virtual filing cabinet and index only says to me that the author has absolutely no idea what a database is. 

The point isn’t to pick on the Times, though. Rather it’s just to point out that as technical stories continue to pile up (the NSA and Healthcare.gov were amongst the biggest media focus areas of the last three months), we’re going to have to get better at technical reporting. That I still haven’t read a decent explanation of what went wrong technically seems, to me at least, like a major disservice and a dangerous signal for society’s ability to keep up with technical change.

December 28, 2013

Global Time

In response to my little post about describing the past and present, Jim, who reads the blog, emailed me to say it could be referred to as an “atemporal present,” which I thought was a good turn of phrase. I googled it and ran across this fascinating Guardian piece explaining their decision to get rid of references to today and yesterday in their articles. Here’s a pretty large snippet:

It used to be quite simple. If you worked for an evening newspaper, you put “today” near the beginning of every story in an attempt to give the impression of being up-to-the-minute – even though many of the stories had been written the day before (as those lovely people who own local newspapers strove to increase their profits by cutting editions and moving deadlines ever earlier in the day). If you worked for a morning newspaper, you put “last night” at the beginning: the assumption was that reading your paper was the first thing that everyone did, the moment they awoke, and you wanted them to think that you had been slaving all night on their behalf to bring them the absolute latest news. A report that might have been written at, say, 3pm the previous day would still start something like this: “The government last night announced …”

All this has changed. As I wrote last year, we now have many millions of readers around the world, for whom the use of yesterday, today and tomorrow must be at best confusing and at times downright misleading. I don’t know how many readers the Guardian has in Hawaii – though I am willing to make a goodwill visit if the managing editor is seeking volunteers – but if I write a story saying something happened “last night”, it will not necessarily be clear which “night” I am referring to. Even in the UK, online readers may visit the website at any time, using a variety of devices, as the old, predictable pattern of newspaper readership has changed for ever. A guardian.co.uk story may be read within seconds of publication, or months later – long after the newspaper has been composted.

So our new policy, adopted last week (wherever you are in the world), is to omit time references such as last night, yesterday, today, tonight and tomorrow from guardian.co.uk stories. If a day is relevant (for example, to say when a meeting is going to happen or happened) we will state the actual day – as in “the government will announce its proposals in a white paper on Wednesday [rather than ‘tomorrow’]” or “the government’s proposals, announced on Wednesday [rather than ‘yesterday’], have been greeted with a storm of protest”.

What’s extra interesting about this to me is that it’s not just about the time you’re reading that story, but also the space the web inhabits. We’ve been talking a lot at Percolate lately about how social is shifting the way we think about audiences since for the first time there are constant global media opportunities (it used to happen once every four years with the Olympics or World Cup). But, as this articulates so well, being global also has a major impact on time since you move away from knowing where your audience is in their day when they’re consuming your content.

August 5, 2013

Borges and Sharknado

I really like this little post on “Borges and the Sharknado Problem.” The gist:

We can apply the Borgesian insight [why write a book when a short story is equally good for getting your point across] to the problem of Sharknado. Why make a two-hour movie called Sharknado when all you need is the idea of a movie called Sharknado? And perhaps, a two-minute trailer? And given that such a movie is not needed to convey the full brilliance of Sharknado – and it is, indeed, brilliant – why spend two hours watching it when it is, wastefully, made?

On Twitter my friend Ryan Catbird responded by pointing out that that’s what makes the Modern Seinfeld Twitter account so magical: They give you the plot in 140 characters and you can easily imagine the episode (and that’s really all you need).

July 30, 2013

On Sponsored and Scalable Brand Content

This morning I woke up to this Tweet from my friend Nick:

It’s great to have friends who discover interesting stuff and send it my way, so I quickly clicked over and read Jeff’s piece on sponsored content and media as a service. I’m going to leave the latter unturned, as I find myself spending much less time thinking about the broader state of the media since starting Percolate two-and-a-half years ago. But the former, sponsored content, is clearly a place I play, and I was curious to see what Jarvis thought.

Quickly I realized he thought something very different than me (which, of course, is why I’m writing a blog post). Mostly I started getting agitated right around here: “Confusing the audience is clearly the goal of native-sponsored-brand-content-voice-advertising. And the result has to be a dilution of the value of news brands.” While that may be true in the advertorial/sponsored content/native advertising space, it misses the vast majority of content being produced by brands on a day-to-day basis. That content is being created for social platforms like Facebook, Twitter, and Instagram by brands who have acquired massive audiences, frequently much larger than those of the media companies Jarvis is referring to. Again, I think this exists outside native advertising, but if Jarvis is going to conflate content marketing and native advertising, then it seems important to point out. To give this a sense of scale: the average brand had 178 corporate social media accounts as of January 2012. Social is where they’re producing content. Period.

Second issue came in a paragraph about the scalability of content for brands:

Now here’s the funny part: Brands are chasing the wrong goal. Marketers shouldn’t want to make content. Don’t they know that content is a lousy business? As adman Rishad Tobaccowala said to me in an email, content is not scalable for advertisers, either. He says the future of marketing isn’t advertising but utilities and services. I say the same for news: It is a service.

Two things here: First, I agree that the current ways brands create content aren’t scalable. That’s because they’re using methods designed for creating television commercials to create 140-character Tweets. However, to conclude that content is a lousy business is missing the point a bit. Content is a lousy business when you’re selling ads around that content. The reason for this is reasonably simple: You’re not in the business of creating content, you’re in the business of getting people back to your website (or to buy your magazine or newspaper). Letting your content float around the web is great, but at the end of the day no eyeballs means no ad dollars. Brands, though, don’t sell ads; they sell soap, or cars, or soda. Their business is somewhere completely different and, at the end of the day, they don’t care where you see their content as long as you see it. What this allows them to do is outsource their entire backend and audience acquisition to the big social platforms and just focus on the day-to-day content creation.

Finally, while it’s nice to think that more brands will deliver utilities and services on top of the utilities and services they already sell, delivering those services will require the very audience they’re building on Facebook, Twitter, and the like to begin with.

July 29, 2013

Technology Still Isn’t Ruining Anything

This is three years old, but I just ran across it and it’s just as relevant today as it was then. Apparently in response to Nicholas Carr’s book The Shallows, Steven Pinker wrote a great op-ed about how technology isn’t really ruining all that stuff that technology is constantly claimed to be ruining. A snippet:

The effects of consuming electronic media are also likely to be far more limited than the panic implies. Media critics write as if the brain takes on the qualities of whatever it consumes, the informational equivalent of “you are what you eat.” As with primitive peoples who believe that eating fierce animals will make them fierce, they assume that watching quick cuts in rock videos turns your mental life into quick cuts or that reading bullet points and Twitter postings turns your thoughts into bullet points and Twitter postings.

I try to post stuff like this whenever I see it because these sorts of arguments (the one from Carr that media is ruining our brains) drive me totally insane. Pinker, it’s clear, is someone Adam Gopnik would call an Ever-Waser: “The Ever-Wasers insist that at any moment in modernity something like this is going on, and that a new way of organizing data and connecting users is always thrilling to some and chilling to others–that something like this is going on is exactly what makes it a modern moment.”

Also, while we’re on the topic, XKCD had a pretty awesome comic taking down those stricken with nostalgia for a techless world.

July 15, 2013