I’ve written in the past about how a big part of what separated McLuhan from the rest of the pack was his ability to separate his morals from his observations. Well, I particularly liked this explanation of McLuhan’s approach from the introduction to the newest edition of The Gutenberg Galaxy: “We have to remember that Marshall McLuhan portrayed himself as an explorer and not as an explainer of media environments.”
Although I must admit I’ve never actually made it all the way through a David McCullough book, I really enjoyed this interview with him and particularly this explanation of his writing process (with a typewriter):
I love putting paper in. I love the way the keys come up and actually print the letters. I love it when I swing that carriage and the bell rings like an old trolley car. I love the feeling of making something with my hands. People say, But with a computer you could go so much faster. Well, I don’t want to go faster. If anything, I should go slower. I don’t think all that fast. They say, But you could change things so readily. I can change things very readily as it is. I take a pen and draw a circle around what I want to move up or down or wherever and then I retype it. Then they say, But you wouldn’t have to retype it. But when I’m retyping I’m also rewriting. And I’m listening, hearing what I’ve written. Writing should be done for the ear. Rosalee reads aloud wonderfully and it’s a tremendous help to me to hear her speak what I’ve written. Or sometimes I read it to her. It’s so important. You hear things that are wrong, that call for editing.
Makes me want to buy a typewriter.
This essay about McLuhan’s Gutenberg Galaxy includes a pretty good summation of his approach to media theorizing:
While book-lovers sometimes deride the blog/tweet/Facebook post/text message/YouTube video/surfing/gaming/Skyping world we’ve created, I don’t think proclaiming it right or wrong, or better or worse, is useful. I prefer McLuhan’s approach which is simply to ask: how far has new media seeped into popular consciousness?
I’ve been thinking about big data lately. Mostly I’ve been trying to articulate why it’s a big deal: I know it is, but the reason isn’t often put succinctly. Recently I had the thought that it’s such a big deal because it means we can move away from using samples to infer and toward using actuals to understand. That seems really obvious, but it wasn’t a connection I had made before (though I sort of think it was obvious to everyone else).
Anyway, on the big data tip, I found this Wired piece on “long data” pretty interesting (even though I thought I was going to hate it based on the title). The gist:
By “long” data, I mean datasets that have massive historical sweep — taking you from the dawn of civilization to the present day. The kinds of datasets you see in Michael Kremer’s “Population growth and technological change: one million BC to 1990,” which provides an economic model tied to the world’s population data for a million years; or in Tertius Chandler’s Four Thousand Years of Urban Growth, which contains an exhaustive dataset of city populations over millennia. These datasets can humble us and inspire wonder, but they also hold tremendous potential for learning about ourselves.
Not the most in-depth piece in the world, but I like new concepts, and this is one.
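The samples-versus-actuals distinction above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the "event log" and all numbers are made up): with the full dataset on hand, you no longer estimate the population mean from a small sample, you just compute it.

```python
import random

random.seed(42)

# Hypothetical event log: one number per user action
# (say, session length in minutes). In the "big data" framing,
# this is the full population, not a sample of it.
population = [random.gauss(30, 10) for _ in range(100_000)]

# The classical approach: infer the mean from a small random sample.
sample = random.sample(population, 100)
sample_mean = sum(sample) / len(sample)

# The "actuals" approach: just compute the mean over every record.
actual_mean = sum(population) / len(population)

print(f"sample estimate: {sample_mean:.2f}")
print(f"actual value:    {actual_mean:.2f}")
print(f"sampling error:  {abs(sample_mean - actual_mean):.2f}")
```

The sample estimate carries sampling error; the full computation simply doesn't. That gap is the whole "samples to infer vs. actuals to understand" point in miniature.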
There’s something magical about the first few moments of a new medium, as people experiment and try to figure out what it’s all about. It’s a period of uncertainty as a small group of people fumble with new technology and it’s fun to watch. Go back and read early Tweets or look at early Instagram photos and you get the equivalent of tapping the mic to see if it’s on.
I say this because I stumbled onto Vinepeek this morning, which shows a continuous stream of new Vines from Twitter. (For the uninitiated, Vine is a new product Twitter announced that lets people make 6-second looping videos.) Watching Vinepeek, I got to thinking that there is something really fascinating about combining a new technology people are still getting acquainted with and an API that lets people build experimental things on top of it. It’s like letting people play with the input and the output at the same time, and in the case of Vinepeek you get a very odd thing that feels like a little TV network peeking into people’s lives.
I’m sure it won’t be interesting in a few days, but there’s a real magic to combining experimentation on creation and distribution at the exact same time.
We’re doing a little conferencey thing on Monday in NYC for community managers and I just wanted to take the chance to invite some of you. We don’t have a ton of spots left, but if you’re interested drop me an email (noah at percolate).
To celebrate #CMGRs, Percolate is hosting a small invite-only event expanding on our SPEAKEASY Happy Hours called SPEAKEASY #CMAD. It’ll be a day full of learnings from brands, agencies and platforms including incredible speakers from LinkedIn, Denny’s, Getty Images, GE, MasterCard, Tumblr, IPG Media Labs and American Express.
I know I’ve mentioned this in the past, but I really love to read or listen to people who know lots about something (like music) describe why someone within that discipline (like a band) is good. Reading this little retrospective on Nirvana and Smells Like Teen Spirit feels a little like that. An excerpt:
Nirvana had the right song at the right time. Guys like these guitar store guys had seemingly never heard anything quite like it. They had been listening to classic rock, likely some Pat Metheny, Leo Kottke, and prog rock, no doubt. They probably listened at one point or another to some Clash and Ramones. But this song represented something significantly different for them. It astonished them like a shiny object. They hit repeat again.
I liked Nirvana, but I didn’t know why. (Or I did know why: Because MTV told me it was “good”.) You hear a lot of people say they were overrated or underrated or rated just right, but they seldom give any historical context for why they mattered at the time other than some weird cultural Gen X explanation. I’m not sure how true this explanation is, but it comes from a very interesting place.
Over at The Awl, Choire Sicha has an interesting little piece on how headlines have changed in the last few years. The gist:
Here’s a flashback. In 2007, a popular video of a baby getting dropkicked by a breakdancer (hard to believe I just typed that) was headlined “Times Square Still Extremely Unsafe for Children” on Gawker, which is pretty so-so as a headline but still funny. There’s no way that would get that headline now. (“Breakdancing in Times Square – Baby goes flying!” was the YouTube video headline.) “Watch This Baby Get Drop-Kicked By a Subway Breakdancer” is what I’d predict for our age. You have to really tell the folks on Twitter what’s happening for your clicks ‘n’ shares, you see.
It’s an interesting observation, and clearly a result of the shift to social as a traffic driver. It also feels a lot like what was written as we shifted to SEO’ed headlines, but maybe that’s just the ever-wasser in me talking.
Over at the New Yorker, Adam Gopnik draws an interesting parallel between drunk driving and guns:
If one needs more hope, one can find it in the history of the parallel fight against drunk driving. When that began, using alcohol and then driving was regarded as a trivial or a forgivable offense. Thanks to the efforts of MADD and the other groups, drunk driving became socially verboten, and then highly regulated, with some states now having strong “ignition interlock” laws that keep drunks from even turning the key. Drunk driving has diminished, we’re told, by as much as ten per cent per year in some recent years. Along with the necessary, and liberty-limiting, changes in seat-belt enforcement and the like, car culture altered. The result? The number of roadway fatalities in 2011 was the lowest since 1949. If we can do with maniacs and guns what we have already done with drunks and cars, we’d be doing fine. These are hard fights, but they can be won.
Quick and sort of crazy story of a guy in Las Vegas whose house seems to be some sort of default GPS coordinate for some lost Sprint phones:
Dobson was told that cellphone GPS systems don’t provide exact locations – they give a general location of where to start your search. And for some reason his house is that location for his area.
People keep turning up at his house and demanding he give them their phones back.
Often the stories of unintended consequences of technology are much more interesting than the intended ones.
Like many, I’ve been reading everything I can find since I heard that Aaron Swartz had committed suicide. He’s not someone I knew, but certainly someone I paid attention to and read pretty frequently. He also had one of the best definitions of blogging I read:
So that’s what this blog is. I write here about thoughts I have, things I’m working on, stuff I’ve read, experiences I’ve had, and so on. Whenever a thought crystallizes in my head, I type it up and post it here. I don’t read over it, I don’t show it to anyone, and I don’t edit it — I just post it. … I don’t consider this writing, I consider this thinking.
Anyway, of all this stuff I’ve been reading about the case, his impact on the world and everything else, I found this description of where he, and more broadly cultural activist hackers, fit into the historical context very interesting:
I knew Swartz, although not well. And while he was special on account of his programming abilities, in another way he was not special at all: he was just another young man compelled to act rashly when he felt strongly, regardless of the rules. In another time, a man with Swartz’s dark drive would have headed to the frontier. Perhaps he would have ventured out into the wilderness, like T. E. Lawrence or John Muir, or to the top of something death-defying, like Reinhold Messner or Philippe Petit. Swartz possessed a self-destructive drive toward actions that felt right to him, but that were also defiant and, potentially, law-breaking. Like Henry David Thoreau, he chased his own dreams, and he was willing to disobey laws he considered unjust.
Interesting take on technology and automation in the form of a blind coffee taste test of hand-made espresso versus Nespresso machine:
Does this herald the death of artisan coffee, except in those exclusive enclaves where the very best, most obsessive practitioners ply their trade? And is the writing on the wall for other areas of human excellence where we cling to the idea that artisanal is best? A lifeline might seem to be provided by the detailed reviews of the coffees we tasted. The key descriptors for Nespresso were ‘smooth’ and ‘easy to drink’. And from the point of view of restaurateurs who use it, the key word is ‘consistency’. It was far from bland, but it was not challenging or distinctive either. It’s a coffee everyone can really like but few will love: the highest common denominator, if you like. The second-place coffee had more bite, and was the favourite of myself and the 10-cup-a-day connoisseur, but scored a pathetic two points from one person on the panel who took against it.
First off, that makes me a little sad because I really like making espresso, and it makes me feel a bit like a sucker who is drinking inferior coffee. What’s interesting to me, and the article sort of gets at, is that I enjoy a cup of coffee I make as much for the process as the taste. There is something really nice about going through the grind, tamp, pour, clean, drink cycle (at least when you’re making it on your own). Second, the points in here remind me of something I read in a New Yorker article about Dogfish Head beer a few years ago. The article pointed out that even the crankiest craft brewer respects Budweiser for their ability to create a consistent product.
Mother Jones has a short piece about the “negative consequences of vituperative online comments for the public understanding of science” (aka comment trolling):
The researchers were trying to find out what effect exposure to such rudeness had on public perceptions of nanotech risks. They found that it wasn’t a good one. Rather, it polarized the audience: Those who already thought nanorisks were low tended to become more sure of themselves when exposed to name-calling, while those who thought nanorisks are high were more likely to move in their own favored direction. In other words, it appeared that pushing people’s emotional buttons, through derogatory comments, made them double down on their preexisting beliefs.
Because I can’t really let anything get away without being some sort of McLuhan reference, the conclusion pretty clearly lays out the fact that the medium is shaping the message we receive:
The upshot of this research? This is not your father’s media environment any longer. In the golden oldie days of media, newspaper articles were consumed in the context of…other newspaper articles. But now, adds Scheufele, it’s like “reading the news article in the middle of the town square, with people screaming in my ear what I should believe about it.”
I’ve been trying to get through my Instapaper backlog lately. It’s a kind of New Year’s resolution thing, but mostly a reaction to reading books for a while. That’s not all that important except to explain why I’ll probably be posting some old stuff over the coming weeks.
Anyway, I was struck reading this post from 2009 by Kevin Kelly on technology and how he explained the clock in a very McLuhan-esque way:
Seemingly simple inventions like the clock had profound social consequences. The clock divvied up an unbroken stream of time into measurable units, and once it had a face, time became a tyrant, ordering our lives. Danny Hillis, computer scientist, believes the gears of the clock spun out science, and all its many cultural descendants. He says, “The mechanism of the clock gave us a metaphor for self-governed operation of natural law. (The computer, with its mechanistic playing out of predetermined rules, is the direct descendant of the clock.) Once we were able to imagine the solar system as a clockwork automaton, the generalization to other aspects of nature was almost inevitable, and the process of Science began.”
One of the best ways to judge just how interesting something really is is to see whether you’re still thinking about it days later. Anyway, another stop on my Instapaper archaeology was this excellent New Republic book review that talks about the relationship between the work of Jane Jacobs and Robert Moses. It’s a pretty balanced affair, suggesting that Jacobs may not have been as perfect an urban planner as she has since been painted and Moses may not have been the devil incarnate. I’ll leave the conclusion for you to read on your own, but here’s a quick snippet on where Jacobs doesn’t necessarily work for the realities of the city:
The Death and Life of Great American Cities argues that at least one hundred homes per acre are necessary to support exciting stores and restaurants, but that two hundred homes per acre is a “danger mark.” After that point of roughly six-story buildings, Jacobs thought that neighborhoods risked sterile standardization. (The one public housing project that Jacobs blessed, at least initially, had only five stories.) But keeping great cities low means that far too few people can enjoy the benefits of city life. Jacobs herself had the strange idea that preventing new construction would keep cities affordable, but a single course in economics would have taught her the fallacy of that view. If booming demand collides against restricted supply, then prices will rise.
This paragraph from The Awl on the possibility of a coffee drought wins the day:
Can you imagine? Think about how unpleasant people are already, with coffee. Think about how unpleasant people are about coffee. And I’m not even talking about your garden-variety dickheads who debate the merits of pour-over brew versus the Estonian flatiron reverse-osmosis method, which is probably a thing even though I just made it up. I’m talking about the people who are all, “I can’t start the day without coffee,” as if the rest of us aren’t just as tired and irritable without feeling the apparently deep-seated need to broadcast just how dependent we are on hot water dripped through crushed beans to help us contend with the arduous tasks of getting to work and turning on a computer. These are the people we’re going to have to club to death first during our grim, coffeeless future, which the New Scientist (registration required) sees as coming “by 2080.” Oh, wait, 65 years? We’ll all be long dead by then. Never mind.
The Chronicle of Higher Education has an interesting article on notes, which, as the article points out, are something we all constantly interact with and seldom discuss. Here’s a bit on digital note-taking systems:
Digital note-taking systems were a direct outgrowth of the early hypertext knowledge-representation systems. I had my first encounter with one of those when I arrived at the Xerox Palo Alto Research Center in the mid-1980s. In addition to their better-known innovations (the laser printer, the WYSIWYG text editor, the graphical user interface, the Ethernet), the center’s researchers developed the system Notecards. It was a thing of wonder, back when the computer could still induce that feeling. You could create notecards containing text or graphics, sort them into file boxes, and link them according to whatever relationship you chose (“source,” “example,” etc.), while navigating the whole network via an overview in a browser window. It was as close as you could come to a digital implementation of Placcius’s cabinet, freed from the material constraints of slips, hooks, and drawers and from the requirement that each slip fill only one slot in a network.
Two little bits on this: First, reading through this made me think a lot about this blog, which I’ve always sort of thought of as a notebook. Posts here are much more often notes in margins than they are fully-formed ideas. Second, it makes me think of an article I’ve read over a bunch of times on how Steven Johnson uses a tool called DevonThink to help him write books.
Finally, this line in the essay made me laugh: “The Post-it ranks as one of modern chemistry’s two major contributions to the work of annotation, as partial reparation for the highlighter pen, the colorist’s revenge on the printed page.”
I like this little story on Quora from Stewart Butterfield, one of the co-founders of Flickr. In response to why the company dropped the “e”, he explains it was because the guy who owned the flicker.com domain wouldn’t sell. But then he goes on to give this extra anecdote:
Bonus story: for a long time when I searched Google for “flickr” I got a “Did you mean flicker?” suggestion. I knew we’d have “made it” when that stopped. Eventually that message did stop showing up … and by 2005 or 2006 the search results page even asked “Did you mean flickr?” when searching for “flicker”. That’s when I knew it was big! (Google seems to have stopped doing that since.)
It would be great to collect the stories from all the founders who saw their products go big about when they knew they had “made it”.
I’ve been listening to a lot of podcasts lately, and one of them is New Yorker’s Out Loud. The last episode featured a great interview with Daniel Mendelsohn, a literary critic. In the podcast he mostly talks about the books that inspired him to become a writer, but then, towards the end, he talks a bit about the job of a cultural critic and I thought what he had to say was interesting enough to transcribe and share:
We now have these technologies that simulate reality or create different realities in very sophisticated and interesting ways. Having these technologies available to us allows us to walk, say, through midtown Manhattan but actually to be inhabiting our private reality as we do so: We’re on the phone or we’re looking at our smartphone, gazing lovingly into our iPhones. And this is the way the world is going, there’s no point complaining about it. But where my classics come in is I am amused by the fact our word idiot comes from the Greek word idiotes, which means a private person. It’s from the word idios, which means private as opposed to public. So the Athenians, or the Greeks in general who had such a highly developed sense of the radical distinction between what went on in public and what went on in private, thought that a person that brought his private life into public spaces, who confused public and private was an idiote, was an idiot. Of course, now everybody does this. We are in a culture of idiots in the Greek sense. To go back to your original question, what does this look like in the long run? Is it terrible or is it bad? It’s just the way things are. And one of the advantages about being a person who looks at long stretches of the past is you try not to get hysterical, to just see these evolving new ways of being from an imaginary vantage point in the future. Is it the end of the world? No, it’s just the end of a world. It’s the end of the world I grew up in when I was thinking of how you behaved in public. I think your job as a cultural critic is to take a long view.
I obviously thought the idiot stuff was fascinating, but also was interested in his last line about the job of a cultural critic, which, to me, really reflected something that struck me about McLuhan in the most recent biography of his by Douglas Coupland:
Marshall was also encountering a response that would tail him the rest of his life: the incorrect belief that he liked the new world he was describing. In fact, he didn’t ascribe any moral or value dimensions to it at all–he simply kept on pointing out the effects of new media on the individual. And what makes him fresh and relevant now is the fact that (unlike so much other new thinking of the time) he always did focus on the individual in society, rather than on the mass of society as an entity unto itself.
I’m guessing you heard about this, but earlier this year the University of California introduced a new identity system. It looked something like this:
As people are wont to do, they freaked out. In fact, they freaked out enough that the University eventually decided to drop the new logo. Now, with the controversy in the rearview mirror, I’ve read/listened to a few post-mortems on how and why something like this happened and I felt like chiming in. My credentials, like most commenters, are pretty thin, but I think they give me an interesting perspective. Beyond spending a sort of ridiculous amount of time thinking about brands, overseeing a product team including three designers and previously working in advertising overseeing creative teams for some time, I also built Brand Tags, the largest free database of perceptions about brands. I am, however, not a designer.
That last bit, especially, shapes my perception on conversations about design.
Okay, with disclosures behind us, a bit more background: When this new logo was introduced to the public (though apparently it had been on a roadshow for some time before it showed up on the web), it was misinterpreted as a replacement for the official seal of the University of California system. That seal looks like this:
This, apparently, was inaccurate. The new logo would not be replacing the seal, but rather helping to unify the various logos that had popped up across the different UC schools (the script Cal and UCLA logos are two examples). As occasionally happens the digerati spread an idea that wasn’t true. I know this isn’t shocking, but to be fair to all the bloggers on this one, the University hardly helped its case when it produced this video as a companion piece to explain the new identity:
I know this all seems like a slightly exhaustive bit of background, especially if you’ve been following this story, but I think it’s all important. In a long piece on RockPaperInk which spurred this piece, Christopher Simmons, a designer and former AIGA president, writes:
“Designers too often judge logos separate from their system…without understanding that one can’t function without the other,” criticized Paula Scher when I asked her views on the controversy, “It’s the kit of parts that creates a contemporary visual language and makes an identity recognizable, not just the logo. But often the debate centers on whether or not someone likes the form of the logo, or whether the kerning is right.” While acknowledging that all details are important, Scher also calls these quibbles “silly.” “No designer on the outside of the organization at hand is really qualified to render an informed opinion about a massive identity system until it’s been around and in practice for about a year,” she explains, “One has to observe it functioning in every form of media to determine the entire effect. This [was] especially true in the UC case.”
Which I mostly agree with. Logos don’t exist outside the system (for the most part) and, even more importantly, they don’t exist outside the collective consciousness they grow up in. This is something I got in quite a few arguments about while I was running Brand Tags. I would get an email from a company no one had ever heard of asking for me to post their logo, to which I invariably responded “no”. My reasoning, as I explained at the time, was that the point of the site was to measure brand perception and for people to have a perception, you need a brand, which you don’t have if no one knows who you are. Brands, as I’ve expressed in the past, live in people’s heads. They are the sum total of perceptions about them.
This is part of what makes it so tough to judge any sort of logo: lack of context. Even if you see the way the system works, you don’t have the rest of the context that would come with experiencing it in the wild. If you’re a high school senior and the new UC logo is on a sweatshirt worn by the girl you had a crush on, home for her freshman Christmas break, it’s going to have a very different meaning than if your first encounter is in the US News & World Report list of top US universities. Context shapes experience and we can’t forget that.
Which makes something Simmons writes later so confusing for me:
Design as a discipline is challenged by this notion of democracy, particularly in a viral age. We have become a culture mistrustful of expertise—in particular creative expertise. I share [UC Creative Director] Correa’s fear that this cultural position stifles design as designers increasingly lose ownership of the discourse. “If deep knowledge in these fields is weighed against the ‘likes’ and ‘tastes’ of the populace at large,” she warns, “we will create a climate that does not encourage visual or aesthetic exploration, play or inventiveness, since the new is often soundly refused.”
Most of the article, actually, blames the public (and designers specifically) for the way they misinterpreted and criticized the logo. That misinterpretation, however, is at least in part due to the context in which they experienced the logo. It’s near impossible, for instance, not to walk away from that introductory video, which the University itself produced, believing that the logo is replacing the seal. Design, I’d posit, is about far more than the logo or even the system; it’s the story that exists around the brand as a whole, and the designer is, at least in part, responsible for how that story is told. I agree with part of what’s written above: design is a tough discipline because everyone has an opinion. But that’s not really new, and it’s been lamented to death. People know what fonts are and many have heard of kerning or played with Photoshop. This is just the reality we live in. We can choose to ignore that reality and think we can put things out in the world without hearing from the many people who are “unqualified” to have opinions, or we can acknowledge it and try to spend as much time thinking about the context in which people first experience new identities as we spend on the identities themselves. It’s not a simple solution, but it’s a whole lot more sustainable.
Finally, we need to recognize that in this new world we all live in, where everyone has an opinion about everything (let’s not pretend that design is the only victim of this reality), it’s going to be harder than ever to stand behind convictions. On the one hand this can mean “a climate that does not encourage visual or aesthetic exploration, play or inventiveness,” as the UC Creative Director says, or it can mean that we need to do more to educate everyone involved in the decision-making process about what’s to come. We need to help them understand the design process, the effect of context and the potential for backlash (along with our plan for how to deal with it).
Or we can do boring stuff.
Though I didn’t quote it anywhere here, a lot of my thinking in this piece was shaped by the very even coverage on this issue from 99% Invisible, which I would highly recommend listening to.
Last year I listed out my five favorite pieces of longform writing and it seemed to go over pretty well, so I figured I’d do the same again this year. It was harder to compile the list this year, as my reading took me outside just Instapaper (especially to the fantastic Longform app for iPad), but I’ve done my best to pull these together based on what I most enjoyed/found most interesting/struck me the most.
One additional note before I start my list: to make this process a bit simpler next year, I’ve decided to start a Twitter feed that pulls from my Instapaper and Readability favorites. You can find it at @HeyItsInstafavs. Okay, onto the list.
- The Yankee Comandante (New Yorker): Last year David Grann took my top spot with A Murder Foretold and this year he again takes it with an incredible piece on William Morgan, an American soldier in the Cuban revolution. The article was impressive enough that George Clooney bought up the rights and is apparently planning to direct a film about the story. The thing about David Grann is that beyond being an incredible reporter and storyteller, he’s also just an amazing writer. I’m not really a reader who sits there and examines sentences, I read for story and ideas. But a few sentences, and even paragraphs, in this piece made me take notice. While we’re on David Grann, I also read his excellent book of essays this year (most of which come from the New Yorker), The Devil & Sherlock Holmes. He is, without a doubt, my favorite non-fiction writer working right now.
- Raise the Crime Rate (n+1): This article couldn’t be more different from the first. Rather than narrative non-fiction, it’s an interesting, and well-presented, argument for abolishing the prison system. The basic thesis of the piece is that we’ve made a terrible ethical decision in the US to offload crime from our cities to our prisons, where we let people get raped and stabbed with little-to-no recourse. The solution presented is to abolish the prison system (while also increasing capital punishment). Rare is the article that you don’t necessarily agree with but walk away talking and thinking about. That’s why this piece made my list. I read it again last week and still don’t know where I stand, but I know it’s worth reading and thinking about. (While I was trying to get through my Instapaper backlog I also came across this Atul Gawande piece from 2009 on solitary confinement and its effects on humans.)
- Open Your Mouth & You’re Dead (Outside): A look at the totally insane “sport” of freediving, where athletes swim hundreds of feet underwater on a single breath (and often come back to the surface passed out). This is scary and crazy and exciting and that’s reason enough to read something, right?
- Jerry Seinfeld Intends to Die Standing Up (New York Times): I’ve been meaning to write about this but haven’t had a chance yet. Last year HBO had this amazing special called Talking Funny in which Ricky Gervais, Chris Rock, Louis CK and Jerry Seinfeld sit around and chat about what it’s like to be the four funniest men in the world. The format was amazing: Take the four people who are at the top of their profession and see what happens. But what was especially interesting, to me at least, was the deference the other three showed to Seinfeld. I knew he was accomplished, but I didn’t know that he commanded the sort of respect amongst his peers that he does. Well, this Times article expands on that special and explains what makes Seinfeld such a unique comedian and such a careful crafter of jokes. (For more Seinfeld stuff make sure to check out his new online video series, Comedians in Cars Getting Coffee, which is just that.)
- The Malice at the Palace (Grantland): I would say as a publication Grantland outperformed just about every other site on the web this year, so this pick is part acknowledgement of that and part praise for a pretty amazing piece of reporting (I guess you could call an oral history that, right?). Anyway, this particular oral history is about the giant brawl that broke out in Detroit at a Pacers v. Pistons game, which spilled into a fight between the Pacers and the Detroit fans. It was an ugly mark for basketball and an incredibly memorable (and insane) TV event. As a sort of aside on this, I’ve been casually reading Bill Simmons’ Book of Basketball and in it he obviously talks about this game/fight. In fact, he calls it one of his six biggest TV moments, which he judges using the following criteria: “How you know an event qualifies: Will you always remember where you watched it? (Check.) Did you know history was being made? (Check.) Would you have fought anyone who tried to change the channel? (Check.) Did your head start to ache after a while? (Check.) Did your stomach feel funny? (Check.) Did you end up watching about four hours too long? (Check.) Were there a few ‘can you believe this’–type phone calls along the way? (Check.) Did you say ‘I can’t believe this’ at least fifty times?” I agree with that.
And, like last year, there are a few that were great but didn’t make the cut. Here are two more:
- Snow Fall (New York Times): Everyone is going crazy about this because of the elaborate multimedia experience that went along with it, but I actually bought the Kindle single and read it in plain old black and white and it was still pretty amazing. Also, John Branch deserves to be on this list because he wrote something that would have made my list last year had it not come out in December: Punched Out is the amazing and sad story of Derek Boogaard and what it’s like to be a hockey enforcer.
- Marathon Man (New Yorker): A very odd, but intriguing, “exposé” on a dentist who liked to cheat at marathons.
That’s it. I’ve made a Readlist with these seven selections which makes it easy to send them all to your Kindle or Readability. Good reading.
I haven’t seen Django Unchained yet (though I want to, and I loved Inglourious Basterds), but I found this insight into Tarantino’s process very interesting. From a New York Times interview with the director:
I have a writer’s journey going on and a filmmaker’s journey going on, and obviously they’re symbiotic, but they also are separate. When I write my scripts it’s not really about the movie per se, it is about the page. It’s supposed to be literature. I write stuff that’s never going to make it in the movie and stuff that I know wouldn’t even be right for the movie, but I’ll put it in the screenplay. We’ll decide later do we shoot it, do we not shoot it, whatever, but it’s important for the written work.
I think about this at Percolate sometimes and always err on the side of over-documentation. I like the idea of building a narrative around something that extends far beyond what’s necessary, as the additional context creates an important background for decisions. In Tarantino’s case, I have to imagine part of the reason he gets such good performances out of the actors in his films is that they’re given such a rich text to work with.
More on decisions, this time about how our ability to make them is actually a finite resource:
Willpower—the popular idea is that it’s something that you use to resist temptation and to make yourself work. But they’ve also found that this same energy is used in making decisions, simply deciding what to have for lunch, what to do at a meeting; all these things deplete the same resource. After a while, when you’ve depleted this resource, it’s a state called ego depletion. You’ve got less self-control, you’re more prone to give in to temptation, it’s harder for you to work, and you tend to make worse decisions.
As I was digging through my old Instapapers while I was away (I read like a madman and hardly got through any), I came across this article about Obama from 2010. This little story about trying to make fewer decisions really struck me:
Rahm Emanuel tells a story. The time is last December, when the White House was juggling an agenda that included the Afghanistan troop surge, the health-care bill, the climate talks in Copenhagen, and Obama’s acceptance of a Nobel Peace Prize that threatened to do him more political harm than good—one issue on top of another. It got to the point where Obama and Emanuel would joke that, when it was all over, they were going to open a T-shirt stand on a beach in Hawaii. It would face the ocean and sell only one color and one size. “We didn’t want to make another decision, or choice, or judgment,” Emanuel told me. They took to beginning staff meetings with Obama smiling at Emanuel and simply saying “White,” and Emanuel nodding back and replying “Medium.”
It’s especially interesting when you add this nugget from Michael Lewis’s October piece on the president (which I haven’t read yet, but this quote came across my internets somehow):
“You’ll see I wear only gray or blue suits,” he said. “I’m trying to pare down decisions. I don’t want to make decisions about what I’m eating or wearing. Because I have too many other decisions to make.” He mentioned research that shows the simple act of making decisions degrades one’s ability to make further decisions. It’s why shopping is so exhausting. “You need to focus your decision-making energy. You need to routinize yourself. You can’t be going through the day distracted by trivia.”
I was reading this New Yorker piece about the Grateful Dead at my friend Colin’s recommendation and I liked the notion of “blesh”:
“More Than Human” is a sci-fi novel, published in 1953, in which a band of exceptional people “blesh” (that is, blend and mesh) their consciousness to create a kind of super-being. “I turned everyone on to that book in, like, 1965,” Lesh said. “ ‘This is what we can do; this is what we can be.’”
Which reminded me a bit of scenius:
The musician and artist Brian Eno coined the odd but apt word “scenius” to describe the unusual pockets of group creativity and invention that emerge in certain intellectual or artistic scenes: philosophers in 18th-century Scotland; Parisian artists and intellectuals in the 1920s. In Eno’s words, scenius is “the communal form of the concept of the genius.” New York hasn’t yet reached those heights in terms of internet innovation, but clearly something powerful has happened. There is genuine digital-age scenius on its streets. This is good news for my city, of course, but it’s also an important case study for any city that wishes to encourage innovative business. How did New York pull it off?
Kevin Kelly has a good article at Wired.com about our robotic future. He writes about our ability to invent new things to do as our old activities are replaced by machines:
Before we invented automobiles, air-conditioning, flatscreen video displays, and animated cartoons, no one living in ancient Rome wished they could watch cartoons while riding to Athens in climate-controlled comfort. Two hundred years ago not a single citizen of Shanghai would have told you that they would buy a tiny slab that allowed them to talk to faraway friends before they would buy indoor plumbing. Crafty AIs embedded in first-person-shooter games have given millions of teenage boys the urge, the need, to become professional game designers—a dream that no boy in Victorian times ever had. In a very real way our inventions assign us our jobs. Each successful bit of automation generates new occupations—occupations we would not have fantasized about without the prompting of the automation.
Apparently Zaha Hadid is working on a new building in China and it’s being pirated … AS SHE’S BUILDING THE ORIGINAL. This sounds like a weird William Gibson future world:
But the appeal of the Pritzker Prize winner’s experimental architecture, especially since the unveiling of her glowing, crystalline Guangzhou Opera House two years ago, has expanded so explosively that a contingent of pirate architects and construction teams in southern China is now building a carbon copy of one of Hadid’s Beijing projects.
What’s worse, Hadid said in an interview, she is now being forced to race these pirates to complete her original project first.
[Via Ed Cotton]
Walking around Tokyo today I passed a Bathing Ape store and got onto the topic of how the brand came to be. After a little Googling I ran across this excellent article documenting the fall of the brand, which eventually arrives at this interesting theory of “cultural arbitrage”:
The hipster elite are starting to show annoyance at this development. Former Mo’ Wax guru James Lavelle, quoted in Tokion, lamented that it is now impossible to stay “underground.” Lavelle and his kindred folk profit from exploiting cultural arbitrage: taking information from inaccessible sources and cashing in on that unequal access to information. (In general, a lot of people whom you probably think are cooler than you make a bulk of their money from this inequality in information.) No one in the West knew that Bape is a mainstream brand in Japan, and therefore, Lavelle was able to subtly and indirectly create the brand image to his own liking…* Now, with the high speed “information superhighway,” profit from cultural arbitrage business looks doubtful in the long run.
It’s not revolutionary, but it’s a nice way to think about how culture moves.
* I had to cut out a few sentences because they talk about how financial arbitrage used to work but no longer does, which just isn’t true.
The New Yorker has a really interesting blog post about how the Second Amendment came to mean what many now believe it to mean. Turns out we didn’t always see things the way we do:
Enter the modern National Rifle Association. Before the nineteen-seventies, the N.R.A. had been devoted mostly to non-political issues, like gun safety. But a coup d’état at the group’s annual convention in 1977 brought a group of committed political conservatives to power—as part of the leading edge of the new, more rightward-leaning Republican Party. (Jill Lepore recounted this history in a recent piece for The New Yorker.) The new group pushed for a novel interpretation of the Second Amendment, one that gave individuals, not just militias, the right to bear arms. It was an uphill struggle. At first, their views were widely scorned. Chief Justice Warren E. Burger, who was no liberal, mocked the individual-rights theory of the amendment as “a fraud.”
The article goes on to explain how interesting it is that this represents a “living” constitution that adapts with the times, something conservatives generally fight against:
But the N.R.A. kept pushing—and there’s a lesson here. Conservatives often embrace “originalism,” the idea that the meaning of the Constitution was fixed when it was ratified, in 1787. They mock the so-called liberal idea of a “living” constitution, whose meaning changes with the values of the country at large. But there is no better example of the living Constitution than the conservative re-casting of the Second Amendment in the last few decades of the twentieth century. (Reva Siegel, of Yale Law School, elaborates on this point in a brilliant article.)
I’ve always kind of wondered what made cashmere so much more expensive than wool other than the fact it’s softer. Slate has an answer:
Its costly production process and scarcity. Cashmere comes from the soft undercoat of goats bred to produce the wool. It takes more than two goats to make a single two-ply sweater. The fibers of the warming undercoat must be separated from a coarser protective top coat during the spring molting season, a labor-intensive process that typically involves combing and sorting the hair by hand. These factors contribute to the relatively low global production rate of cashmere—approximately 30,000 pounds a year compared to about 3 billion pounds of sheep’s wool.
So there you have it. Undercoats is the answer.
Before I left for my trip to Asia I went to see Zero Dark Thirty, the movie about the hunt for, and ultimately killing of, Osama Bin Laden. Before, and after, seeing it I had read quite a bit about the raid, the movie and the controversy around both. I thought maybe it would be worth collecting all this stuff into a post, so that’s what I’m doing.
First, on the movie itself. A lot of people really like it (the most interesting point Denby makes in this podcast is the idea that this and Lincoln spell the end of auteur theory as they show the power of the writer/director combo). I thought it was pretty okay. In reading around, I think Roger Ebert sums up my opinions best in his review of the film:
My guess is that much of the fascination with this film is inspired by the unveiling of facts, unclearly seen. There isn’t a whole lot of plot — basically, just that Maya thinks she is right, and she is. The back story is that Bigelow has become a modern-day directorial heroine, which may be why this film is winning even more praise than her masterful Oscar-winner “The Hurt Locker.” That was a film firmly founded on plot, character and actors whose personalities and motivations became well-known to the audience. Its performances are razor-sharp and detailed, the acting restrained, the timing perfect.
In comparison, “Zero Dark Thirty” is a slam-bang action picture, depending on Maya’s inspiration. One problem may be that Maya turns out to be correct, with a long, steady build-up depriving the climax of much of its impact and providing mostly irony. Do we want to know more about Osama bin Laden and al Qaida and the history and political grievances behind them? Yes, but that’s not how things turned out. Sorry, but there you have it.
One thing that I found particularly interesting in the film was the very short sequence on the doctor who had gone around Abbottabad under the cover of a vaccination campaign while actually collecting DNA. I remembered reading about him in the original New Yorker account of the raid and thought it had made clear he had been successful in collecting DNA evidence (it turns out the article says he wasn’t, the same way it’s presented in the film). January’s GQ has a longer account of what happened to the doctor who helped the CIA and tries to get at whether he was successful in his mission. (The answers: He was tortured and imprisoned by the Pakistani government for assisting the Americans and, as to whether he got evidence, it’s still unclear.)
If you’re interested in more reading on the subject, No Easy Day, an account by a Navy SEAL who was on the mission, is a fast and interesting read. And although I haven’t read it, my friend Colin Nagy highly recommends The Triple Agent, which covers what happened at Khost, where a Jordanian triple agent beat CIA intelligence and security to bomb a military base and kill a sizable group of CIA operatives (there’s a scene in Zero Dark Thirty about it, though the film offers no real depth on what happened).
This New England Journal of Medicine editorial has a really interesting stat I haven’t seen anywhere else about how gun owners feel about gun laws:
These proposals enjoy broad support. In fact, public-opinion polls have shown that 75 to 85% of firearm owners, including specifically members of the National Rifle Association (NRA) in some cases, endorse comprehensive background checks and denial for misdemeanor violence; 60 to 70% support denial for alcohol abuse. (It is deeply ironic that our current firearm policies omit regulations that are endorsed by firearm owners, let alone by the general public.)
Unfortunately there’s no citation, but I’d be really interested to know how aligned gun owners are with the NRA.
This is just strange:
Diving Chess is a chess variant, which is played in a swimming pool. Instead of using chess clocks, each player must submerge themselves underwater during their turn, only to resurface when they are ready to make a move. Players must make a move within 5 seconds of resurfacing (they will receive a warning if not, and three warnings will result in a forfeit). Diving Chess was invented by American Chess Master Etan Ilfeld; the very first exhibition game took place between Ilfeld and former British Chess Champion William Hartston at the Thirdspace gym in Soho on August 2nd, 2011. Hartston won the match which lasted almost two hours such that each player was underwater for an entire hour.
More fun Christmasy stuff, this time it’s from The Week and comes in the form of a doctor examining the true extent of the injuries to the burglars in Home Alone. I’m partial to his explanation of the effect of the burning-hot doorknob:
If this doorknob is glowing visibly red in the dark, it has been heated to about 751 degrees Fahrenheit, and Harry gives it a nice, strong, one- to two-second grip. By comparison, one second of contact with 155 degree water is enough to cause third degree burns. The temperature of that doorknob is not quite hot enough to cause Harry’s hand to burst into flames, but it is not that far off… Assuming Harry doesn’t lose the hand completely, he will almost certainly have other serious complications, including a high risk for infection and ‘contracture’ in which resulting scar tissue seriously limits the flexibility and movement of the hand, rendering it less than 100 percent useful. Kevin has moved from ‘defending his house’ into sheer malice, in my opinion.
Interesting perspective (and data) on the effect of online retailing and the general environment on malls:
I agree with the above perspectives, although I believe they likely understate the eventual impact on malls. A report from Co-Star observes that there are more than 200 malls with over 250,000 square feet that have vacancy rates of 35% or higher, a “clear marker for shopping center distress.” These malls are becoming ghost towns. They are not viable now and will only get less so as online continues to steal retail sales from brick-and-mortar stores. Continued bankruptcies among historic mall anchors will increase the pressure on these marginal malls, as will store closures from retailers working to optimize their business. Hundreds of malls will soon need to be repurposed or demolished. Strong malls will stay strong for a while, as retailers are willing to pay for traffic and customers from failed malls seek offline alternatives, but even they stand in the path of the shift of retail spending from offline to online.
Living in New York it’s easy to forget the importance of malls in retail. Haven’t ever completely understood why that is exactly, but the malls in New York (the only two I can think of off the top of my head are South Street Seaport and Herald Square) feel like afterthoughts and are filled with stores that feel out of place in an otherwise retail-hungry city.
The letters also provide another important piece of information—fingerprints. We run these through databases maintained by the FBI, CIA, NSA, Interpol, MI6, and the Mossad. If we find a match, it goes straight on the Naughty List. We also harvest a saliva sample from the flap of the envelope in which the letter arrives in order to establish a baseline genetic identity for each correspondent. This is used to determine if there might be an inherent predisposition for naughtiness. A detailed handwriting analysis is performed as part of a comprehensive personality workup, and tells us which children are advancing nicely with their cursive and which are still stubbornly forming block letters with crayons long past the age when this is appropriate.
I really enjoyed this FT review of a few books on the origin of words and misspellings. Especially interesting was this note on how dictionaries came to represent current language:
Why did the editors of Webster’s Third drop this lexicographic A-bomb (another addition to the dictionary)? Because views on dictionaries, indeed on language itself, had changed. Instead of laying down rules on how people should write and speak, dictionaries became records of how people did write and speak. And that meant all the people, not just those who spoke the educated language of New England. The new trends in lexicography went along with the growth of scientific method and Charles Darwin’s theory of evolution: lexicographers observed what was happening to the language, rather than handing down precepts.
My sister sent me this link to the ten best Muppet Christmas moments and it was conspicuously missing my all-time favorite Muppet moment from A Muppet Family Christmas. All the Muppets turn up at Fozzie’s mom’s house for Christmas even though Doc (from Fraggle Rock) was renting it as a quiet escape. As Bert and Ernie come in this conversation happens between the three of them:
Ernie: Oh, hi there, we’re Ernie and Bert.
Doc: Well, hi there yourself, I’m Doc.
Bert: Oh, did you know that Doc starts with the letter D?
Doc: Why, yes.
Ernie: Yes! Yes, starts with the letter Y.
Ernie: And true starts with the letter T.
Doc: What is this?
Bert: Where we come from this is small talk.
That line gets me every time.
This was a little tidbit I picked up at the Most Contagious event in London. Turns out EA’s FIFA 13 is set up to work with Kinect voice recognition on Xbox. Specifically, if you swear at the TV while you’re playing the game you’ll be penalized:
But the system will also listen for players whose frustration gets the better of them. Swearing at the referee will influence their decision-making, possibly leading to more bookings. EA says that in Fifa 13’s career mode, gamers will find that storylines will develop if they acquire a reputation for abusing referees.
I’ve been reading Bill Simmons’ Book of Basketball and in one of the footnotes he mentioned that the NBA (and all the other sports leagues) has a contingency plan in the case of a team losing all its players to a horrific accident. I guess it’s not surprising, but it’s kind of crazy to read the rules from this 1992 New York Times article. Here’s how it would work in basketball:
The National Basketball Association has a contingency plan that goes into effect if five or more players on any team “die or are dismembered,” according to Rod Thorn, the league’s operations director. The league would permit only five players on every other club to be protected, insuring that a fairly good player — the sixth best — could be drafted by the club suffering the tragedy. Each of the contributing clubs could lose only one player.