I really like stories that illustrate how many factors ultimately go into the way you feel about a brand/design/marketing. I wrote a bit about how Jony Ive thinks about this last week and I thought this was another interesting example from a very different place. In the early 90s a designer named Alexander Julian was given the opportunity to redesign the UNC Tar Heels basketball uniform. He was a huge Tar Heels fan and thus felt a ton of pressure to deliver something amazing. Not wanting to leave things to chance, he looped Michael Jordan into the decision (Jordan, at the time, was just starting his ascent to becoming the greatest player in NBA history, but he was already UNC royalty). Ultimately Julian sent all the designs to Jordan to let him sign off on his favorite:
“And guess what? As soon as Michael [Jordan] said that [the argyle design was his favorite], then the entire team also liked the argyle best. So we made the first uniform in Michael’s size, sent it to Chicago, he worked out in it, then we sent it down to Chapel Hill. There was near frenzy, I’m told, in the locker room as to who was going to be the first Carolina player to put it on after Michael because they wanted Michael’s mojo. Hubert Davis (photo, above right) won, he was the same size and he was the model. Now he’s a great sportscaster.”
Ran across an interesting quote (reportedly) by Jony Ive about the difference between measurable (speed, hard drive size, etc.) attributes and the non-measurable ones:
But there are a lot of product attributes that don’t have those sorts of measures. Product attributes that are more emotive and less tangible. But they’re really important. There’s a lot of stuff that’s really important that you can’t distill down to a number. And I think one of the things with design is that when you look at an object you make many many decisions about it, not consciously, and I think one of the jobs of a designer is that you’re very sensitive to trying to understand what goes on between seeing something and filling out your perception of it. You know we all can look at the same object, but we will all perceive it in a very unique way. It means something different to each of us. Part of the job of a designer is to try to understand what happens between physically seeing something and interpreting it.
I think about this a lot. One of the things that inspired Brand Tags originally was a similar quote from my friend Martin Bihl’s 2002 AdWeek article: “The way I look at it, a brand only exists in the consumer’s mind. That other product isn’t a brand yet because consumers don’t really know about it. It’s still a product.”
I’m playing around with publishing in a few different places these days. Trying out Medium for the first time where I wrote a piece on designing and building for states:
Although I may be bastardizing the term from an engineering point of view, when I talk about states I mean all the possible outcomes of a new feature: What happens when you press this button, or that button, or those buttons together, or we get this data back but not that data. Bugs, for the most part, are a matter of overlooked states. From a design perspective, states are about thinking through all the different ways the elements on the page might live and interact. This includes obvious ones like empty states and error messages as well as not-so-obvious ones like random button combinations or accidental page refreshes.
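A loose sketch of what thinking in states might look like in code (the names here, like `ViewState` and `render`, are my own illustration, not from any real product):

```python
from enum import Enum, auto

# Hypothetical sketch of enumerating a feature's states up front.
class ViewState(Enum):
    LOADING = auto()
    EMPTY = auto()    # the request succeeded but returned nothing
    ERROR = auto()    # the request failed
    PARTIAL = auto()  # we got this data back but not that data
    READY = auto()

def render(state: ViewState) -> str:
    # Looking up every member forces a decision for each state;
    # a missing entry here is exactly an "overlooked state."
    messages = {
        ViewState.LOADING: "Loading...",
        ViewState.EMPTY: "Nothing here yet.",
        ViewState.ERROR: "Something went wrong. Try again?",
        ViewState.PARTIAL: "Showing what we could load.",
        ViewState.READY: "Here is your content.",
    }
    return messages[state]

print(render(ViewState.EMPTY))
```

The not-so-obvious states (an accidental page refresh dropping you back into LOADING, say) are the ones a sketch like this helps you notice before they become bugs.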
I wrote a piece over at Forbes.com about some stuff I’ve been thinking about lately, specifically how to start to understand the shifts we’re seeing in social. Here’s the opening two paragraphs:
A few months ago I was asked to put together a presentation about the future of social. As would be expected, I was pretty overwhelmed with the topic and turned it over and over in my head trying to figure out the best way to approach the question. Whenever I find myself in a situation like this I turn to my personal intellectual hero and the person I believe to be the greatest media thinker of the 20th Century, Marshall McLuhan. While he wrote long before the web existed, his theories around how media evolves and interacts with culture are more relevant than they’ve ever been.
At the heart of McLuhan’s theories is his most famous saying: “The medium is the message.” Though like most things McLuhan it requires a fair amount of unpacking, at its core is the idea that we’re affected more by our interactions with the medium itself than we are with the content we experience on it. “The ‘message’ of any medium or technology,” McLuhan explained, “is the change of scale or pace or pattern that it introduces into human affairs.” In his book “Understanding Media” he goes on to give an example: “The railway did not introduce movement or transportation or wheel or road into human society, but it accelerated and enlarged the scale of previous human functions, creating totally new kinds of cities and new kinds of work and leisure.” In other words, it realigned personal expectations and culture and expanded the definition of local.
Last week I sent myself an email with this quote from Felix Salmon’s blog post about the media’s response to the Boston mayhem:
There’s an art to working out where to find fast and reliable information, and to judging new information in light of old information, and to judging old information in light of new information. And there’s an art to synthesizing everything you know, from hundreds of different sources, into a single coherent narrative. It’s not easy, it’s not a skill that most people have, and it’s precisely where news organizations add value.
I have been thinking (and talking) a fair amount about media literacy lately, and this quote seemed to sum up the challenge really nicely. Media literacy is a very hard thing to nail down because, unlike regular literacy, it’s pretty hard to test for. Ultimately I think it’s about making sure people understand the role of media (whatever that might mean) in the way they experience and interact with the world around them. That can range from simply being aware that the order of the Google results you’re seeing is most likely not the same as mine (even for the same query) to, as Felix says, “working out where to find fast and reliable information.” Media literacy, like regular literacy I guess, is a scale (one with an ever-moving endgame, but I guess the same could be said of language).
Anyway, when I read this quote from Felix I thought of a few different games I like to play with myself that, although I never really thought of them that way, were sort of mini media-literacy tests:
The Snopes Test: When you read something, especially an email from a distant family member, you can immediately sniff out whether you’re going to be able to find an entry for it on Snopes. Knowing whether it’s been proven true or false is good for bonus points.
The GoDaddy Test: This is a funny one, and definitely less useful than the Snopes test, but it’s interesting to predict whether a random domain name someone comes up with is already taken or not.
The Wikipedia Test: This one is actually important I think. If I gave you a random topic, say Snoop Dogg, could you predict whether or not Wikipedia would be the top result? (Bonus points if you predict the actual top result.)
Anyway, all these things help illustrate the challenge with media literacy, which is that ultimately it’s an “I know it when I see it” skill. With that said, it’s one that will continue to have a larger and larger impact on culture as more people’s voices (and information) become a part of the news we all sift through on a daily basis.
I Tweeted this, but I thought it was worth sharing (and I’m trying to blog more). From the New England Journal of Medicine (which I grabbed the RSS for years ago and am always excited to run across), a bit of a post-mortem on the medical response to the Boston Marathon bombings. The whole thing is interesting (and very different than most of the stories on the bombing you’ll read), but the most interesting tidbit to me was this:
Although most health care providers in the United States have never treated a bombing victim, lessons learned by military surgeons, emergency physicians, and nurses in Iraq and Afghanistan are progressively percolating through the trauma care community.
I find stories of how new products and technologies get adopted quite fascinating. While propaganda is much more associated with politics than with brands, there’s a long history of companies using some of the same tactics to sway public opinion in favor of their products. The two examples that come to mind for me are stories like the diamond myth and Listerine’s invention of “halitosis.”
Anyway, a podcast I’ve been listening to lately, 99% Invisible, recently covered one of these public relations campaigns: the one that ultimately led to the acceptance of cars (and the invention of “jaywalking”). At the time, in the early 1920s, cars were killing lots of people who weren’t used to sharing the streets with them. The car industry had to do something, so it pushed a campaign that has now become familiar to us by way of the NRA: cars don’t kill people, bad drivers kill people. But more interesting, to me at least, was where jaywalking came from:
In the early 20th Century, “jay” was a derogatory term for someone from the countryside. Therefore, a “jaywalker” is someone who walks around the city like a jay, gawking at all the big buildings, and who is oblivious to traffic around him. The term was originally used to disparage those who got in the way of other pedestrians, but Motordom rebranded it as a legal term to mean someone who crossed the street at the wrong place or time.
[Editor's Note: This turned a bit rambly and I'm definitely out of my zone talking about the law, so feel free to skip if you're not up for a non-lawyer's opinion on the law after reading two articles about it.]
Sorry, but I’ve got some time this morning and, like many of you I’m sure, I’m spending it reading as much as I can about yesterday’s situation in Boston. If you were watching TV while the second suspect, Dzhokhar Tsarnaev, was found or listening later during the press conference, the question of whether he would be/was read his Miranda rights came up. In the moments after the capture there was some confusion, which was eventually cleared up during the press conference when the US Attorney Carmen Ortiz confirmed that he had not been read his Miranda rights under the “public safety” exception. I, like most I’d imagine, had never heard of the public safety exception before yesterday (or spent much time thinking about Miranda rights, to be honest).
Slate had an excellent explanation of what happened and why it’s a dangerous precedent:
And so the FBI will surely ask 19-year-old Tsarnaev anything it sees fit. Not just what law enforcement needs to know to prevent a terrorist threat and keep the public safe but anything else it deemed related to “valuable and timely intelligence.” Couldn’t that be just about anything about Tsarnaev’s life, or his family, given that his alleged accomplice was his older brother (killed in a shootout with police)? There won’t be a public uproar. Whatever the FBI learns will be secret: We won’t know how far the interrogation went. And besides, no one is crying over the rights of the young man who is accused of killing innocent people, helping his brother set off bombs that were loaded to maim, and terrorizing Boston Thursday night and Friday. But the next time you read about an abusive interrogation, or a wrongful conviction that resulted from a false confession, think about why we have Miranda in the first place. It’s to stop law enforcement authorities from committing abuses. Because when they can make their own rules, sometime, somewhere, they inevitably will.
This is one of those things where I don’t know quite how to feel. The FBI has a pretty extensive article on the subject that sheds some additional light (I know the FBI is probably not the most balanced outlet for this sort of stuff, but the article is a pretty good and comprehensible look at the history of the law, and the FBI is also strongly incentivized to get this stuff right, since if they don’t, any answers could be thrown out). The public safety exception was apparently introduced in a case where the police were chasing a rapist who, the victim informed them, had a gun. When they cornered him in a grocery store he had an empty holster, and the police asked where the gun was. The man, Benjamin Quarles, told them where he had hidden it, and they retrieved it. The court excluded the gun as evidence because the police had not read Quarles his rights. The ruling was appealed and eventually reached the Supreme Court, which decided that in situations where public safety was endangered, suspects could be questioned without being read their Miranda rights. (I’m not entirely sure why I’m summarizing all this and I’d suggest reading the whole article.)
Anyway, the more interesting case, also mentioned in the FBI piece, involved a police raid on a Brooklyn apartment where two suspected bombers lived. “During the raid, both men were shot and wounded as one of them grabbed the gun of a police officer and the other crawled toward a black bag believed to contain a bomb. When the officers looked inside the black bag, they saw pipe bombs and observed that a switch on one bomb was flipped.” From there, the police used the public safety exception to question one of the bombers, who had not yet been read his rights:
Officers went to the hospital to question Abu Mezer about the bombs. They asked Abu Mezer “how many bombs there were, how many switches were on each bomb, which wires should be cut to disarm the bombs, and whether there were any timers.” Abu Mezer answered each question and also was asked whether he planned to kill himself in the explosion. He responded by saying, “Poof.”
This case seems, at least to me, to be much closer to the root of the question. I don’t really understand how a gun hidden in a supermarket presents a public safety concern, since presumably the police could have searched the market for the gun after arresting the suspect. However, this latter situation, where there was a big bag of bombs, some of them ready to explode, seems like a pretty reasonable time to question someone before their rights are read.
What’s interesting about this, though, is that the question isn’t really whether you can question someone before their rights are read, since that’s obviously possible (and likely a frequent occurrence), but rather in what situations the answers can be used in court against the suspect. Here, again, I agree with Slate: if the questions they asked Tsarnaev were about whether he had planted more bombs around Boston, then that’s fair game, but as soon as they move outside that, things start to feel a lot less right.
Interestingly, the FBI article goes on to explain that Abu Mezer, of the bag-of-bombs case, felt the same way and eventually tried to get his last statement, about whether he intended to kill himself, thrown out:
Abu Mezer sought to suppress each of his statements, but the trial court permitted them, ruling that they fell within the public safety exception. On appeal, Abu Mezer only challenged the admissibility of the last question, whether he intended to kill himself when detonating the bombs. He claimed the question was unrelated to public safety. The circuit court disagreed and noted “Abu Mezer’s vision as to whether or not he would survive his attempt to detonate the bomb had the potential for shedding light on the bomb’s stability.”
Here, without reading the full decision or being a lawyer or knowing anything else about the case, I think I disagree with the court. It seems pretty thin to suggest that the police were given valuable information about the “stability” of the bomb by asking whether he intended to kill himself.
Yesterday morning I lay in bed and watched Twitter fly by. It was somewhere around 7am and lots of crazy things had happened overnight in Boston between the police and the marathon bombers. I don’t remember exactly where things were in the series of events when I woke up, but while I was watching, the still-on-the-loose suspect’s name was released for the first time. As reports started to come in and then, later, get confirmed, people on Twitter did the same thing as me: they started Googling.
As I watched the tiny facts we all uncovered start to turn up in the stream (he was a wrestler, he won a scholarship from the city of Cambridge, he had a link to a YouTube video) I was brought back to an idea I first came across in Bill Wasik’s excellent And Then There’s This. In the book he posits that as a culture we’ve become more obsessed with how a thing spreads than with the thing itself. He uses the success of Malcolm Gladwell’s Tipping Point to help make the point:
Underlying the success of The Tipping Point and its literary progeny [Freakonomics] is, I would argue, the advent of a new and enthusiastically social-scientific way of engaging with culture. Call it the age of the model: our meta-analyses of culture (tipping points, long tails, crossing chasms, ideaviruses) have come to seem more relevant and vital than the content of culture itself.
Everyone wanted to be involved in “the hunt,” whether it was on Twitter and Google for information about the suspected bomber, on the TV where reporters were literally chasing these guys around, or the police who were battling these two young men on a suburban street. Watching the new tweets pop up I got a sense that the content didn’t matter as much as the feeling of being involved, the thrill of the hunt if you will. As Wasik notes, we’ve entered an age where how things spread through culture is more interesting than the content itself.
To be clear, I’m not saying this is a good or a bad thing (I do my best to stay away from that sort of stuff), but it’s definitely a real thing and an integral part of how we all experience culture today. When I opened the newspaper this morning it was as much to see how much I knew and how closely I’d followed as it was to learn something new about the chase. After reading the cover story that recounted the previous day’s events I turned to Brian Stelter’s appropriately titled News Media and Social Media Become Part of a Real-Time Manhunt Drama.
Bruce Schneier on how one might hack the papal election. Here’s one of the extra security measures he’d add:
I would also add some kind of white-glove treatment to prevent a scrutineer from hiding a pencil lead or pen tip under his fingernails. Although the requirement to write out the candidate’s name in full provides some resistance against this sort of attack.
I’ve written in the past about how a big part of what separated McLuhan from the rest of the pack was his ability to separate his morals from his observations. Well, I particularly liked this explanation of McLuhan’s approach from the introduction to the newest edition of The Gutenberg Galaxy: “We have to remember that Marshall McLuhan portrayed himself as an explorer and not as an explainer of media environments.”
Although I must admit I’ve never actually made it all the way through a David McCullough book, I really enjoyed this interview with him and particularly this explanation of his writing process (with a typewriter):
I love putting paper in. I love the way the keys come up and actually print the letters. I love it when I swing that carriage and the bell rings like an old trolley car. I love the feeling of making something with my hands. People say, But with a computer you could go so much faster. Well, I don’t want to go faster. If anything, I should go slower. I don’t think all that fast. They say, But you could change things so readily. I can change things very readily as it is. I take a pen and draw a circle around what I want to move up or down or wherever and then I retype it. Then they say, But you wouldn’t have to retype it. But when I’m retyping I’m also rewriting. And I’m listening, hearing what I’ve written. Writing should be done for the ear. Rosalee reads aloud wonderfully and it’s a tremendous help to me to hear her speak what I’ve written. Or sometimes I read it to her. It’s so important. You hear things that are wrong, that call for editing.
Makes me want to buy a typewriter.
In this essay about McLuhan’s Gutenberg Galaxy is a pretty good summation of his approach to media theorizing:
While book-lovers sometimes deride the blog/tweet/Facebook post/text message/YouTube video/surfing/gaming/Skyping world we’ve created, I don’t think proclaiming it right or wrong, or better or worse, is useful. I prefer McLuhan’s approach which is simply to ask: how far has new media seeped into popular consciousness?
I’ve been thinking about big data lately. Mostly I’ve been trying to articulate why it’s a big deal, which I feel like I know but isn’t often put succinctly. Recently it hit me that the reason it’s such a big deal is that it lets us move away from using samples to infer and toward using actuals to understand. That seems really obvious, but it wasn’t a connection I had made before (though I sort of suspect it was obvious to everyone else).
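A toy illustration of the samples-versus-actuals point (entirely my own, with made-up numbers):

```python
import random

# With a "big data" mindset you can compute the actual population value
# instead of inferring it from a sample. Toy data: a million measurements.
random.seed(0)
population = [random.gauss(100, 15) for _ in range(1_000_000)]

# Old world: estimate the mean from a small sample, with sampling error.
sample = random.sample(population, 100)
sample_mean = sum(sample) / len(sample)

# New world: just compute it over everything. No inference needed.
actual_mean = sum(population) / len(population)

print(f"sample estimate: {sample_mean:.2f}, actual: {actual_mean:.2f}")
```

The sample gives you an estimate plus a margin of error; the full dataset just gives you the answer.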
Anyway, on the big data tip, I found this Wired piece on “long data” pretty interesting (even though I thought I was going to hate it based on the title). The gist:
By “long” data, I mean datasets that have massive historical sweep — taking you from the dawn of civilization to the present day. The kinds of datasets you see in Michael Kremer’s “Population growth and technological change: one million BC to 1990,” which provides an economic model tied to the world’s population data for a million years; or in Tertius Chandler’s Four Thousand Years of Urban Growth, which contains an exhaustive dataset of city populations over millennia. These datasets can humble us and inspire wonder, but they also hold tremendous potential for learning about ourselves.
Not the most in-depth piece in the world, but I like new concepts, and this is one.
There’s something magical about the first few moments of a new medium, as people experiment and try to figure out what it’s all about. It’s a period of uncertainty as a small group of people fumble with new technology and it’s fun to watch. Go back and read early Tweets or look at early Instagram photos and you get the equivalent of tapping the mic to see if it’s on.
I say this because I stumbled onto Vinepeek this morning, which shows a continuous stream of new Vines from Twitter. (For the uninitiated, Vine is a new product Twitter announced that lets people make six-second looping videos.) Watching Vinepeek, I got to thinking that there is something really fascinating about combining a new technology people are still getting acquainted with and an API that lets people build experimental things on top of it. It’s like letting people play with the input and the output at the same time, and in the case of Vinepeek you get a very odd thing that feels like a little TV network peeking into people’s lives.
I’m sure it won’t be interesting in a few days, but there’s a real magic to combining experimentation on creation and distribution at the exact same time.
We’re doing a little conferencey thing on Monday in NYC for community managers and I just wanted to take the chance to invite some of you. We don’t have a ton of spots left, but if you’re interested drop me an email (noah at percolate).
To celebrate #CMGRs, Percolate is hosting a small invite-only event expanding on our SPEAKEASY Happy Hours called SPEAKEASY #CMAD. It’ll be a day full of learnings from brands, agencies and platforms including incredible speakers from LinkedIn, Denny’s, Getty Images, GE, MasterCard, Tumblr, IPG Media Labs and American Express.
I know I’ve mentioned this in the past, but I really love to read or listen to people who know lots about something (like music) describe why someone within that discipline (like a band) is good. Reading this little retrospective on Nirvana and Smells Like Teen Spirit feels a little like that. An excerpt:
Nirvana had the right song at the right time. Guys like these guitar store guys had seemingly never heard anything quite like it. They had been listening to classic rock, likely some Pat Metheny, Leo Kottke, and prog rock, no doubt. They probably listened at one point or another to some Clash and Ramones. But this song represented something significantly different for them. It astonished them like a shiny object. They hit repeat again.
I liked Nirvana, but I didn’t know why. (Or I did know why: Because MTV told me it was “good”.) You hear a lot of people say they were overrated or underrated or rated just right, but they seldom give any historical context for why they mattered at the time other than some weird cultural Gen X explanation. I’m not sure how true this explanation is, but it comes from a very interesting place.
Over at The Awl, Choire Sicha has an interesting little piece on how headlines have changed in the last few years. The gist:
Here’s a flashback. In 2007, a popular video of a baby getting dropkicked by a breakdancer (hard to believe I just typed that) was headlined “Times Square Still Extremely Unsafe for Children” on Gawker, which is pretty so-so as a headline but still funny. There’s no way that would get that headline now. (“Breakdancing in Times Square – Baby goes flying!” was the YouTube video headline.) “Watch This Baby Get Drop-Kicked By a Subway Breakdancer” is what I’d predict for our age. You have to really tell the folks on Twitter what’s happening for your clicks ‘n’ shares, you see.
It’s an interesting capture, and clearly a result of the shift to social as a traffic driver. It also feels a lot like what was written as we shifted to SEO’ed headlines, but maybe that’s just the ever-wasser in me talking.
Over at the New Yorker, Adam Gopnik draws an interesting parallel between drunk driving and guns:
If one needs more hope, one can find it in the history of the parallel fight against drunk driving. When that began, using alcohol and then driving was regarded as a trivial or a forgivable offense. Thanks to the efforts of MADD and the other groups, drunk driving became socially verboten, and then highly regulated, with some states now having strong “ignition interlock” laws that keep drunks from even turning the key. Drunk driving has diminished, we’re told, by as much as ten per cent per year in some recent years. Along with the necessary, and liberty-limiting, changes in seat-belt enforcement and the like, car culture altered. The result? The number of roadway fatalities in 2011 was the lowest since 1949. If we can do with maniacs and guns what we have already done with drunks and cars, we’d be doing fine. These are hard fights, but they can be won.
Quick and sort of crazy story of a guy in Las Vegas whose house seems to be some sort of default GPS coordinate for some lost Sprint phones:
Dobson was told that cellphone GPS systems don’t provide exact locations – they give a general location of where to start your search. And for some reason his house is that location for his area.
People keep turning up at his house and demanding he give them their phones back.
Often the stories of unintended consequences of technology are much more interesting than the intended ones.
Like many, I’ve been reading everything I can find since I heard that Aaron Swartz had committed suicide. He’s not someone I knew, but certainly someone I paid attention to and read pretty frequently. He also had one of the best definitions of blogging I’ve read:
So that’s what this blog is. I write here about thoughts I have, things I’m working on, stuff I’ve read, experiences I’ve had, and so on. Whenever a thought crystalizes in my head, I type it up and post it here. I don’t read over it, I don’t show it to anyone, and I don’t edit it — I just post it. … I don’t consider this writing, I consider this thinking.
Anyway, of all this stuff I’ve been reading about the case, his impact on the world and everything else, I found this description of where he, and more broadly cultural activist hackers, fit into the historical context very interesting:
I knew Swartz, although not well. And while he was special on account of his programming abilities, in another way he was not special at all: he was just another young man compelled to act rashly when he felt strongly, regardless of the rules. In another time, a man with Swartz’s dark drive would have headed to the frontier. Perhaps he would have ventured out into the wilderness, like T. E. Lawrence or John Muir, or to the top of something death-defying, like Reinhold Messner or Philippe Petit. Swartz possessed a self-destructive drive toward actions that felt right to him, but that were also defiant and, potentially, law-breaking. Like Henry David Thoreau, he chased his own dreams, and he was willing to disobey laws he considered unjust.
Interesting take on technology and automation in the form of a blind coffee taste test pitting hand-made espresso against a Nespresso machine:
Does this herald the death of artisan coffee, except in those exclusive enclaves where the very best, most obsessive practitioners ply their trade? And is the writing on the wall for other areas of human excellence where we cling to the idea that artisanal is best? A lifeline might seem to be provided by the detailed reviews of the coffees we tasted. The key descriptors for Nespresso were ‘smooth’ and ‘easy to drink’. And from the point of view of restaurateurs who use it, the key word is ‘consistency’. It was far from bland, but it was not challenging or distinctive either. It’s a coffee everyone can really like but few will love: the highest common denominator, if you like. The second-place coffee had more bite, and was the favourite of myself and the 10-cup-a-day connoisseur, but scored a pathetic two points from one person on the panel who took against it.
First off, that makes me a little sad, because I really like making espresso and it makes me feel a bit like a sucker who is drinking inferior coffee. What’s interesting to me, and something the article sort of gets at, is that I enjoy a cup of coffee I make as much for the process as for the taste. There is something really nice about going through the grind, tamp, pour, clean, drink cycle (at least when you’re making it on your own). Second, the points in here remind me of something I read in a New Yorker article about Dogfish Head beer a few years ago. The article pointed out that even the crankiest craft brewer respects Budweiser for its ability to create a consistent product.
Mother Jones has a short piece about the “negative consequences of vituperative online comments for the public understanding of science” (aka comment trolling):
The researchers were trying to find out what effect exposure to such rudeness had on public perceptions of nanotech risks. They found that it wasn’t a good one. Rather, it polarized the audience: Those who already thought nanorisks were low tended to become more sure of themselves when exposed to name-calling, while those who thought nanorisks are high were more likely to move in their own favored direction. In other words, it appeared that pushing people’s emotional buttons, through derogatory comments, made them double down on their preexisting beliefs.
Because I can’t really let anything get away without making some sort of McLuhan reference, the conclusion pretty clearly lays out the fact that the medium is shaping the message we receive:
The upshot of this research? This is not your father’s media environment any longer. In the golden oldie days of media, newspaper articles were consumed in the context of…other newspaper articles. But now, adds Scheufele, it’s like “reading the news article in the middle of the town square, with people screaming in my ear what I should believe about it.”
I’ve been trying to get through my Instapaper backlog lately. It’s a kind of New Year’s resolution thing, but mostly a reaction to reading books for a while. That’s not all that important except to explain why I’ll probably be posting some old stuff over the coming weeks.
Anyway, I was struck reading this post from 2009 by Kevin Kelly on technology and how he explained the clock in a very McLuhan’esque way:
Seemingly simple inventions like the clock had profound social consequences. The clock divvied up an unbroken stream of time into measurable units, and once it had a face, time became a tyrant, ordering our lives. Danny Hillis, computer scientist, believes the gears of the clock spun out science, and all its many cultural descendants. He says, “The mechanism of the clock gave us a metaphor for self-governed operation of natural law. (The computer, with its mechanistic playing out of predetermined rules, is the direct descendant of the clock.) Once we were able to imagine the solar system as a clockwork automaton, the generalization to other aspects of nature was almost inevitable, and the process of Science began.”
One of the best ways to judge just how interesting something really is is to see whether you’re still thinking about it days later. Anyway, another stop on my Instapaper archaeology was this excellent New Republic book review that talks about the relationship between the work of Jane Jacobs and Robert Moses. It’s a pretty balanced affair that suggests that Jacobs may not have been as perfect an urban planner as she has since been painted and Moses may not have been the devil incarnate. I’ll leave the conclusion for you to read on your own, but here’s a quick snippet on where Jacobs doesn’t necessarily work for the realities of the city:
The Death and Life of Great American Cities argues that at least one hundred homes per acre are necessary to support exciting stores and restaurants, but that two hundred homes per acre is a “danger mark.” After that point of roughly six-story buildings, Jacobs thought that neighborhoods risked sterile standardization. (The one public housing project that Jacobs blessed, at least initially, had only five stories.) But keeping great cities low means that far too few people can enjoy the benefits of city life. Jacobs herself had the strange idea that preventing new construction would keep cities affordable, but a single course in economics would have taught her the fallacy of that view. If booming demand collides against restricted supply, then prices will rise.
This paragraph from The Awl on the possibility of a coffee drought wins the day:
Can you imagine? Think about how unpleasant people are already, with coffee. Think about how unpleasant people are about coffee. And I’m not even talking about your garden-variety dickheads who debate the merits of pour-over brew versus the Estonian flatiron reverse-osmosis method, which is probably a thing even though I just made it up. I’m talking about the people who are all, “I can’t start the day without coffee,” as if the rest of us aren’t just as tired and irritable without feeling the apparently deep-seated need to broadcast just how dependent we are on hot water dripped through crushed beans to help us contend with the arduous tasks of getting to work and turning on a computer. These are the people we’re going to have to club to death first during our grim, coffeeless future, which the New Scientist (registration required) sees as coming “by 2080.” Oh, wait, 65 years? We’ll all be long dead by then. Never mind.
The Chronicle of Higher Education has an interesting article on notes, which, as the article points out, is something we all constantly interact with and seldom discuss. Here’s a bit on digital note-taking systems:
Digital note-taking systems were a direct outgrowth of the early hypertext knowledge-representation systems. I had my first encounter with one of those when I arrived at the Xerox Palo Alto Research Center in the mid-1980s. In addition to their better-known innovations (the laser printer, the WYSIWYG text editor, the graphical user interface, the Ethernet), the center’s researchers developed the system Notecards. It was a thing of wonder, back when the computer could still induce that feeling. You could create notecards containing text or graphics, sort them into file boxes, and link them according to whatever relationship you chose (“source,” “example,” etc.), while navigating the whole network via an overview in a browser window. It was as close as you could come to a digital implementation of Placcius’s cabinet, freed from the material constraints of slips, hooks, and drawers and from the requirement that each slip fill only one slot in a network.
Two little bits on this: First, reading through this made me think a lot about this blog, which I’ve always sort of thought of as a notebook. Posts here are much more often notes in margins than they are fully-formed ideas. Second, it makes me think of an article I’ve read over a bunch of times on how Steven Johnson uses a tool called DevonThink to help him write books.
Finally, this line in the essay made me laugh: “The Post-it ranks as one of modern chemistry’s two major contributions to the work of annotation, as partial reparation for the highlighter pen, the colorist’s revenge on the printed page.”
I like this little story on Quora from Stewart Butterfield, one of the co-founders of Flickr. In response to why the company dropped the “e”, he explains it was because the guy who owned the flicker.com domain wouldn’t sell. But then he goes on to give this extra anecdote:
Bonus story: for a long time when I searched Google for “flickr” I got a “Did you mean flicker?” suggestion. I knew we’d have “made it” when that stopped. Eventually that message did stop showing up … and by 2005 or 2006 the search results page even asked “Did you mean flickr?” when searching for “flicker”. That’s when I knew it was big! (Google seems to have stopped doing that since.)
It would be great to collect the stories from all the founders who saw their products go big about when they knew they had “made it”.
I’ve been listening to a lot of podcasts lately, and one of them is the New Yorker’s Out Loud. The last episode featured a great interview with Daniel Mendelsohn, a literary critic. In the podcast he mostly talks about the books that inspired him to become a writer, but then, towards the end, he talks a bit about the job of a cultural critic and I thought what he had to say was interesting enough to transcribe and share:
We now have these technologies that simulate reality or create different realities in very sophisticated and interesting ways. Having these technologies available to us allows us to walk, say, through midtown Manhattan but actually to be inhabiting our private reality as we do so: We’re on the phone or we’re looking at our smartphone, gazing lovingly into our iPhones. And this is the way the world is going, there’s no point complaining about it. But where my classics come in is I am amused by the fact our word idiot comes from the Greek word idiotes, which means a private person. It’s from the word idios, which means private as opposed to public. So the Athenians, or the Greeks in general who had such a highly developed sense of the radical distinction between what went on in public and what went on in private, thought that a person that brought his private life into public spaces, who confused public and private was an idiote, was an idiot. Of course, now everybody does this. We are in a culture of idiots in the Greek sense. To go back to your original question, what does this look like in the long run? Is it terrible or is it bad? It’s just the way things are. And one of the advantages about being a person who looks at long stretches of the past is you try not to get hysterical, to just see these evolving new ways of being from an imaginary vantage point in the future. Is it the end of the world? No, it’s just the end of a world. It’s the end of the world I grew up in when I was thinking of how you behaved in public. I think your job as a cultural critic is to take a long view.
I obviously thought the idiot stuff was fascinating, but also was interested in his last line about the job of a cultural critic, which, to me, really reflected something that struck me about McLuhan in the most recent biography of his by Douglas Coupland:
Marshall was also encountering a response that would tail him the rest of his life: the incorrect belief that he liked the new world he was describing. In fact, he didn’t ascribe any moral or value dimensions to it at all–he simply kept on pointing out the effects of new media on the individual. And what makes him fresh and relevant now is the fact that (unlike so much other new thinking of the time) he always did focus on the individual in society, rather than on the mass of society as an entity unto itself.
I’m guessing you heard about this, but earlier this year the University of California introduced a new identity system. It looked something like this:
As people are wont to do, they freaked out. In fact, they freaked out enough that the University eventually decided to drop the new logo. Now, with the controversy in the rearview mirror, I’ve read/listened to a few post-mortems on how and why something like this happened and I felt like chiming in. My credentials, like most commenters, are pretty thin, but I think they give me an interesting perspective. Beyond spending a sort of ridiculous amount of time thinking about brands, overseeing a product team including three designers and previously working in advertising overseeing creative teams for some time, I also built Brand Tags, the largest free database of perceptions about brands. I am, however, not a designer.
That last bit, especially, shapes my perception on conversations about design.
Okay, with disclosures behind us, a bit more background: When this new logo was introduced to the public (though apparently it had been on a roadshow for some time before it showed up on the web), it was misinterpreted as a replacement for the official seal of the University of California system. That seal looks like this:
This, apparently, was inaccurate. The new logo would not be replacing the seal, but rather helping to unify the various logos that had popped up across the different UC schools (the script Cal and UCLA logos are two examples). As occasionally happens, the digerati spread an idea that wasn’t true. I know this isn’t shocking, but to be fair to all the bloggers on this one, the University hardly helped its case when it produced this video as a companion piece to explain the new identity:
I know this all seems like a slightly exhaustive bit of background, especially if you’ve been following this story, but I think it’s all important. In a long piece on RockPaperInk which spurred this piece, Christopher Simmons, a designer and former AIGA president, writes:
“Designers too often judge logos separate from their system…without understanding that one can’t function without the other,” criticized Paula Scher when I asked her views on the controversy, “It’s the kit of parts that creates a contemporary visual language and makes an identity recognizable, not just the logo. But often the debate centers on whether or not someone likes the form of the logo, or whether the kerning is right.” While acknowledging that all details are important, Scher also calls these quibbles “silly.” “No designer on the outside of the organization at hand is really qualified to render an informed opinion about a massive identity system until it’s been around and in practice for about a year,” she explains, “One has to observe it functioning in every form of media to determine the entire effect. This [was] especially true in the UC case.”
Which I mostly agree with. Logos don’t exist outside the system (for the most part) and, even more importantly, they don’t exist outside the collective consciousness they grow up in. This is something I got in quite a few arguments about while I was running Brand Tags. I would get an email from a company no one had ever heard of asking for me to post their logo, to which I invariably responded “no”. My reasoning, as I explained at the time, was that the point of the site was to measure brand perception and for people to have a perception, you need a brand, which you don’t have if no one knows who you are. Brands, as I’ve expressed in the past, live in people’s heads. They are the sum total of perceptions about them.
This is part of what makes it so tough to judge any sort of logo: lack of context. Even if you see the way the system works, you don’t have the rest of the context that would come with experiencing it in the wild. If you’re a high school senior and the new UC logo is on a sweatshirt worn by the girl you had a crush on who’s home for her freshman Christmas break, it’s going to have a very different meaning than if your first encounter is in the U.S. News & World Report list of top US universities. Context shapes experience and we can’t forget that.
Which makes something Simmons writes later so confusing for me:
Design as a discipline is challenged by this notion of democracy, particularly in a viral age. We have become a culture mistrustful of expertise—in particular creative expertise. I share [UC Creative Director] Correa’s fear that this cultural position stifles design as designers increasingly lose ownership of the discourse. “If deep knowledge in these fields is weighed against the “likes” and “tastes” of the populace at large,” she warns, “We will create a climate that does not encourage visual or aesthetic exploration, play or inventiveness, since the new is often soundly refused.”
Most of the article, actually, blames the public (and designers specifically) for the way they misinterpreted and criticized the logo. That misinterpretation, however, is at least in part due to the context in which they experienced the logo. It’s near impossible, for instance, not to walk away from that introductory video believing that the logo is replacing the seal, and that video was produced by the University itself. Design, I’d posit, is about far more than the logo or even the system; it’s the story that exists around the brand as a whole, and the designer is, at least in part, responsible for how that story is told. I agree with part of what’s written above: Design is a tough discipline because everyone has an opinion. But that’s not really new and it’s been lamented to death. People know what fonts are and many have heard of kerning or played with Photoshop. This is just the reality we live in. We can choose to ignore that reality and think we can put things out in the world without hearing from the many people who are “unqualified” to have opinions, or we can acknowledge it and try to spend as much time thinking about the context in which people first experience new identities as we spend on the identities themselves. It’s not a simple solution, but it’s a whole lot more sustainable.
Finally, we need to recognize that in this new world we all live in, where everyone has an opinion about everything (let’s not pretend that design is the only victim of this reality), it’s going to be harder than ever to stand behind convictions. On the one hand this can mean “a climate that does not encourage visual or aesthetic exploration, play or inventiveness,” as the UC Creative Director says, or it can mean that we need to do more to educate everyone involved in the decision-making process about what’s to come. We need to help them understand the design process, the effect of context and the potential for backlash (along with our plan on how to deal with it).
Or we can do boring stuff.
Though I didn’t quote it anywhere here, a lot of my thinking in this piece was shaped by the very even coverage on this issue from 99% Invisible, which I would highly recommend listening to.
Last year I listed out my five favorite pieces of longform writing and it seemed to go over pretty well, so I figured I’d do the same again this year. It was harder to compile the list this year, as my reading took me outside just Instapaper (especially to the fantastic Longform app for iPad), but I’ve done my best to pull these together based on what I most enjoyed/found most interesting/struck me the most.
One additional note before I start my list: To make this process slightly simpler next year I’ve decided to start a Twitter feed that pulls from my Instapaper and Readability favorites. You can find it at @HeyItsInstafavs. Okay, onto the list.
- The Yankee Comandante (New Yorker): Last year David Grann took my top spot with A Murder Foretold and this year he again takes it with an incredible piece on William Morgan, an American soldier in the Cuban revolution. The article was impressive enough that George Clooney bought up the rights and is apparently planning to direct a film about the story. The thing about David Grann is that beyond being an incredible reporter and storyteller, he’s also just an amazing writer. I’m not really a reader who sits there and examines sentences, I read for story and ideas. But a few sentences, and even paragraphs, in this piece made me take notice. While we’re on David Grann, I also read his excellent book of essays this year (most of which come from the New Yorker), The Devil & Sherlock Holmes. He is, without a doubt, my favorite non-fiction writer working right now.
- Raise the Crime Rate (n+1): This article couldn’t be more different from the first. Rather than narrative non-fiction, this is an interesting, and well-presented, argument for abolishing the prison system. The basic thesis of the piece is that we’ve made a terrible ethical decision in the US to offload crime from our cities to our prisons, where we let people get raped and stabbed with little-to-no recourse. The solution presented is to abolish the prison system (while also increasing capital punishment). Rare is the article that you don’t necessarily agree with but walk away talking and thinking about. That’s why this piece made my list. I read it again last week and still don’t know where I stand, but I know it’s worthy of reading and thinking about. (While I was trying to get through my Instapaper backlog I also came across this Atul Gawande piece from 2009 on solitary confinement and its effects on humans.)
- Open Your Mouth & You’re Dead (Outside): A look at the totally insane “sport” of freediving, where athletes swim hundreds of feet underwater on a single breath (and often come back to the surface passed out). This is scary and crazy and exciting and that’s reason enough to read something, right?
- Jerry Seinfeld Intends to Die Standing Up (New York Times): I’ve been meaning to write about this but haven’t had a chance yet. Last year HBO had this amazing special called Talking Funny in which Ricky Gervais, Chris Rock, Louis CK and Jerry Seinfeld sit around and chat about what it’s like to be the four funniest men in the world. The format was amazing: Take the four people who are at the top of their profession and see what happens. But what was especially interesting, to me at least, was the deference the other three showed to Seinfeld. I knew he was accomplished, but I didn’t know that he commanded the sort of respect amongst his peers that he does. Well, this Times article expands on that special and explains what makes Seinfeld such a unique comedian and such a careful crafter of jokes. (For more Seinfeld stuff make sure to check out his new online video series, Comedians in Cars Getting Coffee, which is just that.)
- The Malice at the Palace (Grantland): I would say as a publication Grantland outperformed just about every other site on the web this year and so this pick is part acknowledgement of that and part praise for a pretty amazing piece of reporting (I guess you could call an oral history that, right?). Anyway, this particular oral history is about the giant fight that broke out in Detroit at a Pacers v. Pistons game that spilled into a fight between the Pistons and the Detroit fans. It was an ugly mark for basketball and an incredibly memorable (and insane) TV event. As a sort of aside on this, I’ve been casually reading Bill Simmons’ Book of Basketball and in it he obviously talks about this game/fight. In fact, he calls it one of his six biggest TV moments, which he judges using the following criteria: “How you know an event qualifies: Will you always remember where you watched it? (Check.) Did you know history was being made? (Check.) Would you have fought anyone who tried to change the channel? (Check.) Did your head start to ache after a while? (Check.) Did your stomach feel funny? (Check.) Did you end up watching about four hours too long? (Check.) Were there a few ‘can you believe this’–type phone calls along the way? (Check.) Did you say ‘I can’t believe this’ at least fifty times?” I agree with that.
And, like last year, there are a few that were great but didn’t make the cut. Here are two more:
- Snow Fall (New York Times): Everyone is going crazy about this because of the elaborate multimedia experience that went along with it, but I actually bought the Kindle single and read it in plain old black and white and it was still pretty amazing. Also, John Branch deserves to be on this list because he wrote something that would have made my list last year had it not come out in December: Punched Out is the amazing and sad story of Derek Boogaard and what it’s like to be a hockey enforcer.
- Marathon Man (New Yorker): A very odd, but intriguing, “expose” on a dentist who liked to chat at marathons.
That’s it. I’ve made a Readlist with these seven selections which makes it easy to send them all to your Kindle or Readability. Good reading.
I haven’t seen Django Unchained yet (though I want to, and I loved Inglourious Basterds), but I found this insight into Tarantino’s process very interesting. From a New York Times interview with the director:
I have a writer’s journey going on and a filmmaker’s journey going on, and obviously they’re symbiotic, but they also are separate. When I write my scripts it’s not really about the movie per se, it is about the page. It’s supposed to be literature. I write stuff that’s never going to make it in the movie and stuff that I know wouldn’t even be right for the movie, but I’ll put it in the screenplay. We’ll decide later do we shoot it, do we not shoot it, whatever, but it’s important for the written work.
I think about this at Percolate sometimes and always err on the side of over-documentation. I like the idea of building a narrative around something that extends far beyond what’s necessary, as the additional context creates an important background for decisions. In Tarantino’s case, I have to imagine part of the reason he gets such good performances out of the actors in his films is that they’re given such a rich text to work with.
More on decisions, this time about how our ability to make them is actually a finite resource:
Willpower—the popular idea is that it’s something that you use to resist temptation and to make yourself work. But they’ve also found that this same energy is used in making decisions, simply deciding what to have for lunch, what to do at a meeting; all these things deplete the same resource. After a while, when you’ve depleted this resource, it’s a state called ego depletion. You’ve got less self-control, you’re more prone to give in to temptation, it’s harder for you to work, and you tend to make worse decisions.
As I was digging through my old Instapapers while I was away (I read like a madman and hardly got through any), I came across this article about Obama from 2010. This little story about trying to make fewer decisions really struck me:
Rahm Emanuel tells a story. The time is last December, when the White House was juggling an agenda that included the Afghanistan troop surge, the health-care bill, the climate talks in Copenhagen, and Obama’s acceptance of a Nobel Peace Prize that threatened to do him more political harm than good—one issue on top of another. It got to the point where Obama and Emanuel would joke that, when it was all over, they were going to open a T-shirt stand on a beach in Hawaii. It would face the ocean and sell only one color and one size. “We didn’t want to make another decision, or choice, or judgment,” Emanuel told me. They took to beginning staff meetings with Obama smiling at Emanuel and simply saying “White,” and Emanuel nodding back and replying “Medium.”
It’s especially interesting when you add this nugget from Michael Lewis’s October piece on the president (which I haven’t read yet, but this quote came across my internets somehow):
“You’ll see I wear only gray or blue suits,” he said. “I’m trying to pare down decisions. I don’t want to make decisions about what I’m eating or wearing. Because I have too many other decisions to make.” He mentioned research that shows the simple act of making decisions degrades one’s ability to make further decisions. It’s why shopping is so exhausting. “You need to focus your decision-making energy. You need to routinize yourself. You can’t be going through the day distracted by trivia.”
I was reading this New Yorker piece about the Grateful Dead at my friend Colin’s recommendation and I liked the notion of “blesh”:
“More Than Human” is a sci-fi novel, published in 1953, in which a band of exceptional people “blesh” (that is, blend and mesh) their consciousness to create a kind of super-being. “I turned everyone on to that book in, like, 1965,” Lesh said. “ ‘This is what we can do; this is what we can be.’”
Which reminded me a bit of scenius:
The musician and artist Brian Eno coined the odd but apt word “scenius” to describe the unusual pockets of group creativity and invention that emerge in certain intellectual or artistic scenes: philosophers in 18th-century Scotland; Parisian artists and intellectuals in the 1920s. In Eno’s words, scenius is “the communal form of the concept of the genius.” New York hasn’t yet reached those heights in terms of internet innovation, but clearly something powerful has happened. There is genuine digital-age scenius on its streets. This is good news for my city, of course, but it’s also an important case study for any city that wishes to encourage innovative business. How did New York pull it off?
Kevin Kelly has a good article at Wired.com about our robotic future. He writes about our ability to invent new things to do as our old activities are replaced by machines:
Before we invented automobiles, air-conditioning, flatscreen video displays, and animated cartoons, no one living in ancient Rome wished they could watch cartoons while riding to Athens in climate-controlled comfort. Two hundred years ago not a single citizen of Shanghai would have told you that they would buy a tiny slab that allowed them to talk to faraway friends before they would buy indoor plumbing. Crafty AIs embedded in first-person-shooter games have given millions of teenage boys the urge, the need, to become professional game designers—a dream that no boy in Victorian times ever had. In a very real way our inventions assign us our jobs. Each successful bit of automation generates new occupations—occupations we would not have fantasized about without the prompting of the automation.
Apparently Zaha Hadid is working on a new building in China and it’s being pirated … AS SHE’S BUILDING THE ORIGINAL. This sounds like a weird William Gibson future world:
But the appeal of the Pritzker Prize winner’s experimental architecture, especially since the unveiling of her glowing, crystalline Guangzhou Opera House two years ago, has expanded so explosively that a contingent of pirate architects and construction teams in southern China is now building a carbon copy of one of Hadid’s Beijing projects.
What’s worse, Hadid said in an interview, she is now being forced to race these pirates to complete her original project first.
[Via Ed Cotton]
Walking around Tokyo today I passed a Bathing Ape store and got onto the topic of how the brand came to be. After a little Googling I ran across this excellent article documenting the fall of the brand, which eventually arrives at this interesting theory on “cultural arbitrage”:
The hipster elite are starting to show annoyance at this development. Former mo wax guru James Lavelle, quoted in Tokion, lamented that it is now impossible to stay “underground.” Lavelle and his kindred folk profit from exploiting cultural arbitrage: taking information from inaccessible sources and cashing in on that unequal access to information. (In general, a lot of people whom you probably think are cooler than you make a bulk of their money from this inequality in information.) No one in the West knew that Bape is a mainstream brand in Japan, and therefore, Lavelle was able to subtly and indirectly create the brand image to his own liking…* Now, with the high speed “information superhighway,” profit from cultural arbitrage business looks doubtful in the long run.
It’s not revolutionary, but it’s a nice way to think about how culture moves.
* I had to cut out a few sentences because they talk about how financial arbitrage used to work but no longer does, which just isn’t true.
The New Yorker has a really interesting blog post about how the Second Amendment came to mean what many now believe it to mean. Turns out we didn’t always see things the way we do:
Enter the modern National Rifle Association. Before the nineteen-seventies, the N.R.A. had been devoted mostly to non-political issues, like gun safety. But a coup d’état at the group’s annual convention in 1977 brought a group of committed political conservatives to power—as part of the leading edge of the new, more rightward-leaning Republican Party. (Jill Lepore recounted this history in a recent piece for The New Yorker.) The new group pushed for a novel interpretation of the Second Amendment, one that gave individuals, not just militias, the right to bear arms. It was an uphill struggle. At first, their views were widely scorned. Chief Justice Warren E. Burger, who was no liberal, mocked the individual-rights theory of the amendment as “a fraud.”
The article goes on to explain how interesting it is that this represents a “living” constitution that adapts with the times, something conservatives generally fight against:
But the N.R.A. kept pushing—and there’s a lesson here. Conservatives often embrace “originalism,” the idea that the meaning of the Constitution was fixed when it was ratified, in 1787. They mock the so-called liberal idea of a “living” constitution, whose meaning changes with the values of the country at large. But there is no better example of the living Constitution than the conservative re-casting of the Second Amendment in the last few decades of the twentieth century. (Reva Siegel, of Yale Law School, elaborates on this point in a brilliant article.)
I’ve always kind of wondered what made cashmere so much more expensive than wool other than the fact it’s softer. Slate has an answer:
Its costly production process and scarcity. Cashmere comes from the soft undercoat of goats bred to produce the wool. It takes more than two goats to make a single two-ply sweater. The fibers of the warming undercoat must be separated from a coarser protective top coat during the spring molting season, a labor-intensive process that typically involves combing and sorting the hair by hand. These factors contribute to the relatively low global production rate of cashmere—approximately 30,000 pounds a year compared to about 3 billion pounds of sheep’s wool.
So there you have it. Undercoats is the answer.