Welcome to the bloggy home of Noah Brier. I'm the co-founder of Percolate and a general internet tinkerer. This site is about media, culture, technology, and randomness. It's been around since 2004 (I'm pretty sure). Feel free to get in touch.

You can subscribe to this site via RSS (the humanity!).

The Consequences of Time

I’ve been trying to get through my Instapaper backlog lately. It’s a kind of New Year’s resolution thing, but mostly a reaction to having read books for a while. That’s not all that important except to explain why I’ll probably be posting some old stuff over the coming weeks.

Anyway, I was struck reading this post from 2009 by Kevin Kelly on technology, and how he explained the clock in a very McLuhan-esque way:

Seemingly simple inventions like the clock had profound social consequences. The clock divvied up an unbroken stream of time into measurable units, and once it had a face, time became a tyrant, ordering our lives. Danny Hillis, computer scientist, believes the gears of the clock spun out science, and all its many cultural descendants. He says, “The mechanism of the clock gave us a metaphor for self-governed operation of natural law. (The computer, with its mechanistic playing out of predetermined rules, is the direct descendant of the clock.) Once we were able to imagine the solar system as a clockwork automaton, the generalization to other aspects of nature was almost inevitable, and the process of Science began.”

January 10, 2013

Jacobs v. Moses

One of the best ways to judge just how interesting something really is is to see whether you’re still thinking about it days later. Anyway, another stop on my Instapaper archaeology was this excellent New Republic book review about the relationship between the work of Jane Jacobs and Robert Moses. It’s a pretty balanced affair that suggests Jacobs may not have been as perfect an urban planner as she has since been painted and Moses may not have been the devil incarnate. I’ll leave the conclusion for you to read on your own, but here’s a quick snippet on where Jacobs’s ideas don’t necessarily work for the realities of the city:

The Death and Life of Great American Cities argues that at least one hundred homes per acre are necessary to support exciting stores and restaurants, but that two hundred homes per acre is a “danger mark.” After that point of roughly six-story buildings, Jacobs thought that neighborhoods risked sterile standardization. (The one public housing project that Jacobs blessed, at least initially, had only five stories.) But keeping great cities low means that far too few people can enjoy the benefits of city life. Jacobs herself had the strange idea that preventing new construction would keep cities affordable, but a single course in economics would have taught her the fallacy of that view. If booming demand collides against restricted supply, then prices will rise.

January 10, 2013

A Coffee Drought?

This paragraph from The Awl on the possibility of a coffee drought wins the day:

Can you imagine? Think about how unpleasant people are already, with coffee. Think about how unpleasant people are about coffee. And I’m not even talking about your garden-variety dickheads who debate the merits of pour-over brew versus the Estonian flatiron reverse-osmosis method, which is probably a thing even though I just made it up. I’m talking about the people who are all, “I can’t start the day without coffee,” as if the rest of us aren’t just as tired and irritable without feeling the apparently deep-seated need to broadcast just how dependent we are on hot water dripped through crushed beans to help us contend with the arduous tasks of getting to work and turning on a computer. These are the people we’re going to have to club to death first during our grim, coffeeless future, which the New Scientist (registration required) sees as coming “by 2080.” Oh, wait, 65 years? We’ll all be long dead by then. Never mind.

January 9, 2013

Notes

The Chronicle of Higher Education has an interesting article on notes, which, as the article points out, are something we all constantly interact with and seldom discuss. Here’s a bit on digital note-taking systems:

Digital note-taking systems were a direct outgrowth of the early hypertext knowledge-representation systems. I had my first encounter with one of those when I arrived at the Xerox Palo Alto Research Center in the mid-1980s. In addition to their better-known innovations (the laser printer, the WYSIWYG text editor, the graphical user interface, the Ethernet), the center’s researchers developed the system Notecards. It was a thing of wonder, back when the computer could still induce that feeling. You could create notecards containing text or graphics, sort them into file boxes, and link them according to whatever relationship you chose (“source,” “example,” etc.), while navigating the whole network via an overview in a browser window. It was as close as you could come to a digital implementation of Placcius’s cabinet, freed from the material constraints of slips, hooks, and drawers and from the requirement that each slip fill only one slot in a network.

Two little bits on this: First, reading through this made me think a lot about this blog, which I’ve always sort of thought of as a notebook. Posts here are much more often notes in margins than they are fully-formed ideas. Second, it makes me think of an article I’ve read over a bunch of times on how Steven Johnson uses a tool called DevonThink to help him write books.
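As a side note, the NoteCards model in that quote (cards, typed links, an overview browser) is basically a small graph data structure. Here's a minimal sketch in Python of how I picture it working; the class names and link types are mine, not anything from the actual Xerox system:

    from collections import defaultdict

    class Card:
        """A note card holding a title and some text (or graphics)."""
        def __init__(self, title, body=""):
            self.title = title
            self.body = body

    class NoteNetwork:
        """Cards joined by typed links ("source," "example," etc.), so one
        card can appear in many contexts at once -- the property that frees
        the system from the one-slip-one-slot constraint of a cabinet."""
        def __init__(self):
            self.links = defaultdict(list)  # card -> [(link_type, card), ...]

        def link(self, a, link_type, b):
            self.links[a].append((link_type, b))

        def browse(self, card, depth=0, seen=None):
            # A crude stand-in for the overview browser window.
            seen = set() if seen is None else seen
            if card in seen:
                return
            seen.add(card)
            print("  " * depth + card.title)
            for link_type, other in self.links[card]:
                print("  " * depth + f"  ({link_type})")
                self.browse(other, depth + 2, seen)

    notes = NoteNetwork()
    a = Card("Notes are a medium")
    b = Card("Chronicle article on annotation")
    notes.link(a, "source", b)
    notes.browse(a)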

Finally, this line in the essay made me laugh: “The Post-it ranks as one of modern chemistry’s two major contributions to the work of annotation, as partial reparation for the highlighter pen, the colorist’s revenge on the printed page.”

January 9, 2013

Making It

I like this little story on Quora from Stewart Butterfield, one of the co-founders of Flickr. In response to why the company dropped the “e”, he explains it was because the guy who owned the flicker.com domain wouldn’t sell. But then he goes on to give this extra anecdote:

Bonus story: for a long time when I searched Google for “flickr” I got a “Did you mean flicker?” suggestion. I knew we’d have “made it” when that stopped. Eventually that message did stop showing up … and by 2005 or 2006 the search results page even asked “Did you mean flickr?” when searching for “flicker”. That’s when I knew it was big! (Google seems to have stopped doing that since.)

It would be great to collect stories from all the founders who saw their products go big about the moment they knew they had “made it”.

January 8, 2013

Idiots & Taking the Long View

I’ve been listening to a lot of podcasts lately, and one of them is the New Yorker’s Out Loud. The latest episode featured a great interview with Daniel Mendelsohn, a literary critic. In the podcast he mostly talks about the books that inspired him to become a writer, but then, towards the end, he talks a bit about the job of a cultural critic, and I thought what he had to say was interesting enough to transcribe and share:

We now have these technologies that simulate reality or create different realities in very sophisticated and interesting ways. Having these technologies available to us allows us to walk, say, through midtown Manhattan but actually to be inhabiting our private reality as we do so: We’re on the phone or we’re looking at our smartphone, gazing lovingly into our iPhones. And this is the way the world is going, there’s no point complaining about it. But where my classics come in is I am amused by the fact our word idiot comes from the Greek word idiotes, which means a private person. It’s from the word idios, which means private as opposed to public. So the Athenians, or the Greeks in general who had such a highly developed sense of the radical distinction between what went on in public and what went on in private, thought that a person who brought his private life into public spaces, who confused public and private, was an idiotes, was an idiot. Of course, now everybody does this. We are in a culture of idiots in the Greek sense. To go back to your original question, what does this look like in the long run? Is it terrible or is it bad? It’s just the way things are. And one of the advantages about being a person who looks at long stretches of the past is you try not to get hysterical, to just see these evolving new ways of being from an imaginary vantage point in the future. Is it the end of the world? No, it’s just the end of a world. It’s the end of the world I grew up in when I was thinking of how you behaved in public. I think your job as a cultural critic is to take a long view.

I obviously thought the idiot stuff was fascinating, but I was also interested in his last line about the job of a cultural critic, which, to me, really reflected something that struck me about McLuhan in Douglas Coupland’s recent biography of him:

Marshall was also encountering a response that would tail him the rest of his life: the incorrect belief that he liked the new world he was describing. In fact, he didn’t ascribe any moral or value dimensions to it at all–he simply kept on pointing out the effects of new media on the individual. And what makes him fresh and relevant now is the fact that (unlike so much other new thinking of the time) he always did focus on the individual in society, rather than on the mass of society as an entity unto itself.

January 7, 2013

Context, Identity and the New UC Logo

I’m guessing you heard about this, but earlier this year the University of California introduced a new identity system. It looked something like this:

[Image: the new University of California monogram]

As people are wont to do, they freaked out. In fact, they freaked out enough that the University eventually decided to drop the new logo. Now, with the controversy in the rearview mirror, I’ve read and listened to a few post-mortems on how and why something like this happened, and I felt like chiming in. My credentials, like those of most commenters, are pretty thin, but I think they give me an interesting perspective: beyond spending a sort of ridiculous amount of time thinking about brands, overseeing a product team that includes three designers, and previously overseeing creative teams in advertising, I also built Brand Tags, the largest free database of perceptions about brands. I am, however, not a designer.

That last bit, especially, shapes my perception of conversations about design.

Okay, with disclosures behind us, a bit more background: When this new logo was introduced to the public (though apparently it had been on a roadshow for some time before it showed up on the web), it was misinterpreted as a replacement for the official seal of the University of California system. That seal looks like this:

[Image: the official University of California seal]

This, apparently, was inaccurate. The new logo would not be replacing the seal, but rather helping to unify the various logos that had popped up across the different UC schools (the script Cal and UCLA logos are two examples). As occasionally happens, the digerati spread an idea that wasn’t true. I know this isn’t shocking, but to be fair to all the bloggers on this one, the University hardly helped its case when it produced a companion video to explain the new identity.

I know this all seems like a slightly exhaustive bit of background, especially if you’ve been following this story, but I think it’s all important. In a long piece on RockPaperInk, which spurred this one, Christopher Simmons, a designer and former AIGA president, writes:

“Designers too often judge logos separate from their system…without understanding that one can’t function without the other,” criticized Paula Scher when I asked her views on the controversy, “It’s the kit of parts that creates a contemporary visual language and makes an identity recognizable, not just the logo. But often the debate centers on whether or not someone likes the form of the logo, or whether the kerning is right.” While acknowledging that all details are important, Scher also calls these quibbles “silly.” “No designer on the outside of the organization at hand is really qualified to render an informed opinion about a massive identity system until it’s been around and in practice for about a year,” she explains, “One has to observe it functioning in every form of media to determine the entire effect. This [was] especially true in the UC case.”

Which I mostly agree with. Logos don’t exist outside the system (for the most part) and, even more importantly, they don’t exist outside the collective consciousness they grow up in. This is something I got in quite a few arguments about while I was running Brand Tags. I would get an email from a company no one had ever heard of asking me to post their logo, to which I invariably responded “no”. My reasoning, as I explained at the time, was that the point of the site was to measure brand perception, and for people to have a perception, you need a brand, which you don’t have if no one knows who you are. Brands, as I’ve expressed in the past, live in people’s heads. They are the sum total of perceptions about them.
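For what it's worth, the mechanics behind that argument are simple enough to sketch: Brand Tags showed people a logo, they typed the first thing that came to mind, and the aggregate of those responses was, effectively, the brand. A toy version in Python (the responses here are invented for illustration):

    from collections import Counter

    # Hypothetical first-word responses to seeing a logo.
    responses = ["search", "fast", "search", "everywhere",
                 "search", "ads", "fast", "search"]

    def brand_profile(tags, top_n=3):
        """The brand as the sum total of perceptions: a weighted tally
        of whatever is already in people's heads."""
        counts = Counter(t.lower().strip() for t in tags)
        if not counts:
            # No perceptions, no brand -- why unknown companies were turned away.
            return []
        total = sum(counts.values())
        return [(tag, count / total) for tag, count in counts.most_common(top_n)]

    print(brand_profile(responses))
    # [('search', 0.5), ('fast', 0.25), ('everywhere', 0.125)]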

This is part of what makes it so tough to judge any sort of logo: lack of context. Even if you see the way the system works, you don’t have the rest of the context that would come with experiencing it in the wild. If you’re a high school senior and the new UC logo is on a sweatshirt worn by the girl you had a crush on, who’s home for her freshman Christmas break, it’s going to have a very different meaning than if your first encounter is in the US News & World Report list of top US universities. Context shapes experience and we can’t forget that.

Which makes something Simmons writes later so confusing for me:

Design as a discipline is challenged by this notion of democracy, particularly in a viral age. We have become a culture mistrustful of expertise—in particular creative expertise. I share [UC Creative Director] Correa’s fear that this cultural position stifles design as designers increasingly lose ownership of the discourse. “If deep knowledge in these fields is weighed against the “likes” and “tastes” of the populace at large,” she warns, “We will create a climate that does not encourage visual or aesthetic exploration, play or inventiveness, since the new is often soundly refused.”

Most of the article, actually, blames the public (and designers specifically) for the way they misinterpreted and criticized the logo. That misinterpretation, however, is at least in part due to the context in which they experienced the logo. It’s near impossible, for instance, not to walk away from that introductory video believing the logo is replacing the seal, and that video was produced by the University itself. Design, I’d posit, is about far more than the logo or even the system; it’s the story that exists around the brand as a whole, and the designer is, at least in part, responsible for how that story is told. I agree with part of what’s written above: design is a tough discipline because everyone has an opinion. But that’s not really new and it’s been lamented to death. People know what fonts are and many have heard of kerning or played with Photoshop. This is just the reality we live in. We can choose to ignore that reality and think we can put things out in the world without hearing from the many people who are “unqualified” to have opinions, or we can acknowledge it and try to spend as much time thinking about the context in which people first experience new identities as we spend on the identities themselves. It’s not a simple solution, but it’s a whole lot more sustainable.

Finally, we need to recognize that in this new world we all live in, where everyone has an opinion about everything (let’s not pretend that design is the only victim of this reality), it’s going to be harder than ever to stand behind convictions. On the one hand this can mean “a climate that does not encourage visual or aesthetic exploration, play or inventiveness,” as the UC Creative Director says, or it can mean that we need to do more to educate everyone involved in the decision-making process about what’s to come. We need to help them understand the design process, the effect of context and the potential for backlash (along with our plan for how to deal with it).

Or we can do boring stuff.

Though I didn’t quote it anywhere here, a lot of my thinking in this piece was shaped by the very even-handed coverage of this issue from 99% Invisible, which I would highly recommend listening to.

January 6, 2013

Top Longform of 2012

Last year I listed out my five favorite pieces of longform writing and it seemed to go over pretty well, so I figured I’d do the same again this year. The list was harder to compile this year, as my reading took me outside just Instapaper (especially to the fantastic Longform app for iPad), but I’ve done my best to pull these together based on what I most enjoyed/found most interesting/was most struck by.

One additional note before I start my list: To make this process slightly simpler next year I’ve decided to start a Twitter feed that pulls from my Instapaper and Readability favorites. You can find it at @HeyItsInstafavs. Okay, onto the list.

  1. The Yankee Comandante (New Yorker): Last year David Grann took my top spot with A Murder Foretold and this year he again takes it with an incredible piece on William Morgan, an American soldier in the Cuban revolution. The article was impressive enough that George Clooney bought up the rights and is apparently planning to direct a film about the story. The thing about David Grann is that beyond being an incredible reporter and storyteller, he’s also just an amazing writer. I’m not really a reader who sits there and examines sentences, I read for story and ideas. But a few sentences, and even paragraphs, in this piece made me take notice. While we’re on David Grann, I also read his excellent book of essays this year (most of which come from the New Yorker), The Devil & Sherlock Holmes. He is, without a doubt, my favorite non-fiction writer working right now.
  2. Raise the Crime Rate (n+1): This article couldn’t be more different from the first. Rather than narrative non-fiction, it’s an interesting, and well-presented, argument for abolishing the prison system. The basic thesis of the piece is that we’ve made a terrible ethical decision in the US to offload crime from our cities to our prisons, where we let people get raped and stabbed with little-to-no recourse. The solution presented is to abolish the prison system (while also increasing capital punishment). Rare is the article that you don’t necessarily agree with but walk away talking and thinking about. That’s why this piece made my list. I read it again last week and still don’t know where I stand, but I know it’s worthy of reading and thinking about. (While I was trying to get through my Instapaper backlog I also came across this Atul Gawande piece from 2009 on solitary confinement and its effects on humans.)
  3. Open Your Mouth & You’re Dead (Outside): A look at the totally insane “sport” of freediving, where athletes swim hundreds of feet underwater on a single breath (and often come back to the surface passed out). This is scary and crazy and exciting and that’s reason enough to read something, right?
  4. Jerry Seinfeld Intends to Die Standing Up (New York Times): I’ve been meaning to write about this but haven’t had a chance yet. Last year HBO had this amazing special called Talking Funny in which Ricky Gervais, Chris Rock, Louis CK and Jerry Seinfeld sit around and chat about what it’s like to be the four funniest men in the world. The format was amazing: Take the four people who are at the top of their profession and see what happens. But what was especially interesting, to me at least, was the deference the other three showed to Seinfeld. I knew he was accomplished, but I didn’t know that he commanded the sort of respect amongst his peers that he does. Well, this Times article expands on that special and explains what makes Seinfeld such a unique comedian and such a careful crafter of jokes. (For more Seinfeld stuff make sure to check out his new online video series, Comedians in Cars Getting Coffee, which is just that.)
  5. The Malice at the Palace (Grantland): I would say as a publication Grantland outperformed just about every other site on the web this year and so this pick is part acknowledgement of that and part praise for a pretty amazing piece of reporting (I guess you could call an oral history that, right?). Anyway, this particular oral history is about the giant fight that broke out in Detroit at a Pacers v. Pistons game that spilled into a fight between the Pacers and the Detroit fans. It was an ugly mark for basketball and an incredibly memorable (and insane) TV event. As a sort of aside on this, I’ve been casually reading Bill Simmons’ Book of Basketball and in it he obviously talks about this game/fight. In fact, he calls it one of his six biggest TV moments, which he judges using the following criteria: “How you know an event qualifies: Will you always remember where you watched it? (Check.) Did you know history was being made? (Check.) Would you have fought anyone who tried to change the channel? (Check.) Did your head start to ache after a while? (Check.) Did your stomach feel funny? (Check.) Did you end up watching about four hours too long? (Check.) Were there a few ‘can you believe this’–type phone calls along the way? (Check.) Did you say ‘I can’t believe this’ at least fifty times?” I agree with that.

And, like last year, there are a few that were great but didn’t make the cut. Here are two more:

  • Snow Fall (New York Times): Everyone is going crazy over this because of the elaborate multimedia experience that went along with it, but I actually bought the Kindle single and read it in plain old black and white and it was still pretty amazing. Also, John Branch deserves to be on this list because he wrote something that would have made my list last year had it not come out in December: Punched Out is the amazing and sad story of Derek Boogaard and what it’s like to be a hockey enforcer.
  • Marathon Man (New Yorker): A very odd, but intriguing, “exposé” on a dentist who liked to cheat at marathons.

That’s it. I’ve made a Readlist with these seven selections which makes it easy to send them all to your Kindle or Readability. Good reading.

January 4, 2013

A little extra context

I haven’t seen Django Unchained yet (though I want to, and I loved Inglourious Basterds), but I found this insight into Tarantino’s process very interesting. From a New York Times interview with the director:

I have a writer’s journey going on and a filmmaker’s journey going on, and obviously they’re symbiotic, but they also are separate. When I write my scripts it’s not really about the movie per se, it is about the page. It’s supposed to be literature. I write stuff that’s never going to make it in the movie and stuff that I know wouldn’t even be right for the movie, but I’ll put it in the screenplay. We’ll decide later do we shoot it, do we not shoot it, whatever, but it’s important for the written work.

I think about this at Percolate sometimes and always err on the side of over-documentation. I like the idea of building a narrative around something that extends far beyond what’s necessary, as the additional context creates an important background for decisions. In Tarantino’s case, I have to imagine part of the reason he gets such good performances out of the actors in his films is that they’re given such a rich text to work with.

January 4, 2013

Limited Willpower

More on decisions, this time about how our ability to make them is actually a finite resource:

Willpower—the popular idea is that it’s something that you use to resist temptation and to make yourself work. But they’ve also found that this same energy is used in making decisions, simply deciding what to have for lunch, what to do at a meeting; all these things deplete the same resource. After a while, when you’ve depleted this resource, it’s a state called ego depletion. You’ve got less self-control, you’re more prone to give in to temptation, it’s harder for you to work, and you tend to make worse decisions. 

January 3, 2013

Decisionless

As I was digging through my old Instapapers while I was away (I read like a madman and hardly got through any), I came across this article about Obama from 2010. This little story about trying to make fewer decisions really struck me:

Rahm Emanuel tells a story. The time is last December, when the White House was juggling an agenda that included the Afghanistan troop surge, the health-care bill, the climate talks in Copenhagen, and Obama’s acceptance of a Nobel Peace Prize that threatened to do him more political harm than good—one issue on top of another. It got to the point where Obama and Emanuel would joke that, when it was all over, they were going to open a T-shirt stand on a beach in Hawaii. It would face the ocean and sell only one color and one size. “We didn’t want to make another decision, or choice, or judgment,” Emanuel told me. They took to beginning staff meetings with Obama smiling at Emanuel and simply saying “White,” and Emanuel nodding back and replying “Medium.”

It’s especially interesting when you add this nugget from Michael Lewis’s October piece on the president (which I haven’t read yet, but this quote came across my internets somehow):

“You’ll see I wear only gray or blue suits,” he said. “I’m trying to pare down decisions. I don’t want to make decisions about what I’m eating or wearing. Because I have too many other decisions to make.” He mentioned research that shows the simple act of making decisions degrades one’s ability to make further decisions. It’s why shopping is so exhausting. “You need to focus your decision-making energy. You need to routinize yourself. You can’t be going through the day distracted by trivia.”

January 3, 2013

Blesh

I was reading this New Yorker piece about the Grateful Dead at my friend Colin’s recommendation and I liked the notion of “blesh”:

“More Than Human” is a sci-fi novel, published in 1953, in which a band of exceptional people “blesh” (that is, blend and mesh) their consciousness to create a kind of super-being. “I turned everyone on to that book in, like, 1965,” Lesh said. “ ‘This is what we can do; this is what we can be.’”

Which reminded me a bit of scenius:

The musician and artist Brian Eno coined the odd but apt word “scenius” to describe the unusual pockets of group creativity and invention that emerge in certain intellectual or artistic scenes: philosophers in 18th-century Scotland; Parisian artists and intellectuals in the 1920s. In Eno’s words, scenius is “the communal form of the concept of the genius.” New York hasn’t yet reached those heights in terms of internet innovation, but clearly something powerful has happened. There is genuine digital-age scenius on its streets. This is good news for my city, of course, but it’s also an important case study for any city that wishes to encourage innovative business. How did New York pull it off?

January 3, 2013

It’s All Robots

Kevin Kelly has a good article at Wired.com about our robotic future. He writes about our ability to invent new things to do as our old activities are replaced by machines:

Before we invented automobiles, air-conditioning, flatscreen video displays, and animated cartoons, no one living in ancient Rome wished they could watch cartoons while riding to Athens in climate-controlled comfort. Two hundred years ago not a single citizen of Shanghai would have told you that they would buy a tiny slab that allowed them to talk to faraway friends before they would buy indoor plumbing. Crafty AIs embedded in first-person-shooter games have given millions of teenage boys the urge, the need, to become professional game designers—a dream that no boy in Victorian times ever had. In a very real way our inventions assign us our jobs. Each successful bit of automation generates new occupations—occupations we would not have fantasized about without the prompting of the automation.

January 1, 2013

Pirated Architecture

Apparently Zaha Hadid is working on a new building in China and it’s being pirated … AS SHE’S BUILDING THE ORIGINAL. This sounds like a weird William Gibson future world:

But the appeal of the Pritzker Prize winner’s experimental architecture, especially since the unveiling of her glowing, crystalline Guangzhou Opera House two years ago, has expanded so explosively that a contingent of pirate architects and construction teams in southern China is now building a carbon copy of one of Hadid’s Beijing projects.
What’s worse, Hadid said in an interview, she is now being forced to race these pirates to complete her original project first.

[Via Ed Cotton]

December 28, 2012

Cultural Arbitrage

Walking around Tokyo today I passed a Bathing Ape store and got onto the topic of how the brand came to be. After a little Googling I ran across this excellent article documenting the fall of the brand, which eventually arrives at this interesting theory of “cultural arbitrage”:

The hipster elite are starting to show annoyance at this development. Former mo wax guru James Lavelle, quoted in Tokion, lamented that it is now impossible to stay “underground.” Lavelle and his kindred folk profit from exploiting cultural arbitrage: taking information from inaccessible sources and cashing in on that unequal access to information. (In general, a lot of people whom you probably think are cooler than you make a bulk of their money from this inequality in information.) No one in the West knew that Bape is a mainstream brand in Japan, and therefore, Lavelle was able to subtly and indirectly create the brand image to his own liking…* Now, with the high speed “information superhighway,” profit from cultural arbitrage business looks doubtful in the long run.

It’s not revolutionary, but it’s a nice way to think about how culture moves.

* I had to cut out a few sentences because they talk about how financial arbitrage used to work but no longer does, which just isn’t true.

December 28, 2012

How We Got Here: Second Amendment Edition

The New Yorker has a really interesting blog post about how the Second Amendment came to mean what many now believe it to mean. Turns out we didn’t always see things the way we do now:

Enter the modern National Rifle Association. Before the nineteen-seventies, the N.R.A. had been devoted mostly to non-political issues, like gun safety. But a coup d’état at the group’s annual convention in 1977 brought a group of committed political conservatives to power—as part of the leading edge of the new, more rightward-leaning Republican Party. (Jill Lepore recounted this history in a recent piece for The New Yorker.) The new group pushed for a novel interpretation of the Second Amendment, one that gave individuals, not just militias, the right to bear arms. It was an uphill struggle. At first, their views were widely scorned. Chief Justice Warren E. Burger, who was no liberal, mocked the individual-rights theory of the amendment as “a fraud.”

The article goes on to explain how interesting it is that this represents a “living” constitution that adapts with the times, something conservatives generally fight against:

But the N.R.A. kept pushing—and there’s a lesson here. Conservatives often embrace “originalism,” the idea that the meaning of the Constitution was fixed when it was ratified, in 1787. They mock the so-called liberal idea of a “living” constitution, whose meaning changes with the values of the country at large. But there is no better example of the living Constitution than the conservative re-casting of the Second Amendment in the last few decades of the twentieth century. (Reva Siegel, of Yale Law School, elaborates on this point in a brilliant article.)

December 28, 2012

What makes cashmere so expensive?

I’ve always kind of wondered what makes cashmere so much more expensive than wool, other than the fact that it’s softer. Slate has an answer:

Its costly production process and scarcity. Cashmere comes from the soft undercoat of goats bred to produce the wool. It takes more than two goats to make a single two-ply sweater. The fibers of the warming undercoat must be separated from a coarser protective top coat during the spring molting season, a labor-intensive process that typically involves combing and sorting the hair by hand. These factors contribute to the relatively low global production rate of cashmere—approximately 30,000 pounds a year compared to about 3 billion pounds of sheep’s wool.

So there you have it. Undercoats is the answer.

December 28, 2012

On Zero Dark Thirty

Before I left for my trip to Asia I went to see Zero Dark Thirty, the movie about the hunt for, and ultimate killing of, Osama bin Laden. Before and after seeing it I read quite a bit about the raid, the movie, and the controversy around both. I thought it might be worth collecting all this stuff into a post, so that’s what I’m doing.

First, on the movie itself. A lot of people really like it (the most interesting point Denby makes in this podcast is the idea that this and Lincoln spell the end of auteur theory, as they show the power of the writer/director combo). I thought it was pretty okay. In reading around, I think Roger Ebert sums up my opinion best in his review of the film:

My guess is that much of the fascination with this film is inspired by the unveiling of facts, unclearly seen. There isn’t a whole lot of plot — basically, just that Maya thinks she is right, and she is. The back story is that Bigelow has become a modern-day directorial heroine, which may be why this film is winning even more praise than her masterful Oscar-winner “The Hurt Locker.” That was a film firmly founded on plot, character and actors whose personalities and motivations became well-known to the audience. Its performances are razor-sharp and detailed, the acting restrained, the timing perfect.

In comparison, “Zero Dark Thirty” is a slam-bang action picture, depending on Maya’s inspiration. One problem may be that Maya turns out to be correct, with a long, steady build-up depriving the climax of much of its impact and providing mostly irony. Do we want to know more about Osama bin Laden and al Qaida and the history and political grievances behind them? Yes, but that’s not how things turned out. Sorry, but there you have it.

One thing I found particularly interesting in the film was the very short sequence on the doctor who went around Abbottabad under the cover of a vaccination campaign while actually collecting DNA. I remembered reading about him in the original New Yorker account of the raid and thought it had made clear he had been successful in collecting DNA evidence (it turns out the article says he wasn’t, the same way it’s presented in the film). January’s GQ has a longer account of what happened to the doctor who helped the CIA and tries to get at whether he was successful in his mission. (The answers: he was tortured and imprisoned by the Pakistani government for assisting the Americans, and, as to whether he got evidence, it’s still unclear.)

If you’re interested in more reading on the subject, No Easy Day, an account of the mission by a Navy SEAL, is a fast and interesting read. And although I haven’t read it, my friend Colin Nagy highly recommends The Triple Agent, which covers what happened at Khost, where a Jordanian triple agent beat CIA intelligence and security to bomb a military base and kill a sizable group of CIA operatives (there’s a scene in Zero Dark Thirty about it, though the film offers no real depth on what happened).

December 27, 2012

Gun Owners versus the NRA

This New England Journal of Medicine editorial has a really interesting stat I haven’t seen anywhere else about how gun owners feel about gun laws:

These proposals enjoy broad support. In fact, public-opinion polls have shown that 75 to 85% of firearm owners, including specifically members of the National Rifle Association (NRA) in some cases, endorse comprehensive background checks and denial for misdemeanor violence; 60 to 70% support denial for alcohol abuse. (It is deeply ironic that our current firearm policies omit regulations that are endorsed by firearm owners, let alone by the general public.)

Unfortunately there’s no citation, but I’d be really interested to know how aligned gun owners are with the NRA.

December 27, 2012

Diving Chess

This is just strange:

Diving Chess is a chess variant, which is played in a swimming pool. Instead of using chess clocks, each player must submerge themselves underwater during their turn, only to resurface when they are ready to make a move. Players must make a move within 5 seconds of resurfacing (they will receive a warning if not, and three warnings will result in a forfeit). Diving Chess was invented by American Chess Master Etan Ilfeld; the very first exhibition game took place between Ilfeld and former British Chess Champion William Hartston at the Thirdspace gym in Soho on August 2nd, 2011. Hartston won the match which lasted almost two hours such that each player was underwater for an entire hour.

Via Statistical Modeling, Causal Inference, and Social Science

December 26, 2012

Sheer Malice: A Doctor’s Take on Home Alone

More fun Christmasy stuff, this time from The Week, in the form of a doctor examining the true extent of the injuries to the burglars in Home Alone. I’m partial to his explanation of the effect of the burning-hot doorknob:

If this doorknob is glowing visibly red in the dark, it has been heated to about 751 degrees Fahrenheit, and Harry gives it a nice, strong, one- to two-second grip. By comparison, one second of contact with 155 degree water is enough to cause third degree burns. The temperature of that doorknob is not quite hot enough to cause Harry’s hand to burst into flames, but it is not that far off… Assuming Harry doesn’t lose the hand completely, he will almost certainly have other serious complications, including a high risk for infection and ‘contracture’ in which resulting scar tissue seriously limits the flexibility and movement of the hand, rendering it less than 100 percent useful. Kevin has moved from ‘defending his house’ into sheer malice, in my opinion.

[Via Consumerist]

December 24, 2012

What’s happening to malls?

Interesting perspective (and data) on the effect of online retailing and the general environment on malls:

I agree with the above perspectives, although I believe they likely understate the eventual impact on malls.  A report from Co-Star observes that there are more than 200 malls with over 250,000 square feet that have vacancy rates of 35% or higher, a “clear marker for shopping center distress.”  These malls are becoming ghost towns.  They are not viable now and will only get less so as online continues to steal retail sales from brick-and-mortar stores.  Continued bankruptcies among historic mall anchors will increase the pressure on these marginal malls, as will store closures from retailers working to optimize their business.  Hundreds of malls will soon need to be repurposed or demolished.  Strong malls will stay strong for a while, as retailers are willing to pay for traffic and customers from failed malls seek offline alternatives, but even they stand in the path of the shift of retail spending from offline to online.

Living in New York it’s easy to forget the importance of malls in retail. I haven’t ever completely understood why that is exactly, but the malls in New York (the only two I can think of off the top of my head are South Street Seaport and Herald Square) feel like afterthoughts and are filled with stores that feel out of place in an otherwise retail-hungry city.

December 24, 2012

Santa’s Privacy Policy

Too good not to share: McSweeney’s has Santa’s Privacy Policy (originally published in 2010). A snippet:

The letters also provide another important piece of information—fingerprints. We run these through databases maintained by the FBI, CIA, NSA, Interpol, MI6, and the Mossad. If we find a match, it goes straight on the Naughty List. We also harvest a saliva sample from the flap of the envelope in which the letter arrives in order to establish a baseline genetic identity for each correspondent. This is used to determine if there might be an inherent predisposition for naughtiness. A detailed handwriting analysis is performed as part of a comprehensive personality workup, and tells us which children are advancing nicely with their cursive and which are still stubbornly forming block letters with crayons long past the age when this is appropriate.

December 24, 2012

The Evolution of Words

I really enjoyed this FT review of a few books on the origin of words and misspellings. Especially interesting was this note on how dictionaries came to represent current language:

Why did the editors of Webster’s Third drop this lexicographic A-bomb (another addition to the dictionary)? Because views on dictionaries, indeed on language itself, had changed. Instead of laying down rules on how people should write and speak, dictionaries became records of how people did write and speak. And that meant all the people, not just those who spoke the educated language of New England. The new trends in lexicography went along with the growth of scientific method and Charles Darwin’s theory of evolution: lexicographers observed what was happening to the language, rather than handing down precepts. 

December 23, 2012

Where we come from this is small talk

My sister sent me this link to the ten best Muppet Christmas moments and it was conspicuously missing my all-time favorite Muppet moment, from A Muppet Family Christmas. All the Muppets turn up at Fozzie’s mom’s house for Christmas even though Doc (from Fraggle Rock) was renting it as a quiet escape. As Bert and Ernie come in, this conversation happens between the three of them:

Ernie: Oh, hi there, we’re Ernie and Bert.
Doc: Well, hi there yourself, I’m Doc.
Bert: Oh, did you know that Doc starts with the letter D?
Doc: Why, yes.
Ernie: Yes! Yes, starts with the letter Y.
Doc: True.
Ernie: And true starts with the letter T.
Doc: What is this?
Bert: Where we come from this is small talk.

That line gets me every time.

December 19, 2012

Picking up your language

This was a little tidbit I picked up at the Most Contagious event in London. Turns out EA’s FIFA 13 is set up to work with Kinect on Xbox. Specifically, if you swear at the TV while you’re playing the game, you’ll be penalized:

But the system will also listen for players whose frustration gets the better of them. Swearing at the referee will influence their decision-making, possibly leading to more bookings. EA says that in Fifa 13’s career mode, gamers will find that storylines will develop if they acquire a reputation for abusing referees.
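EA hasn't said how the feature is implemented, but the mechanic as described (voice input nudging the referee's mood within a match, plus a reputation that builds storylines across a career) is easy to sketch. Everything below, from the word list to the thresholds, is invented for illustration:

    PROFANITY = {"damn", "bloody"}  # stand-ins; the real list is EA's

    class Referee:
        def __init__(self, base_booking_chance=0.2):
            self.base = base_booking_chance
            self.irritation = 0.0  # abuse accumulated this match

        def heard(self, utterance):
            if set(utterance.lower().split()) & PROFANITY:
                self.irritation += 0.1

        def booking_chance(self):
            # Swearing "will influence their decision-making,
            # possibly leading to more bookings."
            return min(1.0, self.base + self.irritation)

    class CareerMode:
        def __init__(self):
            self.abuse_reputation = 0

        def end_of_match(self, referee):
            if referee.irritation > 0.3:  # invented threshold
                self.abuse_reputation += 1  # storylines develop from this

    ref, career = Referee(), CareerMode()
    for outburst in ["bloody ref!", "damn it", "that was never a foul"]:
        ref.heard(outburst)
    print(round(ref.booking_chance(), 2))  # 0.4
    career.end_of_match(ref)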

December 17, 2012

The Sports Disaster Plan

I’ve been reading Bill Simmons’ Book of Basketball and in one of the footnotes he mentioned that the NBA (like all the other sports leagues) has a contingency plan in case a team loses all its players in a horrific accident. I guess it’s not surprising, but it’s kind of crazy to read the rules from this 1992 New York Times article. Here’s how it would work in basketball:

The National Basketball Association has a contingency plan that goes into effect if five or more players on any team “die or are dismembered,” according to Rod Thorn, the league’s operations director. The league would permit only five players on every other club to be protected, insuring that a fairly good player — the sixth best — could be drafted by the club suffering the tragedy. Each of the contributing clubs could lose only one player.
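The rules in that quote are concrete enough to turn into a few lines of code: every other club protects its top five, so the best available player is somebody's sixth man, and no contributing club loses more than one player. A sketch with made-up rosters:

    def disaster_draft(rosters, stricken, picks_needed=5):
        """rosters: {team: [players ranked best to worst]}. Every club
        except the stricken one protects its top five; the stricken team
        drafts first unprotected players, at most one per club."""
        drafted = []
        for team, players in rosters.items():
            if team == stricken or len(players) <= 5:
                continue
            drafted.append(players[5])  # the sixth-best player
            if len(drafted) == picks_needed:
                break
        return drafted

    rosters = {team: [f"{team}-{i}" for i in range(1, 13)]
               for team in ["NYK", "CHI", "BOS", "MIA", "SAS", "LAL"]}
    print(disaster_draft(rosters, stricken="NYK"))
    # ['CHI-6', 'BOS-6', 'MIA-6', 'SAS-6', 'LAL-6']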

December 17, 2012

Selling Enterprise Software via BBS

I read the crazy Wired essay/Kindle Single John McAfee’s Last Stand, about how the guy who made antivirus software famous ended up wanted by the government in Belize. It’s a wild, but not all that interesting, tale; however, I found this snippet about how he started selling his software very interesting:

He started McAfee Associates out of his 700-square-foot home in Santa Clara. As a hobby, he had been running an electronic bulletin board out of a corner of his living room. He had four phone lines patched to a computer that anybody could dial in to and upload or download comments and software. His business plan: create an antivirus program and give it away on his bulletin board. McAfee didn’t expect users to pay for it. His real aim was to get them to think the software was so necessary that they would install it on their computers at work. They did. Within five years, half the Fortune 100 companies were running it, and they felt compelled to pay the licensing fees. By 1990 McAfee was making $5 million a year with very little overhead or investment.

Companies like Yammer have been celebrated in the software industry for introducing a new, and very interesting, model wherein you sell to individuals first and then get enterprises to buy once there’s scale. Cool to see this has actually been around for a while.

December 16, 2012

Jumpstarting a Market

I’ve written in the past about what market leaders do to build categories, and frequently I cite Google as the best example of these strategies. Their approach of laying down fiber and providing really cheap, super fast internet in Kansas City is no exception. Like it did with Chrome (at least at the beginning), Google is trying to jumpstart a stagnant market:

If you are one of the lucky few Kansas City natives to have already signed up for Google Fiber, I don’t begrudge you one megabit; your ancestors had to deal with the Dust Bowl, you deserve a little extra bandwidth. But at its heart, Google’s attempt at being its own ISP is much more about forcing the entrenched service providers — the Verizon’s and Time Warner’s and AT&T’s of this world — to step up their games than it is about making this particular business a raving financial success. When I asked the Google spokeswoman what the ultimate goal of all this was, she replied that Google wants “to make the web better and faster for all users.” The implication is that they don’t necessarily want to do it all by themselves.

December 16, 2012

Two random thoughts about London

Just got back from a few days in London and there were two random thoughts I’ve wanted to share. Neither is new, but they popped into my head during this trip and I thought, “maybe I should blog about those,” so here we are.

Thing #1: We all know they drive on the left side of the road in the UK. This isn’t surprising anymore. What is surprising, to me at least, is every time you encounter a situation where pedestrian traffic is routed to the right. For instance, on all the escalators in the Tube, signs tell you to stand to the right and pass on the left. This is what we do in the US, which makes it seem very wrong in the UK. Also, when you walk the streets in New York it’s a fairly standard rule that traffic stays right. In the UK I feel like you constantly see people on both sides of the sidewalk walking in both directions. All of this makes me think that people naturally want to stay to the right (probably because most are right-handed). I have no idea whether this is true (I’m also not sure whether British folks will find this offensive, in which case I apologize). I just think you’ve got to pick one and stick to it. You wouldn’t find a random escalator or walkway in a high-traffic zone in the US where there are signs directing traffic to stay left.

Thing #2: One of the things I really like about London is how much ground-floor commercial space there is. In New York City the ground floor is almost entirely retail, and office work happens somewhere between the 2nd and 100th floors. I’m not sure why I like looking in at people working, but there’s something really interesting about walking past an office window during the day. It’s just not a view you really get in New York. (I’d say this has something to do with the fact that we’re looking for a new office, so I’m especially keen to see how others deal with their space, but this has fascinated me since well before I started a company.)

Alright, that’s it. Two very random observations.

December 15, 2012

The Humming Timestamp

Apparently electrical outlets give off an inaudible hum, which isn’t all that interesting in and of itself. Except that the hum changes frequency constantly, in minute ways, based on the supply of and demand for electricity. Scientists in the UK have discovered that the resulting pattern of changes is unique over time, which means it can be used to timestamp recordings. The gist:

Recordings made close to electrical power sources pick up a hum. Comparing the unique pattern of the frequencies on an audio recording with a database that has been logging these changes for 24 hours a day, 365 days a year provides a digital watermark: a date and time stamp on the recording. Philip Harrison, from JP French Associates, another forensic audio laboratory that has been logging the hum for several years, says: “Even if [the hum] is picked up at a very low level that you cannot hear, we can extract this information.”
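The matching step is essentially a sliding-window correlation: keep a years-long log of the grid's frequency deviations, extract the same deviation series from the recording's hum, and find where along the log it lines up best. A hedged sketch in Python; real forensic systems work from the raw audio, but here both sides are assumed to already be one-sample-per-second frequency series:

    import numpy as np

    def timestamp_recording(recording_hum, grid_log, log_start_epoch,
                            sample_period=1.0):
        """Slide the recording's mains-hum frequency series along the
        long-running grid log and return the epoch time (and correlation
        score) of the best alignment."""
        rec = np.asarray(recording_hum, dtype=float)
        rec -= rec.mean()
        log = np.asarray(grid_log, dtype=float)
        n = len(rec)
        best_offset, best_score = 0, -np.inf
        for offset in range(len(log) - n + 1):
            w = log[offset:offset + n]
            w = w - w.mean()
            denom = np.linalg.norm(rec) * np.linalg.norm(w)
            score = float(rec @ w) / denom if denom else 0.0
            if score > best_score:
                best_offset, best_score = offset, score
        return log_start_epoch + best_offset * sample_period, best_score

    # Toy demo: the "recording" is a noisy slice of a simulated grid log.
    rng = np.random.default_rng(0)
    grid = 50 + np.cumsum(rng.normal(0, 0.001, 10_000))  # 50 Hz plus drift
    recording = grid[6_200:6_500] + rng.normal(0, 0.0002, 300)
    when, score = timestamp_recording(recording, grid, log_start_epoch=0)
    print(when, round(score, 3))  # 6200.0 and a score near 1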

Science is pretty crazy sometimes.

December 12, 2012

Everything Communicates

Excuse this bit of bragging, but this makes me incredibly proud. Today, Christa Carone, CMO of Xerox, wrote this about Percolate on Forbes.com:

As an active Twitter user and scanner, I’m constantly prowling for Tweet-worthy articles and insights to share with my followers.  But, like every multi-tasker, over-committed, “not-enough-time-in-the-day” person I know, there are always competing demands for time that keep me from heeding the call of the little blue bird.

Thank goodness for Percolate, a small but fast-growing company that recognizes that marketing on the “social scale” requires content, content and more content, but only if it passes the relevancy test. Through algorithms, filters and other tools, Percolate scours the web and serves up content tailored to my specific areas of focus that I can review and easily share.

I’m grateful for and a tiny bit envious of this start-up. I marvel at how its founders quickly spotted a need and last year created a company that has scored a slew of clients and, in November, $9 million in funding. Besides that, everything this company does is on-brand, from its business cards and its Daily Brew email to the—yes—perkiness of its staffers.

One of my resolutions for 2013 is to spend more time learning from small companies like Percolate. Big organizations can be great marketers but often find it hard to act fast.  Frankly, “seize” isn’t something that is easily said or done.  But there are lessons that Goliaths like Xerox can learn from more nimble David-sized enterprises.

Beyond it being incredibly flattering, there are two things that make me especially happy reading this: First, it’s just the recognition around the brand. I spent a lot of time working with large companies with unbelievable brands and it’s fun to get a shot at building your own. I still believe deeply that there’s an opportunity within tech, especially on the enterprise side, to take advantage of the lack of thoughtful brands in the space. Second, and more importantly, it’s the recognition of people as part of the brand. When I worked for Naked Communications the tagline (or whatever you want to call it) was “everything communicates.” Brands aren’t built with collateral and style guides; they’re built through interactions, and, for us and many other companies, those interactions are with people just as much as they’re with software. Your brand is the total of all those parts and interactions and the responsibility for it sits with everyone in the organization, whether they’re on the front lines dealing with clients or they’re just out at a bar talking about their jobs.

It makes me incredibly proud to read something like this.

December 11, 2012

Blind Spots

Bill Simmons has a good article about Bill Russell, Kobe Bryant and leadership. I found the first paragraph especially interesting:

I spent five hours with Bill Russell last week and thought of Kobe Bryant twice and only twice. One time, we were discussing a revelation from Russell’s extraordinary biography, Second Wind, that Russell scouted the Celtics after joining them in 1956. Why would you scout your own teammates? What does that even mean? Russell wanted to play to their strengths and cover their weaknesses, which you can’t do without figuring out exactly what those strengths and weaknesses were. So he studied them. He studied them during practices, shooting drills, scrimmages, even those rare moments when Red Auerbach rested him during games. He built a mental filing cabinet that stored everything they could and couldn’t do, then determined how to boost them accordingly. It was HIS job to make THEM better. That’s what he believed.

The idea of scouting your own teammates is really interesting and clearly has business implications. We tend to spend a lot of time looking at the landscape and understanding our customers, but there’s always an opportunity to better understand the people around us. We sort of do it with reviews and goals inside organizations, but this seems like a different lens on management and leadership. Instead of just trying to look at people and understand how to help them grow, you look at how competition would exploit your team and use that to try to identify your blind spots.

December 10, 2012

The Other 1%?

Fortune posted this 1992 article about the Hells Angels. In general it’s a bit of a boring factual account of how the group came to be and how it makes money, but I really liked the bit about how they came to be called one-percenters:

What makes the Angels and the lesser outlaws so distinctive among criminal enterprises — and adds to the frustration of law enforcement officials — is that many Americans celebrate them and identify with them. Back in the 1950s, the American Motorcyclist Association, the voice of legitimate riders, pronounced that ”only 1%” of all riders were troublemakers. The outlaws gleefully accepted the label, and many still call themselves one-percenters. (The actual percentage is much smaller — counting the hangers-on police call associates, only about 0.2% of the estimated nine million motorcyclists in the U.S.) And plenty of people — including many who have never even sat on a motorcycle — like their style and applaud them for defying convention and authority.

Can’t imagine they’re still calling themselves that …

December 9, 2012

How Superman came to wear his underwear on the outside

Not exactly something I had ever thought about, but io9 has an interesting post about how comic book characters came to have their underwear on the outside and why the industry shouldn’t bail on the innovation now. The money quote:

Underpants on tights were signifiers of extra-masculine strength and endurance in 1938. The cape, showman-like boots, belt and skintight spandex were all derived from circus outfits and helped to emphasize the performative, even freak-show-esque, aspect of Superman’s adventures. Lifting bridges, stopping trains with his bare hands, wrestling elephants: these were superstrongman feats that benefited from the carnival flair implied by skintight spandex. [Artist Joe] Shuster had dressed the first superhero as his culture’s most prominent exemplar of the strongman ideal, unwittingly setting him up as the butt of ten thousand jokes.

December 8, 2012

What Blog?

An interesting statement by Bruce Sterling on memory, blogs, and media more generally:

As people get more comfortable with the metamedium of software which underlies all digital media, they get less and less concerned with whatever “new media” may call themselves. When weblogs are finally gone, people will say that there was never really such a thing as a “weblog” in the first place.

I think he’s right, and we’re already seeing it. I wonder if there’s a media law somewhere in there: we create new media, and then when we make them extinct we actually erase them from our collective memories.

December 6, 2012

NYC Christmas Trees

If you have been around NYC during December you’ve definitely seen the Christmas trees lined up in front of stores. I was walking past some this morning and wondered out loud on Twitter whether the sellers needed permits. Not surprisingly, Justin Kalifowitz knew the answer and pointed me to this New York Times story from 2003:

But Christmas tree vendors need neither permits nor First Amendment protection to spread their holiday cheer. They are entitled to what might be called the ”coniferous tree” exception, adopted by the City Council in 1938 over the veto of Mayor Fiorello H. La Guardia. The city’s administrative code allows that ”storekeepers and peddlers may sell and display coniferous trees during the month of December” on a city sidewalk without a permit, as long as they have the permission of owners fronting the sidewalk and keep a corridor open for pedestrians. (The law originally cited Christmas trees, but the religious reference was removed in 1984.)

December 3, 2012

Stocks as a Hobby

Unfortunately he ends up going in a bit of a different direction, but I think Felix Salmon’s notion that picking stocks is best understood as a hobby is an intriguing one:

I had a fascinating lunch, a couple of weeks ago, which lodged in my mind the idea that stock picking, at least when practiced by individuals, is best analyzed as an upper-middle-class hobby rather than as purely profit-focused investing activity. Once you start looking at it that way, suddenly a lot of behavior, which looks irrational under most lights, starts making a lot of sense.

November 29, 2012

Why is Android different?

There’s lots of talk about the numbers from Black Friday: Twitter and Facebook didn’t fare well, mobile was up, and iPhone destroyed Android. Horace at Asymco asks the obvious question about that last stat: Why?

This I consider to be a paradox: Why is Android attracting late adopters (or at least late adopter behavior) when the market is still emergent? We’ve become accustomed to thinking that platforms that look similar are used in a similar fashion. But this is clearly not the case. The shopping data is only one proxy but there are others: developers and publishers have been reporting distinct differences in consumption on iOS vs. Android and, although anecdotal, the examples continue to pile up.

He’s not satisfied with the idea that Android simply has a different demographic profile, given how many people own Android phones now. So what could it be?

November 28, 2012