This hasn’t been my best year for blogging. My last post was June and, before that, January. Such is the life of an entrepreneur and new dad. However, while I haven’t found time to do the sort of writing I used to, I am happy to say I did a fair amount of reading this year and couldn’t let the holidays pass without sharing some of my favorite longform.
If you haven’t read one of these lists before (2011, 2012, and 2015), the basic gist is it’s a list of the stuff I read this year that I liked the most. Much of it is longform journalism written in 2016, though, as the internet is wont to do, there’s lots of older writing, podcasts, and who knows what else in the mix. (If you’re so inclined, I also have a Twitter account that just tweets out the articles I favorite in Instapaper.)
Without any further ado … (and in no specific order) … the list (lots more commentary below):
- The Fighter – C.J. Chivers – New York Times Magazine – December 28, 2016
- The AI Revolution: The Road to Superintelligence Part 1 & Part 2 – Tim Urban – Wait But Why – January 22, 2015
- Citizen Khan – Kathryn Schulz – New Yorker – June 6, 2016
- In the Heart of Trump Country – Larissa MacFarquhar – New Yorker – October 10, 2016
- Why the Global 1% and the Asian Middle Class Have Gained the Most from Globalization – Branko Milanovic – Harvard Business Review – May 13, 2016
- Are We There Yet? – This American Life – July 29, 2016
- Arthur Kroeber vs. The Conventional Wisdom – Kaiser Kuo and Jeremy Goldkorn – Sinica – June 2016
- Mental Models I Find Repeatedly Useful – Gabriel Weinberg – Medium – July 5, 2016
- Learning Chess at 40 – Tom Vanderbilt – Nautilus – May 5, 2016
- Last Taboo – Wesley Morris – New York Times Magazine – October 27, 2016
- The Secret History of Tiger Woods – Wright Thompson – ESPN – April 21, 2016
- Alexander Litvinenko: the man who solved his own murder – Luke Harding – The Guardian – January 19, 2016
- The empty brain – Robert Epstein – Aeon – May 18, 2016
- The white flight of Derek Black – Eli Saslow – The Washington Post – October 15, 2016
On the cost of war
A few years ago I got the chance to spend some time with C.J. Chivers, the New York Times war correspondent. His book, The Gun, had just come out and Colin, Benjamin, and I were helping to get him set up on social media. We spent the day hanging out, discussing journalism, signing up for accounts, and talking about how extraordinary war photographers are. Since then Chivers has returned home and given up his role as an on-the-ground war reporter (a great longread from 2015), and his latest feature is actually in this weekend’s New York Times Magazine. The Fighter is a profile of former Marine Sam Siatta and his post-war struggles. What makes Chivers such an amazing war correspondent, beyond his ability to write and his willingness to dig indefinitely for a story (he became the preeminent expert on ammunition serial numbers), is his profound respect for the military and the men and women who serve. Chivers served in the Marines in the 80s and 90s and brings that to every story he writes, but it’s intensified in a story about a person he clearly believes could have been nearly any Marine.
[The Fighter – C.J. Chivers – New York Times Magazine – December 28, 2016]
On artificial intelligence
The article that probably blew my mind the most was actually written in January 2015. I had heard about Wait But Why’s two-part primer on AI, but hadn’t gotten around to reading the 25,000-word tome quite yet. Once I did, I was not disappointed. I went from knowing basically nothing about artificial intelligence to being unable to carry a conversation without bringing it up. Tim Urban, the author of Wait But Why, read every book and article on the topic and ties it all together concisely (seriously) and with some excellent stick figure drawings. Warning: It’s heavy, like human extinction heavy. A snippet:
And while most scientists I’ve come across acknowledge that ASI [artificial superintelligence] would have the ability to send humans to extinction, many also believe that used beneficially, ASI’s abilities could be used to bring individual humans, and the species as a whole, to a second attractor state—species immortality. Bostrom believes species immortality is just as much of an attractor state as species extinction, i.e. if we manage to get there, we’ll be impervious to extinction forever—we’ll have conquered mortality and conquered chance. So even though all species so far have fallen off the balance beam and landed on extinction, Bostrom believes there are two sides to the beam and it’s just that nothing on Earth has been intelligent enough yet to figure out how to fall off on the other side.
Both James and I liked the article so much that we asked Tim to lead off our Transition conference this year. (If you like the AI article, I’d also highly recommend his article on the Fermi Paradox and just about anything else.)
[The AI Revolution: The Road to Superintelligence Part 1 & Part 2 – Tim Urban – Wait But Why – January 22, 2015]
In a year of lots of bombast about immigrants (especially ones with the last name Khan), this incredibly well-researched profile of Zarif Khan, an Afghan who immigrated to Wyoming in the early 1900s, offers an intimate telling of the immigrant story of America. The conclusion has to be one of my favorites from the year:
Over and over, we forget what being American means. The radical premise of our nation is that one people can be made from many, yet in each new generation we find reasons to limit who those ‘many’ can be—to wall off access to America, literally or figuratively. That impulse usually finds its roots in claims about who we used to be, but nativist nostalgia is a fantasy. We have always been a pluralist nation, with a past far richer and stranger than we choose to recall. Back when the streets of Sheridan were still dirt and Zarif Khan was still young, the Muslim who made his living selling Mexican food in the Wild West would put up a tamale for stakes and race local cowboys barefoot down Main Street. History does not record who won.
[Citizen Khan – Kathryn Schulz – New Yorker – June 6, 2016]
Like everyone else, I read a lot about politics this year. Most of it I would never care to subject anyone to again, but along the way there were some pieces that stood out. To me, this New Yorker profile of Logan County, West Virginia, was the best telling of America’s divide. It’s a story we all know at this point, but part of what makes this article work so well is that it’s about more than just Donald Trump or income inequality or the rural/urban divide; it’s really the profile of a state and its unique culture.
Rounding out politics articles: The Case Against Democracy (New Yorker) provides context for why our system works the way it does and asks whether it could work better. This Election Was About the Issues (Slate) argues against the refrain that the election was about everything but the issues, suggesting that it was about the issues Americans actually care about:
I’m talking about issues that involve the fundamental arrangements of American life, issues of race and class and gender and sexual violence. These are the things we’ve argued about in the past year and change, sometimes coarsely, sometimes tediously, but very often illuminatingly. This has been, by all but the most fatuous measures, an issue-rich campaign.
Ezra Klein’s amazing profile of Hillary Clinton, Understanding Hillary (Vox), argued that the things that would make her good at governing are the same things that make her a bad politician, and it gave me hope.
It turned out that Clinton, in her travels, stuffed notes from her conversations and her reading into suitcases, and every few months she dumped the stray paper on the floor of her Senate office and picked through it with her staff. The card tables were for categorization: scraps of paper related to the environment went here, crumpled clippings related to military families there. These notes, Rubiner recalls, really did lead to legislation. Clinton took seriously the things she was told, the things she read, the things she saw. She made her team follow up.
And, of course, the “Goodbye Obama” pieces: David Remnick’s Obama Reckons with a Trump Presidency (New Yorker) and Barack Obama and Doris Kearns Goodwin: The Ultimate Exit Interview (Vanity Fair).
[In the Heart of Trump Country – Larissa MacFarquhar – New Yorker – October 10, 2016]
On income inequality
In that David Remnick profile of Obama I just mentioned is probably the best single quote I read this year about income inequality, one of the defining issues of 2016:
“The prescription that some offer, which is stop trade, reduce global integration, I don’t think is going to work,” he went on. “If that’s not going to work, then we’re going to have to redesign the social compact in some fairly fundamental ways over the next twenty years. And I know how to build a bridge to that new social compact. It begins with all the things we’ve talked about in the past—early-childhood education, continuous learning, job training, a basic social safety net, expanding the earned-income tax credit, investments in infrastructure—which, by definition, aren’t shipped overseas. All of those things accelerate growth, give you more of a runway. But at some point, when the problem is not just Uber but driverless Uber, when radiologists are losing their jobs to A.I., then we’re going to have to figure out how do we maintain a cohesive society and a cohesive democracy in which productivity and wealth generation are not automatically linked to how many hours you put in, where the links between production and distribution are broken, in some sense. Because I can sit in my office, do a bunch of stuff, send it out over the Internet, and suddenly I just made a couple of million bucks, and the person who’s looking after my kid while I’m doing that has no leverage to get paid more than ten bucks an hour.”
With that said, my pick comes from economist Branko Milanovic, who wrote an article for Harvard Business Review titled Why the Global 1% and the Asian Middle Class Have Gained the Most from Globalization. Though the data has been questioned, the conclusion of the article hasn’t: Globalization has spread wealth around the world in some incredible ways … and it has happened, at least to some extent, at the expense of the Western middle class.
[Why the Global 1% and the Asian Middle Class Have Gained the Most from Globalization – Branko Milanovic – Harvard Business Review – May 13, 2016]
On the rest of the world (and podcasts)
It just so happens that my two favorite podcast episodes this year were on foreign affairs. The first comes from the always amazing This American Life, which spent time in refugee camps in Greece speaking to people about their lives. As always, This American Life gives the most accurate macro view by focusing on the micro. The second comes from a show I’d never heard of before on China called Sinica. In the episode they talk to Arthur Kroeber, author of China’s Economy: What Everyone Needs to Know, who basically argues that China is actually following America’s growth playbook (called the American System), which included lots of state-led development, high tariffs, and even tons of intellectual property theft (from Europe at that time). Basically he argues we should stop being so surprised by what’s happening there.
Beyond those two, I listened to a lot of Marc Maron’s WTF (always skip the first 10 minutes) and really enjoyed his interview with Louie Anderson, who I didn’t realize was a serious standup. (Part of why I really enjoy WTF is that it’s effectively a show about the creative process. When he goes deep with someone on how they do their craft I find it endlessly fascinating. While the Louie episode isn’t exactly that, it’s also just loads of fun to listen to anyone serious about anything talk to someone they so clearly respect.) Gladwell’s Revisionist History was pretty good (though sometimes a bit preachy). His episode on Generous Orthodoxy was just a very well told story (and when you’re done, go read the letter the show was based on).
[Are We There Yet? – This American Life – July 29, 2016] [Arthur Kroeber vs. The Conventional Wisdom – Kaiser Kuo and Jeremy Goldkorn – Sinica – June 2016]
As you may or may not know, I became a parent in 2015. Since my daughter was born I’ve been keeping a collection of parenting articles that don’t suck (a surprisingly hard thing to find, actually). My favorite of 2016 was probably Tom Vanderbilt’s piece on learning chess with his daughter. It’s both a well-told story and a set of really good lessons on the differences between how adults and children learn. A snippet:
Here was my opening. I would counter her fluidity with my storehouses of crystallized intelligence. I was probably never going to be as speedily instinctual as she was. But I could, I thought, go deeper. I could get strategic. I began to watch Daniel King’s analysis of top-level matches on YouTube. She would sometimes wander in and try to follow along, but I noticed she would quickly get bored or lost (and, admittedly, I sometimes did as well) as he explained how some obscure variation had “put more tension in the position” or “contributed to an imbalance on the queen-side.” And I could simply put in more effort. My daughter was no more a young chess prodigy than I was a middle-aged one; if there was any inherited genius here, after all, it was partially inherited from me. Sheer effort would tilt the scales.
[Learning Chess at 40 – Tom Vanderbilt – Nautilus – May 5, 2016]
On mental models
While not a longread in quite the way the others are, the piece that has probably dug its way deepest into my brain is this list of mental models from Gabriel Weinberg, Founder & CEO of the search engine DuckDuckGo. He was inspired to write his mental models down because of something Charlie Munger, Warren Buffett’s business partner, said about them: “80 or 90 important models will carry about 90% of the freight in making you a worldly-wise person.” I’ve been pretty obsessed with this idea myself because I think we (as in people who talk about business) often over-emphasize case studies and specific stories, while under-emphasizing the model that can help someone make a decision that can lead to a similar outcome. I’ve been keeping my own list of models since I read this and might share them some time down the road.
[Mental Models I Find Repeatedly Useful – Gabriel Weinberg – Medium – July 5, 2016]
What might be the best essay of the year comes from New York Times culture critic Wesley Morris and explores what he calls “the last taboo”: Black penises in popular culture. Part of what makes for great cultural criticism is exposing you to something that you hadn’t noticed before but can’t ever not notice again, and Morris does just that. Race was obviously a big issue in 2016 and the article explores just one of the many ways racism roots in popular culture and perpetuates itself.
[Last Taboo – Wesley Morris – New York Times Magazine – October 27, 2016]
Most of the year-end lists I looked at included ESPN’s Tiger Woods profile as their top sports story of the year and it’s pretty hard to deny it. It’s engaging and breaks one of the crazier stories of the year: That Tiger Woods’s undoing may have been, at least in part, a result of his obsession with the Navy SEALs.
While the Tiger story is the flashiest and probably my favorite, looking back at my list of favorites there’s actually a nice collection from a wide variety of sports. Nick Paumgarten’s delightful profile of 14-year-old climbing sensation Ashima Shiraishi made me want to get my 1.5-year-old into the climbing gym. The New York Times profile of Yannis Pitsiladis, a scientist trying to crack the puzzle of the two-hour marathon, was probably the sports story I talked about the most. Though not strictly a sports story, Deadspin’s profile of the meteoric rise and fall of sportswriter Jennifer Frey was gripping and sad. Finally, though most definitely not from this year, I went back and read John McPhee’s 1965 profile of Princeton basketball sensation Bill Bradley.
[The Secret History of Tiger Woods – Wright Thompson – ESPN – April 21, 2016]
Luckily for everyone who writes true crime, David Grann is working on a book, which means he didn’t submit any competition this year. Easily my favorite this year was California Sunday’s article about “Somerton Man,” a nearly seventy-year-old mystery about a man who washed up dead on a beach in Australia with nothing to identify him but a bit of a poem. Unfortunately this was from last year and I just didn’t find it until January, so I’ll reserve the spot for something actually written in 2016. Also missing out by a year (though I just discovered it) was this excellent story from the New Yorker about what actually happens when pirates take your ship.
That then leaves two crime(ish) articles to choose from: the excellent Guardian piece about the poisoning of Russian enemy of the state Alexander Litvinenko and New Republic’s piece on a mystery man discovered in Georgia. Considering the role of Russia on the world stage in 2016 and the level of reporting in the piece, I’ve got to give the nod to the Guardian on this one.
[Alexander Litvinenko: the man who solved his own murder – Luke Harding – The Guardian – January 19, 2016]
On the way we experience the world
One way I judge writing is to see how it lodges itself in my brain. I know something was particularly good when I find myself thinking and talking about it for weeks and months afterward. Sometimes the best writing doesn’t hit you right away; it takes some time to percolate. This Aeon piece on how our brains process information happens to be one of those. It argues that the theory that the brain operates like a computer has led us down a path of research that has set back our understanding of the brain. It turns out we’ve got a long history of understanding our brains through the lens of the latest tech:
By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.
Speaking of brains, Blake Ross’s very personal essay about how he came to realize he has aphantasia, or no mind’s eye, is an exercise in trying to imagine the unimaginable (for most of us). Ross doesn’t picture things when he thinks about them and didn’t realize the rest of the world did until quite recently.
[The empty brain – Robert Epstein – Aeon – May 18, 2016]
I’ve got a few favorites left open in tabs that I figure I’ll choose one from. These didn’t quite fit into the previous categories, and I’ll try to get through them quickly(ish).
The white flight of Derek Black is the amazing story of the son of a prominent white nationalist who found his views melting away as he exposed himself to the outside world of diversity.
State of the Species is Charles C. Mann’s 2013 essay on human plasticity and the possibility it holds to help us solve the world’s problems. (Charles gave a version of the essay in presentation form at Transition this year.)
Politico’s We’re the Only Plane in the Sky is an oral history of September 11th on Air Force One.
Finally, to end with a bit of inspiration, this Chuck Close profile from the Times Magazine included this amazing bit:
Three weeks earlier, Simon had released a new album, “Stranger to Stranger,” with its cover taken from a portrait that Close painted of the musician a few years back. Then, the day before I saw Close, Simon announced that the album would be his last. “I called him up, and I said, ‘Artists don’t retire,’ ” Close told me. “I think I talked him out of it. I said: ‘Don’t deny yourself this late stage, because the late stage can be very interesting. You know everybody hated late de Kooning, but it turned out to be great stuff. Late Picasso, nobody liked it, and it turned out to be great.’ ” Close reminded Simon that Matisse was unable to continue painting late in life. “Had Matisse not done the cutouts, we would not know who he was,” Close said. “Paul said, ‘I don’t have any ideas.’ I said: ‘Well, of course you don’t have any ideas. Sitting around waiting for an idea is the worst thing you can do. All ideas come out of the work itself.’ ”
He pointed out that Simon is 74, the same age he was early last summer. “I told him, ‘When you get to be my age, you’ll see,’ ” he said with a laugh.
[The white flight of Derek Black – Eli Saslow – The Washington Post – October 15, 2016]
As everyone now knows, the UK voted to leave the European Union today. I happen to be in London this week and so have been paying close attention to the vote and having many conversations with family, friends, and colleagues about how it came to this and what it means for the future. I’m no economist or pundit, so I’ll leave those takes to the professionals, but I wanted to take a minute to share a few thoughts on the obvious parallels between what’s happening here in the UK and with Trump in the US.
The two most enlightening things I’ve read about Trump happen to come from FiveThirtyEight. The first, published at the end of April, spells out the difference between what we think of as “normal America” and what the reality actually is:
We all, of course, have our own notions of what real America looks like. Those notions might be based on our own nostalgia or our hopes for the future. If your image of the real America is a small town, you might be thinking of an America that no longer exists. I used the same method to measure which places in America today are most similar demographically to America in 1950, when the country was much whiter, younger and less-educated than today. Of course, nearly every place in the U.S. today looks more like 2014 America than 1950 America. But the large metros that today come closest to looking like 1950 America are Lancaster, Pennsylvania; Ogden and Provo, in Utah; and several in the Midwest and South.
Normal America, the article explains, is actually best represented (by similarity to American population across “age, educational attainment, and race and ethnicity”) in cities like New Haven, Connecticut or Tampa, Florida. That means when people say that politicians or elites are out of touch with normal America, it may be true, but that’s not because normal America is still small-town America. We are a more diverse, older, and more educated country than we were 50 years ago.
The second bit, also from FiveThirtyEight, is about why Hispanics and other minority groups are more optimistic about America than average Americans:
But for many non-whites, the pattern [not very concerned about the present, pessimistic about the future] is the opposite: They are concerned about the present but optimistic about the future. In the Pew poll, Hispanics were sober about their immediate financial circumstances — 40 percent said their finances were in good shape, compared with 43 percent for the public at large — but they see brighter days ahead. More than 70 percent expect their children to be better off than they are. Previous polls have found similar results for other minority groups: According to 2014 data from the General Social Survey, three-quarters of blacks and Hispanics expect their children to enjoy a higher standard of living than they do, compared to just half of whites. A poll commissioned by The Atlantic last fall found that blacks, Hispanics and Asians were far more likely than whites to report that “the American Dream is alive and well.”
Put those things together and what you get is clear: “Make America Great Again” actually means make America look more like it did in 1960. The problem, of course, is that America was a pretty bad place for a lot of Americans at that point (women, minorities, and LGBT to name a few). But most people don’t remember that, because nostalgia is broken and doesn’t work that way. From a 2013 New York Times article on nostalgia:
Happy memories also need to be put in context. I have interviewed many white people who have fond memories of their lives in the 1950s and early 1960s. The ones who never cross-examined those memories to get at the complexities were the ones most hostile to the civil rights and the women’s movements, which they saw as destroying the harmonious world they remembered.
But others could see that their own good experiences were in some ways dependent on unjust social arrangements, or on bad experiences for others. Some white people recognized that their happy memories of childhood included a black housekeeper who was always available to them because she couldn’t be available to her children.
Put it all together and you have a confluence of circumstances that tells a pretty good story for how both the US and the UK have gotten to now and what it really means to make a country great again. Of course, like others, I don’t have answers of how to combat this, but understanding what we’re up against is the first step.
I believe this marks two weeks of blog posts for me, which is a pretty major milestone. In celebration I’m taking the day off and instead sharing our new product video from Percolate. I’ve spent the last four years working with an incredible group of folks building out something that I’m very proud of. This video does a really nice job of not just showing that off, but also speaking to the Percolate brand.
Without any further ado …
I love this short New Yorker video about Greg Packer, a man who really loves to see his name in print.
Yesterday James, my co-founder at Percolate, sent me over a really interesting nugget about how Apple structures its company, about 35 minutes into this Critical Path podcast. Essentially Horace (from Asymco) argues that Apple’s non-cross-functional structure actually allows it to innovate and execute far better than a company structured in a more traditional way. As opposed to most other companies, where managers are encouraged to pick up experience across the enterprise, Apple encourages (or forces) people to stay in their role for the entirety of their career. On top of that, roles are not horizontal by product (head of iPhone) and instead are vertical by discipline (design, operations, technologies) and also quite siloed. He goes on to say that the only parallel he could think of is the military, which basically operates that way. (I know I haven’t done the best job articulating it; that’s because, as I listen again, I don’t necessarily think the thesis is articulated all that well.)
Below is my response back to James:
While I totally agree with what he says about the structure (that they’re organized functionally and it works for them), I’m not sure you can just conclude that’s ideal or drives innovation. The requirement of an org structure like that is that all vision/innovation comes from the top and moves down through the organization. That’s fine when you have someone like Jobs in charge, but it’s questionable what happens when he leaves (or when this first generation he brought up leaves maybe). Look at what happened when Jobs left the first time as evidence for how they lost their way. Apple is a fairly unique org in that it has a very limited number of SKUs and, from everything we’ve heard, Jobs was the person driving most/all.
My question back to Horace would be what will Apple look like in 20 years. IBM and GE are 3x older than Apple is and part of how they’ve survived, I’d say, is that they’ve built the responsibility of innovation into a bit more of a cross-functional discipline + centralized R&D. I don’t know if it matters, but if I was making a 50 year bet on a company I’d pick GE over Apple and part of it is that org structure and its ability to retain knowledge.
The military is actually a perfect example: Look at the struggles they’ve had over the last 20 years as the enemy stopped being similarly structured organizations and moved to being loosely connected networks. History has shown us over and over that centralized organizations struggle with decentralized enemies. Now the good news for Apple is that everyone else is pretty much playing the same highly organized and very predictable game (with the exception of Google, which is in a functionally different business, and Samsung, which, because of its manufacturing resources and Asian heritage, exists in a little bit of a different world).
Again, in a 10 year race Apple wins with a structure like this. But in a 50 year race, in which your visionary leader is unlikely to still be manning the helm, I think it brings up a whole lot of questions.
In response to my little post about describing the past and present, Jim, who reads the blog, emailed me to say it could be referred to as an “atemporal present,” which I thought was a good turn of phrase. I googled it and ran across this fascinating Guardian piece explaining their decision to get rid of references to today and yesterday in their articles. Here’s a pretty large snippet:
It used to be quite simple. If you worked for an evening newspaper, you put “today” near the beginning of every story in an attempt to give the impression of being up-to-the-minute – even though many of the stories had been written the day before (as those lovely people who own local newspapers strove to increase their profits by cutting editions and moving deadlines ever earlier in the day). If you worked for a morning newspaper, you put “last night” at the beginning: the assumption was that reading your paper was the first thing that everyone did, the moment they awoke, and you wanted them to think that you had been slaving all night on their behalf to bring them the absolute latest news. A report that might have been written at, say, 3pm the previous day would still start something like this: “The government last night announced …”
All this has changed. As I wrote last year, we now have many millions of readers around the world, for whom the use of yesterday, today and tomorrow must be at best confusing and at times downright misleading. I don’t know how many readers the Guardian has in Hawaii – though I am willing to make a goodwill visit if the managing editor is seeking volunteers – but if I write a story saying something happened “last night”, it will not necessarily be clear which “night” I am referring to. Even in the UK, online readers may visit the website at any time, using a variety of devices, as the old, predictable pattern of newspaper readership has changed for ever. A guardian.co.uk story may be read within seconds of publication, or months later – long after the newspaper has been composted.
So our new policy, adopted last week (wherever you are in the world), is to omit time references such as last night, yesterday, today, tonight and tomorrow from guardian.co.uk stories. If a day is relevant (for example, to say when a meeting is going to happen or happened) we will state the actual day – as in “the government will announce its proposals in a white paper on Wednesday [rather than ‘tomorrow’]” or “the government’s proposals, announced on Wednesday [rather than ‘yesterday’], have been greeted with a storm of protest”.
What’s extra interesting about this to me is that it’s not just about the time you’re reading that story, but also the space the web inhabits. We’ve been talking a lot at Percolate lately about how social is shifting the way we think about audiences since for the first time there are constant global media opportunities (it used to happen once every four years with the Olympics or World Cup). But, as this articulates so well, being global also has a major impact on time since you move away from knowing where your audience is in their day when they’re consuming your content.
I’m sure you’ve all seen this quote. It’s attributed to Robert Stephens, founder of Geek Squad, and goes something like: “Advertising is the tax you pay for being unremarkable.” (I was reminded of it most recently reading Josh Porter’s blog, Bokardo.) It sounds good and, at first blush, correct, but it’s not, for lots of reasons.
Broadly, the line between advertising, marketing, branding, and communications has always been a blurry one. Depending on who you talk to they have a very different definition. For the purposes of the quote, let’s assume when Stephens was talking about advertising he was specifically referring to the buying of media space across platforms like television, magazines, and websites.
With that as the working definition, there are lots of complicated reasons big companies advertise their products. Here are a few:
- Distributors love advertising: If you’re a CPG company you advertise as much for the supermarkets as you do for your product. The more money you spend, the better spot they’re willing to give you on the shelf (the thought being that people will be looking for your product). I don’t think there is anyone out there who would argue shelf placement doesn’t matter. At the end of the day supermarkets are your customer if you’re a CPG company, so keeping them happy is a pretty high-priority job.
- Advertising is good at making people think you’re bigger than you are: Sometimes a company or brand wants to “play above its weight,” making people think it’s bigger than it actually is. When we see something on TV or in print, we mostly assume there is a big corporation behind it. Sometimes that’s more important than actually selling the product.
- Sometimes you’re not selling a product at all: There are many companies who advertise for reasons wholly disconnected from their product. GE, for example, isn’t running TV commercials about wind turbines to solely try to communicate with the thousands of people who are potentially in the market for a multi-million dollar purchase. A part of why they do it is to communicate with the public at large who is both a major shareholder for the company and also the end consumer of many of their products (many planes we fly on run GE engines and our electricity probably wouldn’t reach our house without GE products). How remarkable their products are has no bearing in this case, since we would never actually be in the market for the vast majority of the things they produce.
Broadly, though, the point I’m trying to make is that while many write off advertising as having no purpose (or being “a tax”), it’s just not true. What’s more, as advertising becomes a more seamless part of the process of being a brand in social, I think this will only become more true. If you see a piece of content performing well on Twitter or Facebook why would you not pay to promote that content and see it reach an audience beyond the core? At that point you’ve eliminated the biggest challenge traditionally associated with advertising (spending tons of money to produce something and having no idea whether it will actually have an effect on people). Seems to me if you’re not willing to entertain the idea you’re just standing on principle.
In what I assume is a response to this article that was floating around about placebo buttons (buttons that are there to make you feel better, but don’t do anything), William Gibson tweeted this:
I love the internet. That’s all.
This morning I woke up to this Tweet from my friend Nick:
It’s great to have friends who discover interesting stuff and send it my way, so I quickly clicked over and read Jeff’s piece on sponsored content and media as a service. I’m going to leave the latter unturned as I find myself spending much less time thinking about the broader state of the media since starting Percolate two-and-a-half years ago. But the former, sponsored content, is clearly a place I play, and I was curious to see what Jarvis thought.
Quickly I realized he thought something very different than me (which, of course, is why I’m writing a blog post). Mostly I started getting agitated right around here: “Confusing the audience is clearly the goal of native-sponsored-brand-content-voice-advertising. And the result has to be a dilution of the value of news brands.” While that may be true in the advertorial/sponsored content/native advertising space, it misses the vast majority of content being produced by brands on a day-to-day basis. That content is being created for social platforms like Facebook, Twitter, Instagram, and the such by brands who have acquired massive audiences, frequently much larger than those of the media companies Jarvis is referring to. Again, I think this exists outside native advertising, but if Jarvis is going to conflate content marketing and native advertising, then it seems important to point out. To give this a sense of scale, the average brand had 178 corporate social media accounts in January 2012. Social is where they’re producing content. Period.
The second issue came in a paragraph about the scalability of content for brands:
Now here’s the funny part: Brands are chasing the wrong goal. Marketers shouldn’t want to make content. Don’t they know that content is a lousy business? As adman Rishad Tobaccowala said to me in an email, content is not scalable for advertisers, either. He says the future of marketing isn’t advertising but utilities and services. I say the same for news: It is a service.
Two things here: First, I agree that the current ways brands create content aren’t scalable. That’s because they’re using methods designed for creating television commercials to create 140-character Tweets. However, to conclude that content is a lousy business is missing the point a bit. Content is a lousy business when you’re selling ads around that content. The reason for this is reasonably simple: You’re not in the business of creating content, you’re in the business of getting people back to your website (or to buy your magazine or newspaper). Letting your content float around the web is great, but at the end of the day no eyeballs mean no ad dollars. But brands don’t sell ads, they sell soap, or cars, or soda. Their business is somewhere completely different and, at the end of the day, they don’t care where you see their content as long as you see it. What this allows them to do is outsource their entire backend and audience acquisition to the big social platforms and just focus on the day-to-day content creation.
Finally, while it’s nice to think that more brands will deliver utilities and services on top of the utilities and services they already sell, delivering those services will require the very audience they’re building on Facebook, Twitter, and the like to begin with.
Coming back from the Brooklyn Home Depot today I went to look up the word collision. My mom, who was in the car with me, mentioned it looked funny spelled (correctly) on a sign and we were checking that it was actually “LL” and not “SS”. I Googled it and found out it was correct, but it was the second result that caught my eye: the 1960 New York mid-air collision. I had never heard of it and neither had my dad, who grew up in the city (I’m assuming it only turned up because I was driving through Park Slope at the time).
Anyway, it turns out in 1960 two planes collided over Staten Island and one, the larger of the two, was able to continue flying until finally crashing down in Park Slope about six blocks from where I live. Scouting New York has an excellent account and follow-up with comments by folks who remember the accident, which had one survivor: an 11-year-old boy who died the following day.
The Times has an excellent little video with stills and voiceover from the reports of the day. All around a crazy scene.
After it started raining I decided to redesign my blog. There wasn’t much reason other than looking for a fun project to work on and finding the old version increasingly tough on the eyes (plus terrible on the phone). The new version is simpler, responsive for mobile, and has bigger fonts (for whatever that’s worth).
I’d like to say this means I’m going to write more, but that doesn’t seem all that likely. I mean I’ll do my best (and have the last few days), but it’s amazing how often life gets in the way of blogging. One of the amazing things about RSS feeds and email subscriptions, though, is that it doesn’t really matter how frequently I actually update this thing because you’ll hear about it. For what it’s worth I’ve also got a Twitter feed for new posts from the blog at @NoahBrier.
One of the things I’ve been thinking about lately is that it feels like there’s a big opportunity for blogs again. While everything has gotten shorter, it’s left a pretty wide door open for folks who want to write thoughtful stuff. I think it’s why we’ve seen thoughtful bloggers ascend quickly (someone like Horace at Asymco comes to mind). Again, not sure that means I’ll write more, but it certainly feels like a good time to be doing so.
When I first learned to write code a few years ago I taught myself PHP. I still contend that was/is the very best choice for someone just starting out as it offers the lowest barrier to entry in making things happen on the web. Between WAMP/MAMP and the fact that most vanilla webhosts support PHP by default, it gives someone just coming into building applications a very simple tool to get started with.
This answer is not the most popular with engineers, who (sometimes fairly) see weaknesses and sloppiness in PHP. The counterpoint I offer is that I’m not suggesting it’s a good language, but rather that someone who is just getting started needs as few barriers as possible to getting something up and running. PHP, I still contend, is the best tool for that job.
The problem was, all that work was leading me to shy away from writing code when I had an idea. Instead of spending time writing code I knew I’d spend it setting up servers and the such. I tried AWS and even Heroku, and both still left me with what felt like an imbalance between setup and coding. What I started to realize is that as someone who doesn’t write code every day, I want tools that optimize the amount of time I actually spend writing code. That, after all, is what I enjoy. (I’m not sure if I’ve ever written it here, but the feeling I get writing code is unlike any of the other work I’ve done. There’s a beauty in the simplicity of code: While something can always be more optimized or elegant, at the end of the day when it works, it works, and when it doesn’t, it tells you why.)
Anyway, though I can’t remember how I got started, I discovered Google AppEngine about six months ago and it’s been a total revelation for me. All of a sudden I’m excited to take an idea to Sublime and get busy because I know I’ll waste zero time doing anything I don’t want to. Google handles data storage, queuing, routing, and pretty much anything else I ever need and, while there are certainly limitations (mostly around package management), the pros outweigh the cons by a huge amount.
About two months ago I thought it might be fun to try teaching an introduction to Python class using AppEngine. It would give me a chance to continue to test my theory that the best way to teach people to write code is to start them with GET/POST and, thanks to AppEngine, getting started and getting deployed would be as easy as clicking the buttons in their little OS X app. I made a little repository that I shared with the Percolators who took that class a few months ago, and I thought it might be worth sharing with everyone else. It’s nothing fancy, but it’s got the basics of GET, POST, URL routing, and using the data store. Ideally it’s a nice little intro to writing code on the web. So, if you’re new to AppEngine, Python, or code in general, here’s how to get started:
- Download AppEngine for Python
- Download my intro files from Github
- Open AppEngine locally and File > Add Existing Application, then Browse and add the folder you just downloaded.
- Hit Run in AppEngine and then Browse, which will open your site (running on your local server) in your browser.
- From there open up the files in your favorite editor (I prefer Sublime) and start playing around. Don’t worry, you can’t really break anything and, when you do, Python will tell you exactly what you did wrong (to the line of code).
That’s it. Good luck, enjoy, and let me know how it goes.
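If you want a feel for what the GET/POST basics in the repo look like, here’s a rough sketch of the idea. To be clear, this isn’t the actual class code (the repo uses AppEngine’s own handler classes and data store); it’s a framework-agnostic stand-in written as plain WSGI so it runs anywhere, with an in-memory list standing in for the datastore. The route and the “message” field name are just illustrative:

```python
# A minimal sketch of GET/POST handling and routing, in the spirit of
# the intro repo. Plain WSGI, no AppEngine required. Hypothetical names.
from urllib.parse import parse_qs

MESSAGES = []  # in-memory stand-in for the AppEngine datastore


def app(environ, start_response):
    method = environ["REQUEST_METHOD"]
    path = environ.get("PATH_INFO", "/")

    if path == "/" and method == "GET":
        # GET: render whatever has been stored so far.
        body = "\n".join(MESSAGES) or "No messages yet."
    elif path == "/" and method == "POST":
        # POST: read the form body and store the submitted message.
        size = int(environ.get("CONTENT_LENGTH") or 0)
        form = parse_qs(environ["wsgi.input"].read(size).decode())
        MESSAGES.append(form.get("message", [""])[0])
        body = "Saved."
    else:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"Not found"]

    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body.encode()]
```

The nice thing about starting here is that the whole request/response loop fits on one screen: the method tells you what the browser wants to do, the path tells you where, and everything else is just reading and writing strings.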
Yesterday morning I lay in bed and watched Twitter fly by. It was somewhere around 7am and lots of crazy things had happened overnight in Boston between the police and the marathon bombers. I don’t remember exactly where things were in the series of events when I woke up, but while I was watching, the still-on-the-loose suspect’s name was released for the first time. As reports started to come in and then, later, get confirmed, people on Twitter did the same thing as me: They started Googling.
As I watched the tiny facts we all uncovered start to turn up in the stream (he was a wrestler, he won a scholarship from the city of Cambridge, he had a link to a YouTube video) I was brought back to an idea I first came across in Bill Wasik’s excellent And Then There’s This. In the book he posits that as a culture we’ve become more obsessed with how a thing spreads than the thing itself. He uses the success of Malcolm Gladwell’s Tipping Point to help make the point:
Underlying the success of The Tipping Point and its literary progeny [Freakonomics] is, I would argue, the advent of a new and enthusiastically social-scientific way of engaging with culture. Call it the age of the model: our meta-analyses of culture (tipping points, long tails, crossing chasms, ideaviruses) have come to seem more relevant and vital than the content of culture itself.
Everyone wanted to be involved in “the hunt,” whether it was on Twitter and Google for information about the suspected bomber, on the TV where reporters were literally chasing these guys around, or the police who were battling these two young men on a suburban street. Watching the new tweets pop up I got a sense that the content didn’t matter as much as the feeling of being involved, the thrill of the hunt if you will. As Wasik notes, we’ve entered an age where how things spread through culture is more interesting than the content itself.
To be clear, I’m not saying this is a good or a bad thing (I do my best to stay away from that sort of stuff), but it’s definitely a real thing and an integral part of how we all experience culture today. When I opened the newspaper this morning it was as much to see how much I knew and how closely I’d followed as it was to learn something new about the chase. After reading the cover story that recounted the previous day’s events I turned to Brian Stelter’s appropriately titled News Media and Social Media Become Part of a Real-Time Manhunt Drama.
I’ve written in the past about how a big part of what separated McLuhan from the rest of the pack was his ability to separate his morals from his observations. Well, I particularly liked this explanation of McLuhan’s approach from the introduction to the newest edition of The Gutenberg Galaxy: “We have to remember that Marshall McLuhan portrayed himself as an explorer and not as an explainer of media environments.”
I’m guessing you heard about this, but earlier this year the University of California introduced a new identity system. It looked something like this:
As people are wont to do, they freaked out. In fact, they freaked out enough that the University eventually decided to drop the new logo. Now, with the controversy in the rearview mirror, I’ve read/listened to a few post-mortems on how and why something like this happened and I felt like chiming in. My credentials, like most commenters, are pretty thin, but I think they give me an interesting perspective. Beyond spending a sort of ridiculous amount of time thinking about brands, overseeing a product team including three designers and previously working in advertising overseeing creative teams for some time, I also built Brand Tags, the largest free database of perceptions about brands. I am, however, not a designer.
That last bit, especially, shapes my perception on conversations about design.
Okay, with disclosures behind us, a bit more background: When this new logo was introduced to the public (though apparently it had been on a roadshow for some time before it showed up on the web), it was misinterpreted as a replacement for the official seal of the University of California system. That seal looks like this:
This, apparently, was inaccurate. The new logo would not be replacing the seal, but rather helping to unify the various logos that had popped up across the different UC schools (the script Cal and UCLA logos are two examples). As occasionally happens the digerati spread an idea that wasn’t true. I know this isn’t shocking, but to be fair to all the bloggers on this one, the University hardly helped its case when it produced this video as a companion piece to explain the new identity:
I know this all seems like a slightly exhaustive bit of background, especially if you’ve been following this story, but I think it’s all important. In a long piece on RockPaperInk which spurred this piece, Christopher Simmons, a designer and former AIGA president, writes:
“Designers too often judge logos separate from their system…without understanding that one can’t function without the other,” criticized Paula Scher when I asked her views on the controversy, “It’s the kit of parts that creates a contemporary visual language and makes an identity recognizable, not just the logo. But often the debate centers on whether or not someone likes the form of the logo, or whether the kerning is right.” While acknowledging that all details are important, Scher also calls these quibbles “silly.” “No designer on the outside of the organization at hand is really qualified to render an informed opinion about a massive identity system until it’s been around and in practice for about a year,” she explains, “One has to observe it functioning in every form of media to determine the entire effect. This [was] especially true in the UC case.”
Which I mostly agree with. Logos don’t exist outside the system (for the most part) and, even more importantly, they don’t exist outside the collective consciousness they grow up in. This is something I got in quite a few arguments about while I was running Brand Tags. I would get an email from a company no one had ever heard of asking for me to post their logo, to which I invariably responded “no”. My reasoning, as I explained at the time, was that the point of the site was to measure brand perception and for people to have a perception, you need a brand, which you don’t have if no one knows who you are. Brands, as I’ve expressed in the past, live in people’s heads. They are the sum total of perceptions about them.
This is part of what makes it so tough to judge any sort of logo: Lack of context. Even if you see the way the system works, you don’t have the rest of the context that would come with experiencing it in the wild. If you’re a high school senior and the new UC logo is on a sweatshirt worn by the girl you had a crush on who’s home for her freshman Christmas break, it’s going to have a very different meaning than if your first encounter is in the U.S. News & World Report list of top US universities. Context shapes experience and we can’t forget that.
Which makes something Simmons writes later so confusing for me:
Design as a discipline is challenged by this notion of democracy, particularly in a viral age. We have become a culture mistrustful of expertise—in particular creative expertise. I share [UC Creative Director] Correa’s fear that this cultural position stifles design as designers increasingly lose ownership of the discourse. “If deep knowledge in these fields is weighed against the “likes” and “tastes” of the populace at large,” she warns, “We will create a climate that does not encourage visual or aesthetic exploration, play or inventiveness, since the new is often soundly refused.”
Most of the article, actually, is blaming the public (and designers specifically) for the way they misinterpreted and criticized the logo. That reaction, however, is at least in part due to the context in which they experienced the logo. It’s near impossible, for instance, to walk away from that introductory video (produced by the University itself) without believing that the logo is replacing the seal. Design, I’d posit, is about far more than the logo or even the system: it’s the story that exists around the brand as a whole, and the designer is, at least in part, responsible for how that story is told. I agree with part of what’s written above: Design is a tough discipline because everyone has an opinion. But that’s not really new and it’s been lamented to death. People know what fonts are and many have heard of kerning or played with Photoshop. This is just the reality we live in. We can choose to ignore that reality and think we can put things out in the world without hearing from many people who are “unqualified” to have opinions, or we can acknowledge it and try to spend as much time thinking about the context people first experience new identities in as we spend on the identities themselves. It’s not a simple solution, but it’s a whole lot more sustainable.
Finally, we need to recognize that in this new world we all live in, where everyone has an opinion about everything (let’s not pretend that design is the only victim of this reality), it’s going to be harder than ever to stand behind convictions. On the one hand this can mean “a climate that does not encourage visual or aesthetic exploration, play or inventiveness,” as the UC Creative Director says, or it can mean that we need to do more to educate everyone involved in the decision-making process about what’s to come. We need to help them understand the design process, the effect of context, and the potential for backlash (with our plan on how to deal with it).
Or we can do boring stuff.
Though I didn’t quote it anywhere here, a lot of my thinking in this piece was shaped by the very even coverage on this issue from 99% Invisible, which I would highly recommend listening to.
Last year I listed out my five favorite pieces of longform writing and it seemed to go over pretty well, so I figured I’d do the same again this year. It was harder to compile the list this year, as my reading took me outside just Instapaper (especially to the fantastic Longform app for iPad), but I’ve done my best to pull these together based on what I most enjoyed/found most interesting/struck me the most.
One additional note before I start my list: To make this process slightly more simple next year I’ve decided to start a Twitter feed that pulls from my Instapaper and Readability favorites. You can find it at @HeyItsInstafavs. Okay, onto the list.
- The Yankee Comandante (New Yorker): Last year David Grann took my top spot with A Murder Foretold and this year he again takes it with an incredible piece on William Morgan, an American soldier in the Cuban revolution. The article was impressive enough that George Clooney bought up the rights and is apparently planning to direct a film about the story. The thing about David Grann is that beyond being an incredible reporter and storyteller, he’s also just an amazing writer. I’m not really a reader who sits there and examines sentences, I read for story and ideas. But a few sentences, and even paragraphs, in this piece made me take notice. While we’re on David Grann, I also read his excellent book of essays this year (most of which come from the New Yorker), The Devil & Sherlock Holmes. He is, without a doubt, my favorite non-fiction writer working right now.
- Raise the Crime Rate (n+1): This article couldn’t be more different from the first. Rather than narrative non-fiction, this is an interesting, and well-presented, argument for abolishing the prison system. The basic thesis of the piece is that we’ve made a terrible ethical decision in the US to offload crime from our cities to our prisons, where we let people get raped and stabbed with little-to-no recourse. The solution presented is to abolish the prison system (while also increasing capital punishment). Rare is the article you don’t necessarily agree with but walk away from talking and thinking about. That’s why this piece made my list. I read it again last week and still don’t know where I stand, but I know it’s worthy of reading and thinking about. (While I was trying to get through my Instapaper backlog I also came across this Atul Gawande piece from 2009 on solitary confinement and its effects on humans.)
- Open Your Mouth & You’re Dead (Outside): A look at the totally insane “sport” of freediving, where athletes swim hundreds of feet underwater on a single breath (and often come back to the surface passed out). This is scary and crazy and exciting and that’s reason enough to read something, right?
- Jerry Seinfeld Intends to Die Standing Up (New York Times): I’ve been meaning to write about this but haven’t had a chance yet. Last year HBO had this amazing special called Talking Funny in which Ricky Gervais, Chris Rock, Louis CK and Jerry Seinfeld sit around and chat about what it’s like to be the four funniest men in the world. The format was amazing: Take the four people who are at the top of their profession and see what happens. But what was especially interesting, to me at least, was the deference the other three showed to Seinfeld. I knew he was accomplished, but I didn’t know that he commanded the sort of respect amongst his peers that he does. Well, this Times article expands on that special and explains what makes Seinfeld such a unique comedian and such a careful crafter of jokes. (For more Seinfeld stuff make sure to check out his new online video series, Comedians in Cars Getting Coffee, which is just that.)
- The Malice at the Palace (Grantland): I would say as a publication Grantland outperformed just about every other site on the web this year, and so this pick is part acknowledgement of that and part praise for a pretty amazing piece of reporting (I guess you could call an oral history that, right?). Anyway, this particular oral history is about the giant brawl that broke out in Detroit at a Pacers v. Pistons game and spilled into a fight between the Pacers and the Detroit fans. It was an ugly mark for basketball and an incredibly memorable (and insane) TV event. As a sort of aside on this, I’ve been casually reading Bill Simmons’ Book of Basketball and in it he obviously talks about this game/fight. In fact, he calls it one of his six biggest TV moments, which he judges using the following criteria: “How you know an event qualifies: Will you always remember where you watched it? (Check.) Did you know history was being made? (Check.) Would you have fought anyone who tried to change the channel? (Check.) Did your head start to ache after a while? (Check.) Did your stomach feel funny? (Check.) Did you end up watching about four hours too long? (Check.) Were there a few ‘can you believe this’–type phone calls along the way? (Check.) Did you say ‘I can’t believe this’ at least fifty times?” I agree with that.
And, like last year, there are a few that were great but didn’t make the cut. Here’s two more:
- Snow Fall (New York Times): Everyone is going crazy about this because of the crazy multimedia experience that went along with it, but I actually bought the Kindle single and read it in plain old black and white and it was still pretty amazing. Also, John Branch deserves to be on this list because he wrote something that would have made my list last year had it not come out in December: Punched Out is the amazing and sad story of Derek Boogaard and what it’s like to be a hockey enforcer.
- Marathon Man (New Yorker): A very odd, but intriguing, “expose” on a dentist who liked to chat at marathons.
That’s it. I’ve made a Readlist with these seven selections which makes it easy to send them all to your Kindle or Readability. Good reading.
Before I left for my trip to Asia I went to see Zero Dark Thirty, the movie about the hunt for, and ultimately killing of, Osama Bin Laden. Before, and after, seeing it I had read quite a bit about the raid, the movie and the controversy around both. I thought maybe it would be worth collecting all this stuff into a post, so that’s what I’m doing.
First, on the movie itself. A lot of people really like it (the most interesting point Denby makes in this podcast is the idea that this and Lincoln spell the end of auteur theory as they show the power of the writer/director combo). I thought it was pretty okay. In reading around, I think Roger Ebert sums up my opinions best in his review of the film:
My guess is that much of the fascination with this film is inspired by the unveiling of facts, unclearly seen. There isn’t a whole lot of plot — basically, just that Maya thinks she is right, and she is. The back story is that Bigelow has become a modern-day directorial heroine, which may be why this film is winning even more praise than her masterful Oscar-winner “The Hurt Locker.” That was a film firmly founded on plot, character and actors whose personalities and motivations became well-known to the audience. Its performances are razor-sharp and detailed, the acting restrained, the timing perfect.
In comparison, “Zero Dark Thirty” is a slam-bang action picture, depending on Maya’s inspiration. One problem may be that Maya turns out to be correct, with a long, steady build-up depriving the climax of much of its impact and providing mostly irony. Do we want to know more about Osama bin Laden and al Qaida and the history and political grievances behind them? Yes, but that’s not how things turned out. Sorry, but there you have it.
One thing that I found particularly interesting in the film was the very short sequence on the doctor who had gone around Abbottabad under the cover of a vaccination campaign while actually collecting DNA. I remembered reading about him in the original New Yorker account of the raid and thought it had made clear he had been successful in collecting DNA evidence (it turns out the article says he wasn’t, the same way it’s presented in the film). January’s GQ has a longer account of what happened to the doctor who helped the CIA and tries to get at whether he was successful in his mission. (The answers: He was tortured and imprisoned by the Pakistani government for assisting the Americans and, as to whether he got evidence, it’s still unclear.)
If you’re interested in more reading on the subject, No Easy Day, an account of the mission by a Navy SEAL, is a fast and interesting read. And although I haven’t read it, my friend Colin Nagy highly recommends The Triple Agent, which covers what happened at Khost, where a Jordanian triple agent beat CIA intelligence and security to bomb a military base and kill a sizable group of CIA operatives (there’s a scene in Zero Dark Thirty about it, though the film offers no real depth on what happened).
My sister sent me this link to the ten best Muppet Christmas moments and it was conspicuously missing my all-time favorite Muppet moment from A Muppet Family Christmas. All the Muppets turn up at Fozzie’s mom’s house for Christmas even though Doc (from Fraggle Rock) was renting it as a quiet escape. As Bert and Ernie come in this conversation happens between the three of them:
Ernie: Oh, hi there, we’re Ernie and Bert.
Doc: Well, hi there yourself, I’m Doc.
Bert: Oh, did you know that Doc starts with the letter D?
Doc: Why, yes.
Ernie: Yes! Yes, starts with the letter Y.
Ernie: And true starts with the letter T.
Doc: What is this?
Bert: Where we come from this is small talk.
That line gets me every time.
Just got back from a few days in London and there were two random thoughts I’ve wanted to share. Neither are new, but they popped into my head during this trip and I thought, “maybe I should blog about those,” so here we are.
Thing #1: We all know they drive on the left side of the road in the UK. That isn’t surprising anymore. What is surprising, to me at least, is every situation where pedestrian traffic is routed to the right. For instance, on all the escalators in the Tube, signs tell you to stand on the right and pass on the left. This is what we do in the US, which makes it seem very wrong in the UK. Also, when you walk the streets in New York it’s a fairly standard rule that foot traffic stays right. In the UK I feel like you constantly see people on both sides of the sidewalk walking in both directions. All of this makes me think that people naturally want to stay to the right (probably because most are right-handed). I have no idea whether this is true (I’m also not sure whether British folks will find this offensive, in which case I apologize). I just think you’ve got to pick one and stick to it. You wouldn’t find a random escalator or walkway in a high-traffic zone in the US with signs directing traffic to stay left.
Thing #2: One of the things I really like about London is how much ground-floor commercial space there is. In New York City the ground floor is almost entirely retail, and office work happens somewhere between the 2nd and 100th floors. I’m not sure why I like looking in at people working, but there’s something really interesting about walking past an office window during the day. It’s just not a view you really get in New York. (You could say this has something to do with the fact that we’re looking for a new office, so I’m especially keen to see how others deal with their space, but this has fascinated me since well before I started a company.)
Alright, that’s it. Two very random observations.
After last year’s NBA playoffs I got really into the NBA. I attribute it to two big things: First, the busier I am at work the more I want to just go home and veg out and the NBA makes it easy with things to watch every night and second, this season (and last year’s playoffs) is just good basketball.
Anyway, there’s a movement in the NBA (and every sports league at this point) around “advanced metrics”. It’s each league’s attempt to apply Moneyball principles to its sport. In basketball, a big part of the point of these types of metrics is to answer the question of what points are really worth. This is because the public gives an outsized amount of attention to guys who score a lot and not to how they actually get their scoring done (in other words, is someone who scores 30 points on 10-of-15 shooting better than someone who scores 40 points on 15-of-35 shooting?). (If you’re bored of this already you can drop off, I won’t be offended.)
A site I enjoy called The NBA Geek put together a nice primer on this question (and on the point of advanced metrics generally). The argument he makes is that each missed shot has a price, and we need to take that into account the same way we count the made ones. Whatever method of counting you use, you’ve got to accept that basic idea. He sums it up like this:
But one thing is clear, to me at least: just because a player has great talent and is clearly capable of creating easy scoring opportunities, this does not make their bad shots “valuable”. The simple fact is, Carmelo Anthony would be a more productive player if he simply stopped taking shit shots; so would Russell Westbrook. The idea that the bad shots that these players take create value for their team has no basis in evidence at all (nor is there any evidence that these players are reluctant shooters who are shooting so much because “someone has to take the shots”). You can choose to disagree with me on that, but it’s rather like disagreeing with me about evolution and creationism — as far as I’m concerned, prove it or move it.
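To make that arithmetic concrete, here’s a quick sketch in Python of the hypothetical stat lines from my example above (made-up numbers, not real players), pricing each shot attempt rather than just counting makes:

```python
def points_per_attempt(points, attempts):
    """Crudely price missed shots: every attempt spends a possession,
    so misses aren't free even if the total points look big."""
    return points / attempts

# Hypothetical box-score lines from the example above.
efficient_scorer = points_per_attempt(30, 15)  # 30 points on 15 shots
volume_scorer = points_per_attempt(40, 35)     # 40 points on 35 shots

# The 40-point night looks better in a box score, but per shot
# taken the 30-point scorer produced far more.
assert efficient_scorer > volume_scorer
print(round(efficient_scorer, 2), round(volume_scorer, 2))
```

Real advanced metrics (true shooting percentage and the like) also account for free throws and the three-point line, but the basic idea is the same: a miss has a cost, and efficiency is points divided by the possessions spent getting them.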
I know I post these every so often, but today we announced that we’ve raised a $9 million Series A. This is a big number and what it means most is that Percolate is very much hiring. We’re pretty much hiring across the board, but here’s a quick rundown of the current open positions on the site:
- Account Executive: This is the title we have for our more senior sellers. The job is about getting in front of Fortune 500 brands and helping them understand the value of Percolate.
- Engineer: We’re hiring for both Jr. & Sr. engineers (as well as frontend). We are a technology company first-and-foremost and hiring the best engineers is part of what we need to do to succeed.
- Designer: We have a top-notch design team here and really believe that the product is dependent on keeping that quality as high as possible.
We’re hiring for some other positions as well and you should check out the whole list, but those are some of the more pressing ones. If you know someone who would be awesome please send them our way.
I’m going to write about NASCAR and marketing; if you don’t care about either of those things, you can quit reading now. I’m writing this for three reasons:
- I’m a big NASCAR fan (it’s more than just going around in circles, I’m happy to explain it sometime)
- I spent some time working with a NASCAR team and learned a lot about how the business side of the sport operates
- The New York Times had a story yesterday about how the NBA can learn something from NASCAR as it thinks about adding sponsors to jerseys. The article almost entirely missed the real point of NASCAR sponsorships. (I can’t say I find this shocking, as NASCAR is hardly the number one beat for the Times.)
For some reason the article focused on how sponsors can affect the behavior of the athletes. This is sort of interesting, but pretty far from the real story of NASCAR sponsorships. While the business of NASCAR is struggling for a bunch of reasons (financial meltdown, arms race in technology raising the cost of fielding competitive teams, more competition than ever for ad dollars), what makes it work has not changed. When a brand buys into a NASCAR sponsorship (which goes for ~$20 million for a full season), they are buying two big things: Loyalty and activation opportunities.
Let’s start with loyalty. This is what the article really misses. When brands sponsor NASCAR, fans genuinely understand that the sponsor is responsible for putting the car on the track. The drivers get it, the teams get it and the fans get it. This is hugely different from slapping your logo on something (whether it’s soccer, where it’s displayed in giant form on the player’s belly, or basketball, where they seem to be thinking about some little sponsorship patch). Nobody in those sports thinks the sponsor is responsible for the team, in the same way no one will ever walk into a Brooklyn Nets game and say “thank you, Barclays, for making this possible.”
The numbers in NASCAR back this up. I used to have them handy, but the league and teams generally trot out a figure of 80%+ fan loyalty to their driver’s sponsor. If Jimmie Johnson is your guy, you go to Lowe’s, not Home Depot. That’s just how it works.
Okay, onto activation. Take a look at the official sponsors of NASCAR teams and you see a few different kinds of companies: Car-related companies (NAPA, Shell, Mobil 1), CPG (Budweiser, Mars, Miller Lite) and a lot of retail/franchise businesses (Burger King, Target, GEICO, Farmers Insurance, Home Depot, Lowes, Office Depot). The first set is obvious, the average NASCAR fan likes cars and car-related stuff. The second is about audience as NASCAR skews heavily male and sometimes guys are hard to reach. The last, though, is the most interesting to me.
What all these companies have in common is lots of employees (you could throw FedEx in this group too, and UPS was a long-time sponsor of the sport). One of the more interesting things about how brands actually utilize their sponsorship is that they run fully integrated programs, using a sponsorship to reach not just consumers but also employees. Target, Home Depot and Lowe’s have roughly 900,000 combined employees (365,000, 331,000 and 204,000 respectively). That’s a lot of people to keep happy. One of the ways they do it is to give them something to root for. It’s not shocking, or even all that interesting, it just sort of makes sense, and it means the investment is offset across a few different departments.
Anyway, I don’t have a real conclusion to all this, just felt like writing a little bit about what I know about NASCAR. Hopefully it was relatively interesting.
A few years ago I had a story written about me. The premise was a journalist went and did a bunch of research about me and then approached me with all she had collected to get my reaction. Unfortunately, the publication she wrote it in is now defunct and so she reposted it over at Forbes today with the following intro:
I wrote this magazine piece back in 2009 when I was first delving into privacy issues in the digital age. It was published in 2010 in the Assembly Journal. However, a Twitter user recently pointed out to me that the piece is no longer online… which is rather sad for a piece about online privacy. “Confessions of an Online Stalker” was the headline my editors chose. I would have named it “Confessions of a Digital Lurker.” Here it is in all of its dated glory.
At the time I actually wrote a response to her piece which was also published in the magazine, and thus is also now missing from the web. Since Kashmir, the author, has reposted her piece I thought it might also be a good idea to repost my response:
The last issue of the magazine featured a piece titled Confessions of an Online Stalker. Its author, Kashmir Hill, “stalked” me, collecting all the information publicly available on the web about my life and presenting me with my dossier over a cup of coffee in Soho. Included were some basic facts (age and address), interests (most-listened to songs and books on my Amazon wish-list) and the occasional tidbit that was unknown to me (the value of my parents’ house, for instance).
When I was asked to write a response, I wasn’t sure one was warranted. The article actually captures my reaction fairly well. I wasn’t all that surprised about any of the information the author dug up, as I could identify the source of almost all her data points. And while it certainly is a bit uncomfortable to see them (or hear them) together, given the motive of the exercise, it was not all that frightening. But there is a bit of context I’d like to add: it’s the sort of story that raw data doesn’t always tell.
I work and live on the web. I play with just about every new site I can get my hands on and post a fair amount of information that I don’t consider to be particularly personal about myself. I started a blog six years ago because I was writing for a magazine and found I had more to say than could fit in my 2,500-word monthly limit. I explored the medium and posted things that I now look back on and smack myself in the head over because of their asininity. But back then, as well as now, my job was to understand, or at least to have an opinion on, the state of digital media, on how and why people use the web.
But all of that sounds much more clinical than the reality of the situation. It’s been my opinion for some time that by putting things out into the world for public view, I’ve made my life more interesting (mostly by the friends that content has connected me to). In fact, I met my wife because of my blog. Let me explain.
On July 12, 2006 I wrote an entry asking if anyone from my blog world wanted to meet up in New York and have coffee. I got one response from a guy named Piers who ran (and still runs) a trend blog called PSFK. From there we developed an idea for a coffee meetup we decided to call likemind. About a month later, after holding two likeminds, a blogger in London named Russell Davies wrote a post praising the idea. In the comments to that post, a woman named Johanna mentioned that she was moving to New York City and was excited to go to likemind. Attached to her comment was her url, which I followed to an email address that I used to welcome her to the city and invite her to likemind. Three months later, when I was on the hunt for a new job, I mentioned it to Johanna, who had since moved north, attended a few likeminds and become a friend. She suggested that I come speak to the folks at the company she worked for: Naked Communications, a marketing strategy firm that was started in London. I went for it and two months later (it’s February, 2007 at this point) I announced I was joining the company as a strategist. I became friends with, and later started dating, Leila Fernandes, another strategist at the company. Two months ago we were married in Queens. Johanna helped us celebrate.
All of that is a long way of saying I see a lot of value in the sharing of information online. I am not in the camp that believes technology is pulling us apart, but rather that it offers us never-before-possible opportunities to come together and meet people you’d otherwise never have a chance to meet. I also don’t reside on the side that argues privacy is dead. While the author was able to collect a lot of information on me, there wasn’t much in there I hadn’t chosen to post myself with an understanding of the implications (not to mention the vast majority of it could have been collected in the pre-web days, albeit in a much more time-consuming manner).
One of my favorite digital thinkers, Danah Boyd, recently had this to say on the subject:
Privacy isn’t a technological binary that you turn off and on. Privacy is about having control of a situation. It’s about controlling what information flows where and adjusting measures of trust when things flow in unexpected ways. It’s about creating certainty so that we can act appropriately. People still care about privacy because they care about control. Sure, many teens repeatedly tell me “public by default, private when necessary” but this doesn’t suggest that privacy is declining; it suggests that publicity has value and, more importantly, that folks are very conscious about when something is private and want it to remain so. When the default is private, you have to think about making something public. When the default is public, you become very aware of privacy. And thus, I would suspect, people are more conscious of privacy now than ever. Because not everyone wants to share everything to everyone else all the time.
The control Boyd was referring to is probably slightly easier for me than most. When something happens like Facebook’s latest changes to their privacy settings, about thirty of the hundreds of blogs and other new sources I subscribe to write in-depth stories on the implications. Within hours of the changes I had been to the new settings page and tweaked everything to my liking, including deciding to keep certain information out of the public eye. I recognize this is not the norm, but it’s this kind of awareness that shapes my views on the sharing of information.
At the end of the day a breach of privacy requires some reasonable expectation that something would be kept private. Not only did I not have that expectation, but for much of the information I put on the web I hope for exactly the opposite.
For those wondering what I’ve been up to lately, here’s a talk I did at the Media Evolution Conference in Malmo, Sweden about the interest graph for brands.
[Editor’s Note: I try not to do these often, but since lots of you are from in and around the marketing industry I thought I’d post this job here as well.]
We’re hiring a brand strategist at Percolate (amongst other positions). The role isn’t to be a planner in the way you would be in an agency, but rather to take those same skills and help onboard clients, help them understand content opportunities/how to use Percolate best and help build out products that can help systematize parts of the brand’s content strategy. Basically we’re looking for someone who really understands how brands work, isn’t afraid to go in front of a client and present and has a mind for making products (which is essentially about looking at what you’re doing by hand and thinking about how to translate that into something that can be done repeatedly by computers).
This is a pretty good job for someone who has worked at an agency and wants to go try something different. I don’t want someone so senior that they’ve forgotten how to dig in and actually do work (not that there’s anything wrong with that, but we’ve all run into those folks and they’re not so helpful to have around). It’s a fulltime gig. I’d say the salary is mid-level, but it also includes equity (like all jobs at Percolate).
While I’m here and talking about jobs I should also mention that we’re looking to fill a few other positions as well, and if you recommend someone for any of these and they get hired I’ll buy you an iPad (this is a NoahBrier.com-only offer, so make sure you mention it):
- Backend developer: If you know someone who writes good code we want to talk to them. We do our stuff in Python, but if they’re awesome we’ll talk.
- Sales: We’re looking for people who can go in and help us tell the story of Percolate and really help us sell. We’re building an awesome team and a great culture around sales. I need to write a whole blog post about this, but watching the sales team build out their processes is a pretty amazing thing.
If any of this sounds like you (or someone you know) please hit me up either on my contact page or via firstname.lastname@example.org.
So apparently Jonah Lehrer plagiarized himself (or something like that). I’ve read a bit about it (not enough to have an opinion), but of course Felix Salmon has, and he takes the opportunity to dive into a comment from Josh Levin at Slate that Lehrer’s Frontal Cortex blog (one of my favorites) is to blame. The argument, essentially, is that if you’re “an idea man” like Lehrer, a blog places too much stress on content creation.
Felix, as is frequently the case, disagrees: “Lehrer shouldn’t shut down Frontal Cortex; he should simply change it to become a real blog. And if he does that, he’s likely to find that blogs in fact are wonderful tools for generating ideas, rather than being places where your precious store of ideas gets used up in record-quick time.” What’s more, he dives in on a few suggestions for what to do with the blog and in turn makes some really interesting comments about blogging generally. I especially like his first point:
Firstly, think of it as reading, rather than writing. Lehrer is a wide-ranging polymath: he is sent, and stumbles across, all manner of interesting things every day. Right now, I suspect, he files those things away somewhere and wonders whether one day he might be able to use them for another Big Idea piece. Make the blog the place where you file them away. Those posts can be much shorter than the things Lehrer’s writing right now: basically, just an excited “hey look at this”, with maybe a short description of why it’s interesting. It’s OK if the meat of what you’re blogging is elsewhere, rather than on your own blog. In fact, that’s kind of the whole point.
I always thought of this blog as a thing I use to think out loud. It doesn’t overwhelm me because it helps me think through ideas (and in turn create new ones).
This is a cross-post from the Percolate blog. I try not to do this too often, but when it seems like it will be worth sharing I’ll go for it. If it’s annoying let me know and I’ll stop.
We talk a lot around here about the idea that you must consume content to create content, and I wanted to share a little anecdote that I’ve been using in presentations lately.
When Twitter first launched the big joke was that it was a place where people shared what they had for breakfast. Twitter fought tooth and nail against this idea, trying to explain that the service was actually much more serious than that.
But it’s not.
And that’s not a bad thing.
The way I see it, Twitter is just a big platform of what we had for breakfast. Except it’s not food, it’s what we ate on the web. A large proportion of Tweets have a link in them and those links are to whatever that person consumed moments before. It might be a Huffington Post article for breakfast or a YouTube video for lunch, but it’s still just what we ate. We are turning consumption into production.
My friend Grant McCracken wrote about social as exhaust data a few years ago and I think that’s a really nice way to think about it. Essentially what we’re seeing is a digested view into the lives of people and (increasingly) brands. Their social footprint is just that: a footprint. It’s the thing they leave behind after they take a step.
Four years ago (wow, that’s insane), I launched Brand Tags with a short little blog post on this site. Here’s what I wrote at the time:
In lieu of actually writing something interesting (which I haven’t done in a while), I’ve decided to release a 70% done project. It’s called Brand Tags and the idea is simple: You tag brands with the first thing that comes to mind. The idea came to me as I was working on my Brand vs. Utility presentation a few months ago. The thinking went something like this: If brands exist as the sum of all thoughts in someone’s head, then if you ask a bunch of people what a brand is and make a tag cloud, you should have a pretty accurate look at what the brand represents.
What happened after was all a bit of a whirlwind. There are about 30 comments on that post and the experiment ended up getting a lot of press (including an NPR interview, which is still the coolest media moment I’ve ever had). It was exciting and amazing and taught me lots about building a product and how people think about brands.
Two years later things had died down pretty significantly, partly because my own interest waned and partly because I couldn’t keep the scale of responses high without a steady supply of press. At that time I was approached by Ari Jacoby, who was working on a new company called Solve Media, which asked consumers to type in a brand message instead of a bunch of squiggly letters in a CAPTCHA. Solve was interested in buying Brand Tags and was excited about offering up the tagging input across the web as part of its CAPTCHA program. I was excited to see my baby get a new life (and, obviously, to also get some money for what I had built).
We struck a deal and I became a shareholder in Solve. I also got some more confidence in my bank account, eventually leading me to make the leap to startup life and Percolate.
I say all that because Solve Media just announced the deal as well as their relaunch of the product today:
Enter Solve, which took some time to think about the best implementation of Brand Tags and then started building up the database of brand descriptions by rolling out this type of Captcha to 0.25% of its Captcha inventory — enough to generate tens of thousands of user-generated responses about a brand a day, Mr. Jacoby said. “We can get an unusually large sample size overnight,” he said. The premise of Brand Tags is that a consumer’s perception of a brand is in fact reality, and one that could help measure the effectiveness of brand advertising online.
I’m excited to see something I built continue to grow on its own. I’m also excited that they finally got around to building the number one most requested feature: Export a tag cloud as an image for a PowerPoint presentation.
An aside in the book I’m reading sparked a thought I figured might be worth sharing. First, the snippet:
From our e-mail providers to our mobile-phone carriers, most companies’ business models are too lucrative to risk by mishandling our personal information and angering the consumer. So it is safe to say that despite the many potential risks represented by the volumes of data available, our past is relatively well safeguarded.
Which reminded me a lot of the economic definition of brand. Here’s the entry on the meaning of brand from The Economist’s dictionary of terms:
Many economists regard brands as a good thing, however. A brand provides a guarantee of reliability and quality. Consumer trust is the basis of all brand values. So companies that own the brands have an immense incentive to work to retain that trust. Brands have value only where consumers have choice. The arrival of foreign brands, and the emergence of domestic brands, in former communist and other poorer countries points to an increase in competition from which consumers gain. Because a strong brand often requires expensive advertising and good marketing, it can raise both price and barriers to entry. But not to insuperable levels: brands fade as tastes change; if quality is not maintained, neither is the brand.
A brand is a promise: The more valuable it is, the less a company can afford it to be broken.
I wonder, though, whether that’s as true now as it was in earlier times. The example I’ve heard most for thinking of brands this way is not killing your customers. You pay more for a Pepsi than for some random house brand because you know it won’t be poisoned (you also know it will always taste the same). But something seems to be changing, especially with digital brands. Maybe it’s that there are more of them, or maybe we have far lower expectations, but I feel like large brands frequently have data breaches or do other terrible things and we forgive them in a way that doesn’t really jibe with the two paragraphs above.
If we don’t hold our brands responsible, the very meaning of brand changes. Part of it is that it’s easier to show outrage than it ever was, so when people get up in arms about Facebook’s latest privacy change I suspect the outrage isn’t real. Part of it may be the insanity of the news cycle: TJ Maxx loses millions of credit cards and it’s only a big deal for a day. But none of it explains how a bunch of banks that nearly sank the economy were able to bounce back (except, maybe, that regular brand laws don’t apply to oligopolies).
No matter what, something is different and it’s important that we understand what it means.
[Editor’s Note: Prepare for some jumping between thoughts here.]
First off, I’m trying to blog more, which you’ll be able to tell by the fact I’ve written a few things over the last few days. Whether this actually stays consistent only time will tell.
Second (and the actual point of this post), I want to connect a few pieces about video games I’ve run into. I don’t have answers, but I think it’s interesting. So here it goes.
Nicholas Carr linked to a scathing review of the effects of casual/social games and gamification by Rob Horning:
Gamification is awful for many reasons, not least in the way it seeks to transform us into atomized laboratory rats, reduce us to the sum total of our incentivized behaviors. But it also increases the pressure to make all game playing occur within spaces subject to capture; it seeks to supply the incentives to make games not about relaxation and escape and social connection but about data generation. The networked mediation of games — in other words, playing them on your phone or through Facebook — undermines the function of games in organizing face-to-face social time, guaranteeing presence in an unobtrusive way. Instead we typically take our turn in mediated games on our time and play multiple games at once, to cater to our convenience and our desire to be winning at least one of them.
Which reminded me a lot of this article from late last year about Cow Clicker, a satire of games like Farmville which, against designer Ian Bogost’s hopes, actually became popular itself. Here’s how Cow Clicker worked:
The rules were simple to the point of absurdity: There was a picture of a cow, which players were allowed to click once every six hours. Each time they did, they received one point, called a click. Players could invite as many as eight friends to join their “pasture”; whenever anyone within the pasture clicked their cow, they all received a click. A leaderboard tracked the game’s most prodigious clickers. Players could purchase in-game currency, called mooney, which they could use to buy more cows or circumvent the time restriction. In true FarmVille fashion, whenever a player clicked a cow, an announcement—”I’m clicking a cow“—appeared on their Facebook newsfeed.
And what happened next:
And then something surprising happened: Cow Clicker caught fire. The inherent virality of the game mechanics Bogost had mimicked, combined with the publicity, helped spread it well beyond its initial audience of game-industry insiders. Bogost watched in surprise and with a bit of alarm as the number of players grew consistently, from 5,000 soon after launch to 20,000 a few weeks later and then to 50,000 by early September. And not all of those people appeared to be in on the joke. The game received its fair share of five-star and one-star reviews from players who, respectively, appreciated the gag or simply thought the game was stupid. But what was startling was the occasional middling review from someone who treated Cow Clicker not as an acid commentary but as just another social game. “OK, not great though,” one earnest example read.
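The mechanics quoted above are simple enough to sketch in a few lines of Python, which is part of what made the satire land. This is a toy model of my own (the class and method names are inventions, and timing is simplified), not anything from Bogost’s actual code:

```python
SIX_HOURS = 6 * 60 * 60  # cooldown between clicks, per the rules above


class CowClicker:
    """Toy model of the Cow Clicker rules: one click per six hours,
    one point ("click") per click, and pasture-mates share every click."""

    def __init__(self):
        self.clicks = 0       # points earned so far
        self.pasture = []     # up to 8 friends (other CowClicker players)
        self.mooney = 0       # purchasable in-game currency
        self.last_click = None

    def click(self, now):
        """Click the cow at time `now` (seconds). Returns False on cooldown."""
        if self.last_click is not None and now - self.last_click < SIX_HOURS:
            return False
        self.last_click = now
        self.clicks += 1
        for friend in self.pasture:
            friend.clicks += 1  # everyone in the pasture gets a click too
        return True

    def spend_mooney_to_skip_cooldown(self):
        """Spend mooney to circumvent the time restriction."""
        if self.mooney > 0:
            self.mooney -= 1
            self.last_click = None
```

That the entire game fits in a class this small is exactly Bogost’s point: strip the art off a social game and what’s left is a cooldown timer, a counter, and a way to pay to skip the timer.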
Which brings me to this snippet from a pretty good Atlantic profile of video game designer Jonathan Blow:
As a developer whose independent success has emancipated him from the grip of the monolithic game corporations, Blow makes a habit of lobbing rhetorical hand grenades at the industry. He has famously branded so-called social games like FarmVille “evil” because their whole raison d’être is to maximize corporate profits by getting players to check in obsessively and buy useless in-game items. (In one talk, Blow managed to compare FarmVille’s developers to muggers, alcoholic-enablers, Bernie Madoff, and brain-colonizing ant parasites.) Once, during an online discussion about the virtues of short game-playing experiences, Blow wrote, “Gamers seem to praise games for being addicting, but doesn’t that feel a bit like Stockholm syndrome?” His entire public demeanor forms a challenge to the genre’s intellectual laziness.
Now I’m not sure how I feel about any of this, really. I’ve found myself trapped by games, unable to put down the controller until my hands were so sore I was worried about doing permanent damage. I’m not proud of the fact that I was totally obsessed with Ski Safari (I’ve almost broken the habit). I think it’s good that there’s another side to the endless “games are great” conversation (other than the side that says the people who talk about gamification are dumb). Not sure I have more of an answer than that at the moment.
One more thing from the article about Blow before I’m done. I particularly liked this explanation of how video games relate to movies. We frequently talk about how, when a new medium is created, the first thing people try to do is recreate the old medium. It’s logical, and while the examples people trot out are familiar (the first TV broadcasts were just radio performed in front of a camera), the idea is never really well explained. I thought this was pretty good:
Blow’s refusal to explain the meaning of his games, after all, stems from a profound respect for his art. Ever since modern technology first made sophisticated video games possible, developers have assumed that the artistic fate of the video game is to become “film with interactivity”—game-play interwoven with scenes based on the vernacular of movies. And not just any movies. “The de facto reference for a video game is a shitty action movie,” Blow said during a conversation in Chris Hecker’s dining room one sunny afternoon. “You’re not trying to make a game like Citizen Kane; you’re trying to make Bad Boys 2.” But questions of movie taste notwithstanding, the notion that gaming would even attempt to ape film troubles Blow. As Hecker explained it: “Look, film didn’t get to be film by trying to be theater. First, they had to figure out the things they could do that theater couldn’t, like moving the camera around and editing out of sequence—and only then did film come into its own.” This was why Citizen Kane did so much to put filmmaking on the map: not simply because it was well made, but because it provided a rich experience that no other medium before it could have provided.
I’ll leave you with that. Lots of thoughts about video games. No answers.
Yesterday I wrote about David Grann’s amazing New Yorker essay on William Morgan, an American revolutionary in Cuba. While I was reading I remember thinking to myself, “that’s a great sentence, I should blog that,” but then I couldn’t find it again when I finished (I should have just underlined it in the magazine). Anyway, it came back to me last night and I wrote myself a note (only to not be able to find that … can’t figure out which of my three different self-organization systems I sent it to). On rummaging around I just found it again. (Italics are mine to denote the sentence I’m particularly fond of.)
Hoover and his men tried to detect a hidden design in the data they were collecting. They were witnessing history without the clarity of hindsight or narrative, and it was like peering through a windshield lashed with rain. As Hoover confronted the gaps in his knowledge, he became more and more obsessed with Morgan. A former fire-eater at the circus! Hoover hounded his evidence men to “expedite” their inquiries, homing in on Morgan’s ties to Dominick Bartone. The mobster, whom the bureau classified as “armed and dangerous,” had recently been arrested with his associates at Miami International Airport, where they had been caught loading a plane with thousands of pounds of weapons—a shipment apparently destined for mercenaries and Cuban exiles being trained in the Dominican Republic.
Also, since writing yesterday’s post I was informed that David Grann also wrote the amazing Guatemala murder article from the New Yorker last year (which I included in my top longform of 2011) as well as one of the most fun books I’ve read in a long time: The Lost City of Z. He also has a new book out called The Devil and Sherlock Holmes, and my fine friends over at Longform.org have compiled a reading list of his finest writing. Awesome awesome awesome.
Was poking around my Kindle highlights (looking to see if there was a way to export them easily) and I ran across a quote from Zlatan Ibrahimovic’s biography “I Am Zlatan”. I was going to post that and then I thought, maybe I should just post lots of sports stuff in one big post, so that’s what I’m doing. No rhyme or reason here, just some interesting sports-related stuff I’ve run into lately.
First the quote from Zlatan on a player’s relationship with their team:
The management owned my flesh and bones, in a sense. A footballer at my level is a bit like an orange. The club squeezes it until there’s no juice left, and then it’s time to sell the guy on. That might sound harsh, but that’s how it is. It’s part of the game. We’re owned by the club, and we’re not there to improve our health; we’re there to win, and sometimes even the doctors don’t know where they stand. Should they view the players as patients or as products in the team? After all, they’re not working in a general hospital, they’re part of the team. And then you’ve got yourself. You can speak up. You can even scream, this isn’t working. I’m in too much pain. Nobody knows your body better than you yourself.
Everything from Grantland has been amazing lately. I think it’s the best site going on the web right now. It houses my favorite sportswriter, Brian Phillips (if you haven’t read it, I can’t recommend his ~100-part series on his Football Manager escapades highly enough), everything else is generally excellent, and I recently read the funniest thing I’ve read in a while there. Here’s Bill Simmons on Dexter Pittman’s flagrant foul at the end of Miami/Indiana game 5 (here’s the video in case you missed it):
Dexter: “Yeah, that!”
LeBron: “I saw it, thanks for that. You’re probably getting suspended, though.”
Dexter: “Yeah, but he’ll never give you the choke sign again, that’s for sure! I SHOWED HIM!”
LeBron: “You sure did, Darius.”
LeBron: “I mean Dexter.”
Dexter: “If you want, I could try to run him over in the parking lot as he’s walking to the Pacers’ bus.”
LeBron: “No, I think we’re cool.”
Dexter: “You want to grab something to eat?”
LeBron: “I can’t, I made plans.”
Dexter: “Want to play video games sometime?”
LeBron: “I don’t really play video games anymore.”
Dexter: “Well, if you ever want to hang, lemme know.”
LeBron: “Sure thing, Darius.”
In other NBA-related reading, Wages of Wins, which tries to put some science behind the ranking of players, has been excellent throughout the playoffs. Here’s how they explained LeBron’s play, in case you were curious:
A superstar gives your team a five point edge being on the court. With this scale in hand let’s point something out. LeBron James has played 10 playoff games so far this season. In 4 of them, he’s put up a PoP of +10!
LeBron is playing twice as well as a superstar in the playoffs. That’s mind-boggling. Oh, and before I finish the basketball section, the New Yorker wrote a little about former Knick Latrell Sprewell.
On to soccer: I put this on Tumblr earlier, but Michael Bradley’s goal against Scotland was magical. If you missed the insane last day of Premier League soccer in the UK, I highly recommend reading 200 Percent’s recap.
And since I’m writing about sports, if you’ve never read it, go back and read David Foster Wallace’s “profile” of Roger Federer from 2006. It’s magic.
That’s all, have a good Memorial Day.
I haven’t written a ton about starting Percolate, partly because I don’t want this to become a place where I just promote what I’m up to and partly because I’ve been so busy I haven’t had a lot of time to write (as I’m guessing you’ve noticed).
Well, now I’m on a train and I forgot my Verizon card at my last meeting and I decided it would be a good chance to get some things down. These are a bunch of random thoughts, as much for my own safekeeping as sharing.
Before I start, a bit of an update on Percolate: We have 15 people, our own office and a healthy roster of Fortune 500 clients. James (my co-founder) and I started the company last January (2011). Alright, onto the thoughts …
One of the funny things about starting a company (and growing it) is the milestones you set for yourself (or discover as you go). There’s the obvious ones (first employee, first client, first check in the bank), but then there’s the less obvious ones like first office (alright, maybe that’s an obvious one) and first employee who relocated to come work for you (we passed that one recently). Every time we hit one of these it’s a moment to reflect and think about how crazy the whole process of starting a company really is.
I’ve written this before, but it bears repeating: I can’t imagine EVER starting a company without a co-founder, and I can’t recommend having one highly enough to anyone thinking about being an entrepreneur. As far as choosing your co-founder, I think there are a bunch of factors that have led to a really strong relationship between James and myself, including: a lot of respect for each other, clear roles (but also enough respect that when we move outside those roles it’s accepted), and an ability to disagree and be stronger for it (I wrote a short post about this, but I think it’s hugely important: if you can’t argue productively with your co-founder, you shouldn’t start a company with them). There are lots of others, but those top my list.
There is a fundamental difference between being a person running a company and being an employee. As the one in charge, your singular goal is to keep the company evolving (at least that’s true of a technology startup). Stasis equals death. You want your company to look totally different tomorrow than it does today. If you’re an employee, you often want the opposite: you like where you came to work and you want that company to stay the same. I’m not sure how to resolve this disconnect, and I never recognized it until starting Percolate.
Recruiting, Marketing & Press
All three of these happen all the time. They don’t ever stop and we’re going to make sure they remain that way even when the team performing these roles moves past just James and myself.
A Little Disagreement Is a Good Thing
Teams shouldn’t always agree about everything. Having different perspectives is ultimately what’s going to force things to be stronger. Understanding the roles different folks on the team play (and helping them understand those roles) is really important.
I never did a whole lot of managing before I got to Percolate. I thought it was pretty fine to let people do their job and support them when they needed it. James introduced a bunch of ideas to me around being more active and it’s a strategy we’ve been trying to live as much as possible at Percolate. We set quarterly goals with each employee and meet at the end of the three months to grade them together. We have weekly meetings and do monthly surveys of employee satisfaction. None of this stuff is perfect and hopefully it will all evolve (especially as we continue to grow), but it has really helped me understand the value of a more active management approach.
I’m sure there’s lots more, but that’s what’s coming to mind right now. Hope this is somewhat helpful/interesting.
This post is the intersection of a few different things I’ve been thinking about lately. First is Percolate. Part of the process of introducing the company to new people is frequently recounting the story of where the product came from. James and I have probably sent each other a thousand different articles back and forth, and I asked him recently for his list of top articles that really inspired his thinking in the space. The second thing is Robin Sloan’s Fish, which is all about the difference between liking and loving content. It made me think about the list of the content- and marketing-related articles I’ve read that I come back to frequently. This is that list. Some of these are newer and may not stand the test of time, but most of them are things I’ve come back to (at least in conversation) about once a month since I read them (they are distributed over the last 10 years).
Without any further ado, here’s my list:
Stock & Flow
Not specifically about marketing, but it’s all about content. Stock and flow is how we’ve taken to thinking about content at Percolate and this is really where that idea came from. I’ve written a few things inspired by the idea and use it frequently to explain how brands should think about content (and why Percolate exists).
Many Lightweight Interactions
This is the most recent article of the bunch and comes by way of Paul Adams, who works in the product team at Facebook. It was a really nice way to explain a lot of the stuff I’ve been thinking and talking about with clients over the last five years. Specifically it talks about how the web (and specifically social) offer brands an opportunity to move from a world of few heavyweight interactions (stock in Robin’s parlance) to many lightweight interactions (flow). The one thing I’d add is that I think the real opportunity is to take the many lightweight interactions and use them to understand what works and inform the occasional heavyweight interactions brands need to succeed.
Who’s the Boss?
This was written by a friend of mine 10 years ago. It’s short, but the core point is that brands live in people’s heads. This was what inspired Brand Tags and has colored lots of my thinking about how brands behave.
Why Gawker is Moving Beyond the Blog
Not specifically about marketing, but Denton’s explanation of why he’s moved from the classic blog format is a great explanation of how content works on the web.
How Social Networks Work
Another slightly older one, this was the first time I had read someone talk about the idea of social as exhaust data (basically our digital breadcrumbs), which seemed like a really good way to think about it (and helped explain why brands struggled). Lately I’ve been using this to help explain why brands struggle in social: exhaust data is a very human thing. You need to consume in order to create this trail, and most brands don’t do that.
How Owned Media Changed the Game
From Ted McConnel who used to be head of digital at P&G. I really liked this quote: “Recently, in a room full of advertising brain trustees, one executive said, ‘The ‘new creative’ might be an ecosystem of content.’ Brilliant. The brand lives in the connections, the juxtapositions, the inferences, the feeling of reciprocity.” This was one of those articles that really wrapped up a bunch of stuff I had been thinking about. It’s nice when that happens.
That’s it for me. What would you add? What am I forgetting?
This is a cross-post from the Percolate Blog. I thought you all might enjoy reading it here as well.
Let me get something out of the way before we get started: In case you haven’t heard, Facebook is going to IPO this week.
Okay, seriously, all this IPO talk has driven people to dive into Facebook’s business model and lots of folks are coming up with doubts. As Peter Kafka points out, even Facebook has its doubts, mentioning as much in their IPO filing: “We believe that most advertisers are still learning and experimenting with the best ways to leverage Facebook to create more social and valuable ads.”
But what does that mean really? And what’s the opportunity? And, most importantly in many people’s eyes, does Facebook really have the opportunity to be a bigger company than Google?
While I don’t know the precise answers to those questions, I do have lots of opinions and since it happens to be Internet Week in NYC, I’ve been having these conversations a lot (mostly on panels). The bulk of the argument against Facebook revolves around their lack of “intent” data. This, of course, is what Google has in bulk and is the reason they are a multi-billion dollar business. Being able to target people at specific points in the purchase process changes the way marketing works. It allows advertisers to do something that was all but impossible (you could buy in-store and outdoor around stores, but that’s a whole lot less efficient). This is an amazing thing for marketers and Google’s market cap reflects it.
But if you ask most advertisers why they spend millions (and sometimes billions) on traditional ads, it’s not to harvest people who intend to buy, it’s to create demand: growing a business requires constantly bringing in new customers. However it makes you feel, most ads exist to remind you that you need something new. That shoe company with the billboard isn’t trying to get you to buy their shoes over a competitor’s; they’re trying to remind you that you need new shoes and, they hope, when you walk into the store you’ll spring for their brand.
That’s where brands spend real dollars. When startups show off “the chart” (you know, the one with the gap on time spent versus ad spend), they are looking at the effect of digital platforms not having a good answer to intent creation.
That, I believe, is where the opportunity for social is. We’re not there yet, but the promise is that you can use your understanding of a user’s interests to present them with messages that let them know about things they want before they want them. If Facebook figures this out it will be a bigger company than Google.
So how does content fit in?
Using the traditional purchase funnel, I think you still have a gap between awareness and intent. Once someone knows about your brand or product, how do you create need? One really good way of doing that is to remind them you exist (a large portion of CPG ad spend is used for just this). The way to remind people you exist is to create content they’ll see. To create content they’ll see on Facebook you need to a) be engaging enough that it builds organic activity and pushes beyond the base distribution you get through EdgeRank or b) buy Reach Generator. The two big goals (awareness and intent creation) have paid actions associated with them in Facebook, Twitter and Tumblr. If these companies continue to build on these ideas and find better ways to target users based on their interests they will be solving a real problem for advertisers, something that hasn’t really been done on the web since paid search in the early 2000s.
Of course, there are lots of ifs here. The products are not quite there yet (targeting, for instance, is still largely based on social connections instead of interest connections), but I think these platforms will get there and I think they’ll succeed.
All week I’ve been meaning to weigh in on the curation debate (David Carr, Matt Langer, Marco Arment, Matthew Ingram), but I’ve been busy and Percolate released its own take on the subject in the form of a video with some of our favorite web curators.
Okay, let me start at the top: Semantics. Matt Langer rightly points out the word curation is not being used correctly:
First, let’s just get clear on the terminology here: “Curation” is an act performed by people with PhDs in art history; the business in which we’re all engaged when we’re tossing links around on the internet is simple “sharing.” And some of us are very good at that! (At least if we accept “very good” to mean “has a large audience.”)
Early last year I agreed. But then I realized how boring and unproductive most semantic arguments were. Or as Maria Popova said last June:
Like any appropriated buzzword, the term “curation” has become nearly vacant of meaning. But, until we come up with a better one, it remains the semantic placeholder that best captures the central paradigm of Twitter as a conduit of discovery and direction for what is meaningful, interesting and relevant in the world.
I loved the idea of a semantic placeholder then, and I still do. If you’re going to wade into the semantic debate you need a better answer and editor isn’t it. For better or worse we are using curator to mean something different than it used to mean and, at least for now, that seems fine. As long as we all know what we’re talking about (the selection of internet things) then the word seems okay, let’s not hide behind the definition.
And before I continue, one more thing: For what it’s worth I define curation as people choosing things and aggregation as computers choosing things.
Great. Now back to the more important stuff. A lot of this conversation was kicked off by the Curator’s Code, which aims to encourage people to share the source of their information with some special symbols. Lots of folks, including Marco from Instapaper, jumped on the idea as stupid and unsustainable, and maybe it is. I think everyone involved would agree it’s not the perfect solution to the problem, but I do think it opened up an important conversation (I wasn’t involved, but I know the folks who are). How we credit one another on the web is an issue we’ve been working on forever and, as a few of the blog posts on the topic point out, the good news is that the hyperlink is the most efficient tool we have:
And we already have a tool for providing credit to the original source: It’s called the hyperlink. Plenty of people don’t use the hyperlink as much as they should (including mainstream media sources such as the New York Times, although Executive Editor Jill Abramson said at SXSW that this is going to change) while others misuse and abuse them. But used properly, they serve the purpose of providing credit quite well. How to use them properly, of course — especially for journalistic purposes — is another whole can of worms, as Felix Salmon of Reuters and others have noted. And when it comes to curation and aggregation, it seems as though curation is what people call it when they like it, and aggregation is what they call it when they don’t.
But it’s not quite good enough, and this is where I start to take issue with a few different things a few different people said. What I just did there is use a hyperlink to credit something I didn’t write. Except you probably didn’t mouse over the hyperlink, and because the credit lived in there, I never had to write that Matthew Ingram from GigaOm was responsible for those sentences. While I think it’s important to credit sources of information, I think the bigger thing to think about is how we’re crediting the original sources of content.
Which is why I took the most issue with Marco’s stance. Not because I disagreed with him (“The proper place for ethics and codes is in ensuring that a reasonable number of people go to the source instead of just reading your rehash.”), but because Instapaper represents one of the current dangers in lack of credit. While it doesn’t relate exactly to the question the Curator’s Code is addressing, it is part of the broader conversation we should be having: Who is getting credit when you consume a great piece of content?
After a long argument with Thierry Blancpain on Twitter I finally came to the question which seems to sit at the heart of the matter to me: Who gets credit when you read something awesome in Instapaper? Does it go to the publisher of the content or does it go to Instapaper? I know for myself (and the informal poll of friends I asked the question to), the answer is the latter. I don’t know the source of most of the content I consume in Instapaper. Sure, I put it there when I hit the button, but when I consume it the source is entirely stripped away. I was talking to the publisher of a major magazine this week about the issue, and the question I asked is, “if you’re losing the advertising and the branding, is there any purpose to letting your content live there?”
This isn’t to point the finger solely at Instapaper, I think this is true of almost all the platforms on the web. If all the incentive is towards sharing and all the credit goes to sharers, what will happen to creation? (I don’t really think it will go away, but I do think it creates a dangerous precedent.) One of the things I think is great about the Longform iPad app is that it connects me with the publishers of content. One day when they offer subscriptions (which I assume they will) I’d happily pay to keep getting my 3,000 word Grantland stories as I now know the true value (and I never forget it, because the publisher is always right next to the content). (Admittedly, the curators on the app pose a more complicated issue.)
I think part of it is that publishers are going to have to start carrying more branding in the stories. I’m not sure what this means, but if you’re reading something from The Atlantic, say, maybe they remind you throughout that this is from The Atlantic. It’s not ideal, but again, I think if publishers aren’t getting advertising revenue or branding credit with their stories there is no reason for them to support their travels around the web. I also think metadata comes into play, and while I don’t know what the best answer is quite yet, I think it’s important to start encouraging the display of more information about original sources on stories (again, not sure what that looks like, but I’ve been turning it over in my head).
This whole issue is obviously something I’m thinking a lot about at Percolate. I believe brands should be the best behaved of the bunch. I also believe brands have a responsibility to be both curators and creators: To increase the pool of original quality content on the web. No one is to blame for all this stuff, but we are all responsible to make sure that it’s solved before it’s too late.
When Piers Fawkes and I started likemind a few years ago it was on a bit of a whim, but it came to represent something we both really believed in: Starting to take online relationships offline. likemind was a place where people from the internet could meet and share ideas over a cup of coffee. As crazy as it is, that was five years ago and obviously quite a bit has happened in the meantime.
Over the last year or so likemind has lapsed a bit. Like any community it requires tending and life got in the way. But looking back, there was something bigger: There was just less need for a shared space for meeting internet people. Not that it isn’t important, but rather that it feels like everywhere is now that place.
So Piers and I sat down and talked about what happens next. What does the next five years of likemind look like? Is there even a next five years of likemind?
To answer the second question first: Yes, we both believe in the power of likemind. While there are many places to meet folks, we need more that aren’t explicitly about networking and instead are just about the sharing of ideas between interesting people. The thing that’s amazing about likemind is its self-selection. While we never defined what it meant, we always got the right people, all around the world.
So with that decided, we talked about what we felt was missing, and the answer we landed on was intersections. While there are countless meetups and conferences around the world, there are too few that do everything they can to bring people from different places together for conversations. Specifically, the industries that we focus on – technology, creativity, media – seem to have diffused at exactly the time they should be coming together. We’re spending more and more time talking to the people who work on the things we work on at exactly the time we should be talking to everyone else.
So that’s what we want to make likemind about. Call it likemind: The sequel. The mission is to give people from these different places a space to share ideas. Let’s make it happen.
I’m doing this “virtual panel” about content over at the new FastCo Create site. The first round is up (I’ll update this post as it goes up) and here’s a quick excerpt from my answer about what brands need to know about content:
I’m not actually sure that creating editorial content is all that different than creating promotional content, at least on a high level. Advertising is a process of combining brand outputs (look, feel, voice) with cultural inputs (insights, trends, etc.) and creating a piece of communication. The shift I see taking place is that the traditional processes around creating content for a world of campaigns break down in a real-time content creation environment: Brands and agencies aren’t currently set up to consume culture as it happens, which is what media organizations do. I think this is a big shift we’ll start to see inside brands over the coming years. It’s not that they’ll try to model themselves on media organizations, but rather, they’re going to rearrange themselves around real-time consumption of content, data, analytics and anything else they can get their hands on to help make decisions and communicate better.
I’ve thought a lot about the topic of teaching code since I taught myself a few years ago. My thesis, which still stands, is that the best way to teach code to people who want to make things is not to teach them the details, but instead to help them make things. Some of the points in this article about how nobody wants to learn to program pretty much nail it:
But for the casually interested or schoolchildren with several activities competing for their attention, programming concepts like variables and loops and data types aren’t interesting in themselves. They don’t want to learn how to program just for the sake of programming. They don’t want to learn about algorithm complexity or implicit casting. They want to make Super Mario or Twitter or Angry Birds.
I’m also particularly fond of this bit on how new coders are plagiarists:
It’s okay if they don’t completely understand how a program works after they’ve played with it a little. Very few ideas are completely original. The more material you give your students to plagiarize, the wider the range of derivative works they’ll make from them.
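To put that in concrete terms, here’s a minimal sketch of what I mean, in Python (my choice of example and language, not the article’s): instead of a lesson on loops and conditionals, you hand the student a tiny working toy like a guess-the-number game and let them tweak it.

```python
def guessing_game(secret, guesses):
    """A tiny guess-the-number toy. A beginner can run it as-is, then
    change the hints, add a guess limit, or invent new rules -- picking
    up loops and conditionals as a side effect of making something."""
    for count, guess in enumerate(guesses, start=1):
        if guess < secret:
            print("higher!")
        elif guess > secret:
            print("lower!")
        else:
            return f"Got it in {count} tries!"
    return "Out of guesses."

# Scripted guesses so the example is reproducible:
print(guessing_game(42, [50, 25, 42]))
```

Every line in that toy is something to plagiarize: change a string and the game talks differently, change the comparison and the game breaks in an instructive way. That beats a worksheet on data types.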
Was just reading through some old Instapaper stuff and ran across this post celebrating what would have been Marshall McLuhan’s 100th birthday. It includes an excerpt of an interview McLuhan did with Playboy and this excellent explanation of McLuhan’s approach:
I’m not advocating anything; I’m merely probing and predicting trends. Even if I opposed them or thought them disastrous, I couldn’t stop them, so why waste my time lamenting? As Carlyle said of author Margaret Fuller after she remarked, “I accept the Universe”: “She’d better.” I see no possibility of a worldwide Luddite rebellion that will smash all machinery to bits, so we might as well sit back and see what is happening and what will happen to us in a cybernetic world. Resenting a new technology will not halt its progress.
I’ve written about this in the past, but I think one of the things that amazes me most about McLuhan was his ability to separate himself so well. The things he said that people struggle with most (like content doesn’t matter as much as the medium) they struggle with because it makes them uncomfortable as content creators (I’m speaking from personal experience here). Even reading what he said above makes me feel a bit uncomfortable as it feels like one shouldn’t just accept things … But who am I to say?