One of my favorite marketing stories to tell comes from early in my career, when I was working at an agency doing research for one of the big consumer electronics companies. Specifically, we were testing a new commercial another agency had put together. The commercial was “edgy” (it had snowboarders!) and got high marks from all the random consumers who got pulled into a room in the mall to watch it. That is, until they were asked the last question: “What brand was it for?” To which they all replied with the company’s biggest competitor. The moral is simple: after all that time and money, a commercial had effectively been made for another company. (One of the most well-known stories of this is the famous ad with a gorilla tossing around soft-sided luggage, which was for … American Tourister.)
Byron Sharp, the marketing scientist behind How Brands Grow, has made it pretty deep into the marketing world, particularly with consumer packaged goods companies like Procter & Gamble. Lots of them have taken his advice (and consulting hours) and applied it to how they approach building their brands, particularly media buying (MOAR REACH). But what I found especially interesting about the Tide Super Bowl takeover is that they took things one step further, finding a way to apply Sharp’s principles to both the media and creative execution.
The media part is simple: According to Sharp (and lots of research AND COMMON SENSE), big brands need to be bought by lots of people and, for that to happen, they need to reach lots of casual buyers who may or may not be in the market to buy them. For all the talk about the death of TV advertising (which is hugely overstated), the Super Bowl is a uniquely valuable media opportunity. Not only is the audience gigantic, but it’s also the only time and place viewers are actually excited to see ads.
On the creative side what Tide did was pretty obvious (they did explain it after all), but definitely not simple to pull off (imagine convincing a client you’re going to spend $16 million worth of airtime doing nothing original). They used the visual language of advertising, especially Super Bowl advertising, and found a way to link it all back to the brand. The value wasn’t really in the ad itself, but that you were watching every other ad looking for the tropes (and clean shirts of course). If the main goal of advertising is to create and own “brand assets”, Tide went above and beyond by reinforcing their own and finding a way to effectively hijack everyone else’s. What’s more, by splitting things up across the game in the way they did, they made it so you could never watch too many commercials without being reminded that they might be a Tide ad.
Outside of Tide, every other commercial felt pretty unremarkable to me (other than the Dodge/MLK thing, of course). That’s partially because it’s very hard to be unexpected when another company has already predicted your behavior, and partially because most of the themes brands are experimenting with are the same ones they were playing with last year. For all the talk about the speed of change, brands, especially the big ones, are moving slowly as they try to find a safe space in our ever-more polarized world. I suspect the transition will continue to take time.
Until then, we’ll almost definitely get more ads like the one from Toyota, which put a Jew, Christian, Muslim, and Buddhist in a car together with the tag line “We’re all one team.”
With that said, I’ve been doing plenty of reading and have compiled a pretty extensive list of favorite longform from 2017. To be clear on the format: I’ve broken the list down by a few big themes. My picks are based on my own preferences, meaning it’s not always the best piece of pure writing, but often the pieces that rattled around in my head the longest. I’ve tried to contextualize things as much as possible (hence the length). This is one of my favorite things to write every year and I hope you enjoy it (and let me know what articles I missed).
For those not ready to commit to my 7,000+ words of context, I’ve included all the picks at the bottom in chronological order (obviously I’d prefer you read all the way through).
Finally, in case you somehow get through all 50+ articles linked here and want more to read, here are my lists from 2005, 2006 (part 1 & part 2), 2011, 2012, 2015, and 2016. Not quite sure what happened in those missing years …
I assume for many of us the year started out the same way: Trying to figure out what happened on November 8, 2016. I read everything I could find to help me wrap my head around Trump and the state of America, I searched for new voices that had insight into what was happening, and I tried to come to my own conclusion about questions like Russia. I kept an Evernote note titled “Trump Theory” where I copied quotes and links to articles that I felt said something genuinely different, interesting, or useful for understanding the moment.
In that search there were a few voices that felt like they separated themselves from the pack: Masha Gessen writing for the New Yorker and The New York Review of Books, Maggie Haberman for The New York Times, as well as Adam Gopnik and Jelani Cobb at the New Yorker. Each offered lots of material for that Evernote note in the early days of 2017. Here’s a pick from each from those first three months (except for Haberman, who put together an insane article in December that was too good not to mention):
The question at the front of my mind was why I (and lots of others) didn’t see Trump coming, why I didn’t take him more seriously when I did, and, having missed it, how I should think about him and interpret his actions.
In the end my favorite Trump piece is probably “How Liberals Fell in Love with the West Wing” from Current Affairs Magazine. Partly because I like the article, and partly because Current Affairs represents a triumph in my search for new voices. At the beginning of the year I was madly searching for commentary on the left that wasn’t Pod Save America and its ilk (I just couldn’t do it after the collective October victory lap). My search yielded a bunch of stuff that was new to me but ultimately didn’t feel quite right for one reason or another. I started listening to Chapo Trap House (interesting, but way too bro-y), I tried Jacobin Magazine (good, but too socialist), and I got a subscription to n+1 (excellent, but too academic for a regular read). I also started reading Current Affairs, a magazine started in 2015 by Nathan Robinson, who was then a PhD student in Sociology and Social Policy at Harvard. Of everything new I discovered it felt most right to me: It was left of the left, but not so far left that I couldn’t see how the ideas could be implemented. It was also funny, which helped. I’m not sure the West Wing article is the best piece of writing I found in Current Affairs in 2017 (I’m guessing it’s not), but it felt like it perfectly nailed a feeling about the current state of liberalism, turned the focus from Russia and external influences to the left’s role, and ultimately burrowed an idea in my brain that I couldn’t extract. Here’s one of many bits that stuck:
It’s a smugness born of the view that politics is less a terrain of clashing values and interests than a perpetual pitting of the clever against the ignorant and obtuse. The clever wield facts and reason, while the foolish cling to effortlessly-exposed fictions and the braying prejudices of provincial rubes. In emphasizing intelligence over ideology, what follows is a fetishization of “elevated discourse” regardless of its actual outcomes or conclusions. The greatest political victories involve semantically dismantling an opponent’s argument or exposing its hypocrisy, usually by way of some grand rhetorical gesture. Categories like left and right become less significant, provided that the competing interlocutors are deemed respectably smart and practice the designated etiquette. The Discourse becomes a category of its own, to be protected and nourished by Serious People conversing respectfully while shutting down the stupid with heavy-handed moral sanctimony.
(For the record, I liked the West Wing, a lot. But I also can very much see how it helped to shape an idea about how politics works – or should work – that is far from the reality of what happens in Washington.)
Phew, glad we got Trump and politics (mostly) out of the way. I promise not every section of this is going to be that long. I’ve only got four articles in the tech category and none are about fake news, Uber, or Amazon. Before my pick, here are a few of my favorites that didn’t make the final cut:
Tim Harford had an excellent piece that ran in the Financial Times titled “What We Get Wrong About Technology” (July 8, 2017). Harford is one of my favorite writers and thinkers and he was covering one of my favorite subjects: How to understand the future of technology.
The stations—which didn’t so much scan as photograph books—had been custom-built by Google from the sheet metal up. Each one could digitize books at a rate of 1,000 pages per hour. The book would lie in a specially designed motorized cradle that would adjust to the spine, locking it in place. Above, there was an array of lights and at least $1,000 worth of optics, including four cameras, two pointed at each half of the book, and a range-finding LIDAR that overlaid a three-dimensional laser grid on the book’s surface to capture the curvature of the paper. The human operator would turn pages by hand—no machine could be as quick and gentle—and fire the cameras by pressing a foot pedal, as though playing at a strange piano.
In the end, though, my favorite article about technology was a profile of Claude Shannon by Rob Goodman and Jimmy Soni for Aeon (August 30, 2017). The piece dives into Shannon’s history and the invention of information theory, which still plays an incredibly important role in computing today. The article is well-told, covers a fascinating subject (both the person and his studies), and manages to explain very complex ideas simply (one of my favorite things). Here’s an excerpt:
Shannon’s ‘mathematical theory’ sets out two big ideas. The first is that information is probabilistic. We should begin by grasping that information is a measure of the uncertainty we overcome, Shannon said – which we might also call surprise. What determines this uncertainty is not just the size of the symbol vocabulary, as Nyquist and Hartley thought. It’s also about the odds that any given symbol will be chosen. Take the example of a coin-toss, the simplest thing Shannon could come up with as a ‘source’ of information. A fair coin carries two choices with equal odds; we could say that such a coin, or any ‘device with two stable positions’, stores one binary digit of information. Or, using an abbreviation suggested by one of Shannon’s co-workers, we could say that it stores one bit.
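The coin-toss example maps directly onto Shannon’s entropy formula, H = −Σ p·log₂(p), which measures information in bits. Here’s a quick sketch in Python (my own illustration, not from the article) showing that a fair coin stores exactly one bit, while a biased coin, being less surprising, stores less:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over the outcome odds."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin: two choices with equal odds -> exactly one bit of information.
print(entropy([0.5, 0.5]))  # 1.0

# A heavily biased coin is less surprising, so it carries less information.
print(entropy([0.9, 0.1]))  # ≈ 0.469
```

This is the core of Shannon’s point above: information depends not just on the size of the symbol vocabulary, but on the odds that any given symbol will be chosen.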
Imagine an artificial intelligence, [Bostrom] says, which decides to amass as many paperclips as possible. It devotes all its energy to acquiring paperclips, and to improving itself so that it can get paperclips in new ways, while resisting any attempt to divert it from this goal. Eventually it “starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities”. This apparently silly scenario is intended to make the serious point that AIs need not have human-like motives or psyches. They might be able to avoid some kinds of human error or bias while making other kinds of mistake, such as fixating on paperclips. And although their goals might seem innocuous to start with, they could prove dangerous if AIs were able to design their own successors and thus repeatedly improve themselves. Even a “fettered superintelligence”, running on an isolated computer, might persuade its human handlers to set it free. Advanced AI is not just another technology, Mr Bostrom argues, but poses an existential threat to humanity.
Universal Paperclips goes beyond imagining it and puts you in the driver’s seat as the paperclip maximizer. It’s amazing, and interesting, and teaches you a lot about how the world works (one tiny non-spoiler-spoiler: all companies eventually become finance companies). As John Brindle explains in his excellent writeup (December 6, 2017) of the game:
When we play a game like Universal Paperclips, we do become something like its AI protagonist. We make ourselves blind to most of the world so we can focus on one tiny corner of it. We take pleasure in exercising control, marshalling our resources towards maximum efficiency in the pursuit of one single goal. We appropriate whatever we can as fuel for that mission: food, energy, emotional resources, time. And we don’t always notice when our goal drifts away from what we really want.
All of this sounds pretty highfalutin for a game about paperclips, but somehow it makes sense when you play.
If there were three or four big stories in 2017, one of them was definitely opioids. There were a ton of worthwhile pieces on the subject this year, but two stood out for me: A story about the family behind OxyContin and another about how a community in West Virginia, one of the hardest hit states, is fighting to save the lives of those addicted.
Patrick Radden Keefe’s “The Family That Built an Empire of Pain,” (October 30, 2017) tells the anger-inducing story of the Sackler family and their knowing exploitation of people’s addiction to their company’s drug, OxyContin. See if this makes you sick:
Perhaps the most surprising aspect of Quinones’s investigation is the similarities he finds between the tactics of the unassuming, business-minded Mexican heroin peddlers, the so-called Xalisco boys, and the slick corporate sales force of Purdue. When the Xalisco boys arrived in a new town, they identified their market by seeking out the local methadone clinic. Purdue, using I.M.S. data, similarly targeted populations that were susceptible to its product. Mitchel Denham, the Kentucky lawyer, told me that Purdue pinpointed “communities where there is a lot of poverty and a lack of education and opportunity,” adding, “They were looking at numbers that showed these people have work-related injuries, they go to the doctor more often, they get treatment for pain.” The Xalisco boys offered potential customers free samples of their product. So did Purdue. When it first introduced OxyContin, the company created a program that encouraged doctors to issue coupons for a free initial prescription. By the time Purdue discontinued the program, four years later, thirty-four thousand coupons had been redeemed.
The other piece shows how deeply OxyContin, and the opioid addiction it brought with it, has devastated a place like West Virginia. In “The Addicts Next Door” (June 5, 2017), Margaret Talbot lays out the tragic tale of Berkeley County as a view into what’s happening throughout the rest of the state; “West Virginia has an overdose death rate of 41.5 per hundred thousand people. (New Hampshire has the second-highest rate: 34.3 per hundred thousand.) This year, for the sixth straight year, West Virginia’s indigent burial fund, which helps families who can’t afford a funeral pay for one, ran out of money.”
She describes the damage to West Virginia’s children – “One of the biggest collateral effects of the opioid crisis is the growing number of children being raised by people other than their parents, or being placed in foster care. In West Virginia, the number of children removed from parental care because of drug abuse rose from nine hundred and seventy in 2006 to two thousand one hundred and seventy-one in 2016.” – and how some doctors are teaching regular citizens to use Narcan, a drug that can immediately counteract the effects of an overdose: “[Dr.] Aldis taught his first class on administering Narcan on September 3, 2015, at the New Life Clinic. Nine days later, a woman who’d attended the class used Narcan to revive a pregnant woman who had overdosed at a motel where they were both staying. During the next few weeks, Aldis heard of five more lives saved by people who’d attended the class.”
I know there were many other portraits like this throughout the year, but this is the one I found most affecting.
Podcasts were a new addition to the list last year. Like many of you, I’m sure, I spend much of my commuting and dog-walking time listening to them. Much of my listening is pretty mindless sports stuff (I’m a big fan of tuning out to the NBA banality of Dunc’d on Basketball), but there’s plenty of really amazing writing, reporting, and interviewing in the 20 or so podcasts that I subscribe to and try to listen to regularly.
With that said, a few weeks ago I realized that I didn’t have a good list of favorites and sent out a call on Twitter and Facebook asking for recommendations. What came back was amazing and a bunch of episodes that ended up in my own favorites came from those suggestions. Here’s a few picks:
Say what you will about Malcolm Gladwell, but the guy knows how to tell a story. Season 2 of his Revisionist History podcast was excellent and included two stand-out episodes for me: “A Good Walk Spoiled” (June 15, 2017) tells the story of how golf courses exploit tax loopholes to create giant private parks and “McDonald’s Broke My Heart” (August 10, 2017) tells the story of why Micky D’s changed the oil they use for their french fries.
Uncivil, a new Civil War podcast from Gimlet, offers the untold stories of race and the War. The show opened with an amazing episode, “The Raid” (October 4, 2017), telling the story of a covert operation you never learned about in your US History class.
Last, but not least, was More Perfect’s “The Gun Show” (October 12, 2017), which explains just how recently we came to interpret the Second Amendment in the way we do today. Here’s a little taste from the description: “For nearly 200 years of our nation’s history, the Second Amendment was an all-but-forgotten rule about the importance of militias. But in the 1960s and 70s, a movement emerged — led by Black Panthers and a recently-repositioned NRA — that insisted owning a firearm was the right of each and every American. So began a constitutional debate that only the Supreme Court could solve. That didn’t happen until 2008, when a Washington, D.C. security guard named Dick Heller made a compelling case.”
I know I’m pretty light here. Still making it through a bunch of recommendations. But that last episode, “The Gun Show”, is an angle on the gun debate I had never heard before and definitely stood out to me as the best I heard this year.
It’s pretty sad that this year requires a special category for the best article about the increased possibility of nuclear annihilation … but that’s where we’re at. I read two pieces that pretty clearly separated themselves from the competition in this regard. The first was Evan Osnos’s amazing (and very long, even for this list) piece “The Risk of Nuclear War with North Korea” (September 18, 2017). This pretty well sums up the vibe of the story (and situation):
Suddenly, the prospect of a nuclear confrontation between the United States and the most hermetic power on the globe had entered a realm of psychological calculation reminiscent of the Cold War, and the two men making the existential strategic decisions were not John F. Kennedy and Nikita Khrushchev but a senescent real-estate mogul and reality-television star and a young third-generation dictator who has never met another head of state. Between them, they had less than seven years of experience in political leadership.
The second piece, which was my favorite of the bunch, came from Michael Lewis with his Vanity Fair deep dive into the Department of Energy, “Why the Scariest Nuclear Threat May Be Coming from Inside the White House” (September 2017). The details are astonishing and describe the governmental organization responsible for our nuclear capabilities as in complete disarray. This bit, about why you don’t want people who dreamed of working on nuclear weapons actually working on nuclear weapons, stuck with me the most:
The Trump people didn’t seem to grasp, according to a former D.O.E. employee, how much more than just energy the Department of Energy was about. They weren’t totally oblivious to the nuclear arsenal, but even the nuclear arsenal didn’t provoke in them much curiosity. “They were just looking for dirt, basically,” said one of the people who briefed the Beachhead Team on national-security issues. “ ‘What is the Obama administration not letting you do to keep the country safe?’ ” The briefers were at pains to explain an especially sensitive aspect of national security: the United States no longer tests its nuclear weapons. Instead, it relies on physicists at three of the national labs—Los Alamos, Livermore, and Sandia—to simulate explosions, using old and decaying nuclear materials.
This is not a trivial exercise, and to do it we rely entirely on scientists who go to work at the national labs because the national labs are exciting places to work. They then wind up getting interested in the weapons program. That is, because maintaining the nuclear arsenal was just a by-product of the world’s biggest science project, which also did things like investigating the origins of the universe. “Our weapons scientists didn’t start out as weapons scientists,” says Madelyn Creedon, who was second-in-command of the nuclear-weapons wing of the D.O.E., and who briefed the incoming administration, briefly. “They didn’t understand that. The one question they asked was ‘Wouldn’t you want the guy who grew up wanting to be a weapons scientist?’ Well, actually, no.”
(Quick note: I’ve separated this section from #MeToo as best as I can, though obviously there’s a lot of overlap in the themes.)
In April, Rahawa Haile wrote “Going it Alone” (April 11, 2017), which describes her experience hiking the Appalachian trail during a moment of political chaos as a queer black woman. Here’s an excerpt:
The rule is you don’t talk about politics on the trail. The truth is you can’t talk about diversity in the outdoors without talking about politics, since politics is a big reason why the outdoors look the way they do. From the park system’s inception, Jim Crow laws and Native American removal campaigns limited access to recreation by race. From the mountains to the beaches, outdoor leisure was often accompanied by the words whites only. The repercussions for disobedience were grave.
In May, Ian Parker wrote “What Makes a Parent” (May 22, 2017) for the New Yorker, the story of a custody battle for an adopted son between a lesbian couple. The question at the heart of the case is what is a parent:
New York’s statutes describe the obligations and entitlements of a parent, but they don’t define what a parent is. That definition derives from case law. In 1991, in a ruling in Alison D. v. Virginia M., a case involving an estranged lesbian couple and a child, the Court of Appeals opted for a definition with “bright line” clarity. A parent was either a biological parent or an adoptive parent; there were no other kinds. Lawyers in this field warn of “opening the floodgates”—an uncontrolled flow of dubious, would-be parents. Alison D. kept the gates shut, so that a biological mother wouldn’t find, say, that she had accidentally given away partial custody of her child to a worthless ex-boyfriend. But many saw the decision as discriminatory against same-sex couples, who can choose to raise a child together but can’t share the act of producing one. Judge Judith Kaye, in a dissent that has since been celebrated, noted that millions of American children had been born into families with a gay or lesbian parent; the court’s decision would restrict the ability of these children to “maintain bonds that may be crucial to their development.”
Then came Jay Caspian Kang’s story of the fraternity hazing death of Michael Deng, which dismantles the very idea of a unified Asian-American identity:
“Asian-American” is a mostly meaningless term. Nobody grows up speaking Asian-American, nobody sits down to Asian-American food with their Asian-American parents and nobody goes on pilgrimages back to their motherland of Asian-America. Michael Deng and his fraternity brothers were from Chinese families and grew up in Queens, and they have nothing in common with me — someone who was born in Korea and grew up in Boston and North Carolina. We share stereotypes, mostly — tiger moms, music lessons and the unexamined march toward success, however it’s defined. My Korean upbringing, I’ve found, has more in common with that of the children of Jewish and West African immigrants than that of the Chinese and Japanese in the United States — with whom I share only the anxiety that if one of us is put up against the wall, the other will most likely be standing next to him.
In December, Wesley Morris (who wrote one of my favorites from last year, “Last Taboo”) profiled the director of this year’s breakout movie Get Out in “Jordan Peele’s X-Ray Vision” (December 20, 2017). I’m sure there’s a bit of recency bias with this pick, but I finally got around to watching Get Out this month and, despite my assumption it could never live up to all the hype, it was so interesting and weird that it definitely cleared the bar. Then I read this article. Morris writes about race as well as anyone out there, Peele made a movie that tackles questions of race as interestingly as any in recent memory, and when you put the two of them together you get stuff like this:
Peele had been talking about the restricted ways bigotry is discussed. “We’re never going to fix this problem of racism if the idea is you have to be in a K.K.K. hood to be part of the problem,” he said. The culture still tends to think of American racism as a disease of the Confederacy rather than as a national pastime with particular regional traditions, like barbecue. “Get Out” is set in the Northeast, where the racial attitude veers toward self-congratulatory tolerance. Mr. Armitage, for instance, gets chummy with Chris by telling him he’d have voted for Obama a third time. “Get Out” would have made one kind of sense under a post-Obama Hillary Clinton administration, slapping at the smugness of American liberals still singing: “Ding dong, race is dead.” Peele shows that other, more backhanded forms of racism exist — the presumptuous “can I touch your hair” icebreaker, Mr. Armitage’s “I voted for Obama, so I can’t be racist” sleeper hold are just two. But Clinton lost. Now the movie seems to amplify the racism that emanates from the Trump White House and smolders around the country.
As with all the other categories, each of these are deserving of a choice (and in the end I’m writing about them all because I think they’re all very worth reading), but I think if I had to pick one I’d go with Kang’s piece about a hazing death at an Asian-American fraternity. It tells a familiar story from a different perspective and draws a spectrum in a space we normally see as singular.
The year, at its outset, did not seem to be a particularly auspicious one for women. A man who had bragged on tape about sexual assault took the oath of the highest office in the land, having defeated the first woman of either party to be nominated for that office, as she sat beside a former President with his own troubling history of sexual misconduct. While polls from the 2016 campaign revealed the predictable divisions in American society, large majorities—including women who supported Donald Trump—said Trump had little respect for women. “I remember feeling powerless,” says Fowler, the former Uber engineer who called out the company’s toxic culture, “like even the government wasn’t looking out for us.”
Nor did 2017 appear to be especially promising for journalists, who—alongside the ongoing financial upheaval in the media business—feared a fallout from the President’s cries of “fake news” and verbal attacks on reporters. And yet it was a year of phenomenal reporting. Determined journalists—including Emily Steel and Michael Schmidt, Jodi Kantor and Megan Twohey, Ronan Farrow, Brett Anderson, Oliver Darcy, and Irin Carmon and Amy Brittain, among many others—picked up where so many human-resources departments, government committees and district attorneys had clearly failed, proving the truth of rumors that had circulated across whisper networks for years.
While the reporting was clearly amazing (it’s worth reading or re-reading all the articles mentioned by Felsenthal), two essays on the movement, where it came from, and what it means stood out for me. The first, from November by Claire Dederer in the Paris Review, asks “What Do We Do with the Art of Monstrous Men?” (November 20, 2017).
The second, which I just read this week, opened the Winter issue of n+1 magazine. “In The Maze” (Winter 2018) by Dayna Tortorici covers #MeToo, but also finds a string that goes back before 2017’s women came forward, to a general shift that has left some white men feeling like victims. She makes a compelling case that #MeToo is part of a much broader change happening in the United States and was a big component of the resentment that fueled Trump’s rise over the last few years. It ties together the themes of 2017 as well as anything I read this year:
Must history have losers? The record suggests yes. Redistribution is a tricky business. Even simple metaphors for making the world more equitable — leveling a playing field, shifting the balance — can correspond to complex or labor-intensive processes. What freedoms might one have to surrender in order for others to be free? And how to figure it when those freedoms are not symmetrical? A little more power for you might mean a lot less power for me in practice, an exchange that will not feel fair in the short term even if it is in the long term. There is a reason, presumably, that we call it an ethical calculus and not an ethical algebra.
Some things are zero sum — perhaps more things than one cares to admit. To say that feminism is good for boys, that diversity makes a stronger team, or that collective liberation promises a greater, deeper freedom than the individual freedoms we know is comforting and true enough. But just as true, and significantly less consoling, is the guarantee that some will find the world less comfortable in the process of making it habitable for others. It would be easier to give up some privileges if it weren’t so traumatic to lose, as it is in our ruthlessly competitive and frequently undemocratic country. Changing the rules of the game might begin with revising what it means to win. I once heard a story about a friend who’d said, offhand at a book group, that he’d throw women under the bus if it meant achieving social democracy in the United States. The story was meant to be chilling — this from a friend? — but it made me laugh. As if you could do it without us, I thought, we who do all the work on the group project. I wondered what his idea of social democracy was.
There’s a strict hierarchy of drivers, depending on what they haul and how they’re paid. The most common are the freighthaulers. They’re the guys who pull box trailers with any kind of commodity inside. We movers are called bedbuggers, and our trucks are called roach coaches. Other specialties are the car haulers (parking lot attendants), flatbedders (skateboarders), animal transporters (chicken chokers), refrigerated food haulers (reefers), chemical haulers (thermos bottle holders), and hazmat haulers (suicide jockeys). Bedbuggers are shunned by other truckers. We will generally not be included in conversations around the truckstop coffee counter or in the driver’s lounge. In fact, I pointedly avoid coffee counters, when there is one, mainly because I don’t have time to waste, but also because I don’t buy into the trucker myth that most drivers espouse. I don’t wear a cowboy hat, Tony Lama snakeskin boots, or a belt buckle doing free advertising for Peterbilt or Harley-Davidson. My driving uniform is a three-button company polo shirt, lightweight black cotton pants, black sneakers, black socks, and a cloth belt. My moving uniform is a black cotton jumpsuit.
There were two very different pieces from the Guardian. The first, “‘London Bridge is down’: the secret plan for the days after the Queen’s death” (March 17, 2017), laid out in amazing detail what will happen when Queen Elizabeth dies. The second, “Why we fell for clean eating” (August 11, 2017), goes deep into the weeds of the clean eating craze and just how crazy much of it is. It also includes what may be the most transferable sentence of 2017: “But it quickly became clear that ‘clean eating’ was more than a diet; it was a belief system, which propagated the idea that the way most people eat is not simply fattening, but impure.” Replace the words “clean eating” and “diet” and you have a pretty good descriptor for everything that seems to be happening around us right now.
But in the end, my very favorite of this category came from the article that was hardest to categorize: Kathryn Schulz’s “Fantastic Beasts and How to Rank Them” (November 6, 2017). I’m still not sure I can do it justice, but the basic premise is that our ability to rank the “realness” of imaginary beings like Bigfoot, the Loch Ness Monster, or ghosts is a critical part of our humanity. I love this article partly because I can’t imagine pitching it to a New Yorker editor, partly because it was a nice break from the onslaught of 2017, and partly because it was just super interesting.
Patterns of evidence, a grasp of biology, theories of physics: as it turns out, we need all of these to account for our intuitions about supernatural beings, just as we need all of them to explain any other complex cultural phenomenon, from a tennis match to a bar fight to a bluegrass band. That might seem like a lot of intellectual firepower for parsing the distinctions between fairies and mermaids, but the ability to think about nonexistent things isn’t just handy for playing parlor games on Halloween. It is utterly fundamental to who we are. Studying that ability helps us learn about ourselves; exercising it helps us learn about the world. A three-year-old talking about an imaginary friend can illuminate the workings of the human mind. A thirty-year-old conducting a thought experiment about twins, one of whom is launched into space at birth and one of whom remains behind, can illuminate the workings of the universe. As for those of us who are no longer toddlers and will never be Einstein: we use our ability to think about things that aren’t real all the time, in ways both everyday and momentous. It is what we are doing when we watch movies, write novels, weigh two different job offers, consider whether to have children.
This hasn’t been my best year for blogging. My last post was in June and, before that, January. Such is the life of an entrepreneur and new dad. However, while I haven’t found time to do the sort of writing I used to, I am happy to say I did a fair amount of reading this year and couldn’t let the holidays pass without sharing some of my favorite longform.
If you haven’t read one of these lists before (2011, 2012, and 2015), the basic gist is it’s a list of the stuff I read this year that I liked the most. Much of it is longform journalism written in 2016, though, as the internet is wont to do, there’s lots of older writing, podcasts, and who knows what else in the mix. (If you’re so inclined, I also have a Twitter account that just tweets out the articles I favorite in Instapaper.)
Without any further ado … (and in no specific order) … the list (lots more commentary below):
The Fighter – C.J. Chivers – New York Times Magazine – December 28, 2016
The AI Revolution: The Road to Superintelligence Part 1 & Part 2 – Tim Urban – Wait But Why – January 22, 2015
A few years ago I got the chance to spend some time with C.J. Chivers, the New York Times war correspondent. His book, The Gun, had just come out and Colin, Benjamin, and I were helping to get him set up on social media. We spent the day hanging out, discussing journalism, signing up for accounts, and talking about how extraordinary war photographers are. Since then C.J. has returned home and given up his role as an on-the-ground war reporter (a great longread from 2015) and his latest feature is actually in this weekend’s New York Times Magazine. The Fighter is a profile of former Marine Sam Siatta and his post-war struggles. What makes Chivers such an amazing person to cover war, beyond his ability to write and willingness to dig indefinitely for a story (he became the preeminent expert on ammunition serial numbers), is his profound respect for the military and the men and women who serve. Chivers served in the Marines in the 80s and 90s and brings that to every story he writes, but it’s intensified in a story about a person he clearly believes could have been nearly any Marine.
[The Fighter – C.J. Chivers – New York Times Magazine – December 28, 2016]
On artificial intelligence
The article that probably blew my mind the most was actually written in January 2015. I had heard about Wait But Why’s two-part primer on AI, but hadn’t gotten around to reading the 25,000 word tome quite yet. Once I did, I was not disappointed. I went from knowing basically nothing about artificial intelligence to being unable to carry a conversation without bringing it up. Tim Urban, the author of Wait But Why, read every book and article on the topic and ties it all together concisely (seriously) and with some excellent stick figure drawings. Warning: It’s heavy, like human extinction heavy. A snippet:
And while most scientists I’ve come across acknowledge that ASI [artificial superintelligence] would have the ability to send humans to extinction, many also believe that used beneficially, ASI’s abilities could be used to bring individual humans, and the species as a whole, to a second attractor state—species immortality. Bostrom believes species immortality is just as much of an attractor state as species extinction, i.e. if we manage to get there, we’ll be impervious to extinction forever—we’ll have conquered mortality and conquered chance. So even though all species so far have fallen off the balance beam and landed on extinction, Bostrom believes there are two sides to the beam and it’s just that nothing on Earth has been intelligent enough yet to figure out how to fall off on the other side.
[The AI Revolution: The Road to Superintelligence Part 1 & Part 2 – Tim Urban – Wait But Why – January 22, 2015]
In a year of lots of bombast about immigrants (especially ones with the last name Khan), this incredibly well-researched profile of Zarif Khan, an Afghan who immigrated to Wyoming in the early 1900s, was an intimate telling of the immigrant story of America. The conclusion has to be one of my favorites from the year:
Over and over, we forget what being American means. The radical premise of our nation is that one people can be made from many, yet in each new generation we find reasons to limit who those ‘many’ can be—to wall off access to America, literally or figuratively. That impulse usually finds its roots in claims about who we used to be, but nativist nostalgia is a fantasy. We have always been a pluralist nation, with a past far richer and stranger than we choose to recall. Back when the streets of Sheridan were still dirt and Zarif Khan was still young, the Muslim who made his living selling Mexican food in the Wild West would put up a tamale for stakes and race local cowboys barefoot down Main Street. History does not record who won.
Like everyone else, I read a lot about politics this year. Most of it I would never care to subject anyone to again, but along the way there were some pieces that stood out. To me, this New Yorker profile of Logan County, West Virginia, was the best telling of America’s divide. It’s a story we all know at this point, but part of what makes this article work so well is that it’s about more than just Donald Trump or income inequality or the rural/urban divide; it’s really the profile of a state and its unique culture.
Rounding out politics articles: The Case Against Democracy (New Yorker) provides context for why our system works the way it does and asks whether it could work better. This Election Was About the Issues (Slate) argues against the refrain that the election was about everything but the issues, suggesting that it was about the issues Americans actually care about:
I’m talking about issues that involve the fundamental arrangements of American life, issues of race and class and gender and sexual violence. These are the things we’ve argued about in the past year and change, sometimes coarsely, sometimes tediously, but very often illuminatingly. This has been, by all but the most fatuous measures, an issue-rich campaign.
Ezra Klein’s amazing profile of Hillary Clinton, Understanding Hillary (Vox), argued that the things that make her a great governor are the same things that make her a bad politician and gave me hope.
It turned out that Clinton, in her travels, stuffed notes from her conversations and her reading into suitcases, and every few months she dumped the stray paper on the floor of her Senate office and picked through it with her staff. The card tables were for categorization: scraps of paper related to the environment went here, crumpled clippings related to military families there. These notes, Rubiner recalls, really did lead to legislation. Clinton took seriously the things she was told, the things she read, the things she saw. She made her team follow up.
“The prescription that some offer, which is stop trade, reduce global integration, I don’t think is going to work,” he went on. “If that’s not going to work, then we’re going to have to redesign the social compact in some fairly fundamental ways over the next twenty years. And I know how to build a bridge to that new social compact. It begins with all the things we’ve talked about in the past—early-childhood education, continuous learning, job training, a basic social safety net, expanding the earned-income tax credit, investments in infrastructure—which, by definition, aren’t shipped overseas. All of those things accelerate growth, give you more of a runway. But at some point, when the problem is not just Uber but driverless Uber, when radiologists are losing their jobs to A.I., then we’re going to have to figure out how do we maintain a cohesive society and a cohesive democracy in which productivity and wealth generation are not automatically linked to how many hours you put in, where the links between production and distribution are broken, in some sense. Because I can sit in my office, do a bunch of stuff, send it out over the Internet, and suddenly I just made a couple of million bucks, and the person who’s looking after my kid while I’m doing that has no leverage to get paid more than ten bucks an hour.”
Beyond those two, I listened to a lot of Marc Maron’s WTF (always skip the first 10 minutes) and really enjoyed his interview with Louie Anderson, who I didn’t realize was a serious standup. (Part of why I really enjoy WTF is that it’s effectively a show about the creative process. When he goes deep with someone on how they do their craft I find it endlessly fascinating. While the Louie episode isn’t exactly that, it’s also just loads of fun to listen to anyone serious about anything talk to someone they so clearly respect.) Gladwell’s Revisionist History was pretty good (though sometimes a bit preachy). His episode on Generous Orthodoxy was just a very well told story (and when you’re done, go read the letter the show was based on).
As you may or may not know, I became a parent in 2015. Since my daughter was born I’ve been keeping a collection of parenting articles that don’t suck (a surprisingly hard thing to find actually). My favorite of 2016 was probably Tom Vanderbilt’s piece on learning chess with his daughter. It’s both a well-told story and some really good lessons on the differences in learning between adults and children. A snippet:
Here was my opening. I would counter her fluidity with my storehouses of crystallized intelligence. I was probably never going to be as speedily instinctual as she was. But I could, I thought, go deeper. I could get strategic. I began to watch Daniel King’s analysis of top-level matches on YouTube. She would sometimes wander in and try to follow along, but I noticed she would quickly get bored or lost (and, admittedly, I sometimes did as well) as he explained how some obscure variation had “put more tension in the position” or “contributed to an imbalance on the queen-side.” And I could simply put in more effort. My daughter was no more a young chess prodigy than I was a middle-aged one; if there was any inherited genius here, after all, it was partially inherited from me. Sheer effort would tilt the scales.
While not a longread in quite the way the others are, the piece that has probably dug its way deepest into my brain is this list of mental models from Gabriel Weinberg, Founder & CEO of the search engine DuckDuckGo. He was inspired to write his mental models down because of something Charlie Munger, Warren Buffet’s business partner, said about them: “80 or 90 important models will carry about 90% of the freight in making you a worldly‑wise person.” I’ve been pretty obsessed with this idea myself because I think we (as in people who talk about business) often over-emphasize case studies and specific stories, while under-emphasizing the model that can help someone make a decision that can lead to a similar outcome. I’ve been keeping my own list of models since I read this and might share them some time down the road.
[Last Taboo – Wesley Morris – New York Times Magazine – October 27, 2016]
Most of the year-end lists I looked at included ESPN’s Tiger Woods profile as their top sports story of the year and it’s pretty hard to deny it. It’s engaging and breaks one of the crazier stories of the year: that Tiger Woods’s undoing may have been, at least in part, a result of his obsession with the Navy SEALs.
One way I judge writing is to see how it lodges itself in my brain. I know something was particularly good when I find myself thinking and talking about it for weeks and months afterward. Sometimes the best writing doesn’t hit you right away; it takes some time to percolate. This Aeon piece on how our brains process information happens to be one of those. It argues that our theory that the brain operates like a computer has led us down a path of research that has set back our understanding of the brain. We’ve got a long history of understanding our brains through the lens of the latest tech, it turns out:
By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.
Three weeks earlier, Simon had released a new album, “Stranger to Stranger,” with its cover taken from a portrait that Close painted of the musician a few years back. Then, the day before I saw Close, Simon announced that the album would be his last. “I called him up, and I said, ‘Artists don’t retire,’ ” Close told me. “I think I talked him out of it. I said: ‘Don’t deny yourself this late stage, because the late stage can be very interesting. You know everybody hated late de Kooning, but it turned out to be great stuff. Late Picasso, nobody liked it, and it turned out to be great.’ ” Close reminded Simon that Matisse was unable to continue painting late in life. “Had Matisse not done the cutouts, we would not know who he was,” Close said. “Paul said, ‘I don’t have any ideas.’ I said: ‘Well, of course you don’t have any ideas. Sitting around waiting for an idea is the worst thing you can do. All ideas come out of the work itself.’ ”
He pointed out that Simon is 74, the same age he was early last summer. “I told him, ‘When you get to be my age, you’ll see,’ ” he said with a laugh.
As everyone now knows the UK voted to leave the European Union today. I happen to be in London this week and so have been paying close attention to the vote and having many conversations with family, friends, and colleagues about how it came to this and what it means for the future. I’m no economist or pundit, so I’ll leave those takes to the professionals, but I wanted to take a minute to share a few thoughts on the obvious parallels between what’s happening here in the UK and with Trump in the US.
We all, of course, have our own notions of what real America looks like. Those notions might be based on our own nostalgia or our hopes for the future. If your image of the real America is a small town, you might be thinking of an America that no longer exists. I used the same method to measure which places in America today are most similar demographically to America in 1950, when the country was much whiter, younger and less-educated than today. Of course, nearly every place in the U.S. today looks more like 2014 America than 1950 America. But the large metros that today come closest to looking like 1950 America are Lancaster, Pennsylvania; Ogden and Provo, in Utah; and several in the Midwest and South.
Normal America, the article explains, is actually best represented (by similarity to American population across “age, educational attainment, and race and ethnicity”) in cities like New Haven, Connecticut or Tampa, Florida. That means when people say that politicians or elites are out of touch with normal America, it may be true, but that’s not because normal America is still small-town America. We are a more diverse, older, and more educated country than we were 50 years ago.
But for many non-whites, the pattern [not very concerned about the present, pessimistic about the future] is the opposite: They are concerned about the present but optimistic about the future. In the Pew poll, Hispanics were sober about their immediate financial circumstances — 40 percent said their finances were in good shape, compared with 43 percent for the public at large — but they see brighter days ahead. More than 70 percent expect their children to be better off than they are. Previous polls have found similar results for other minority groups: According to 2014 data from the General Social Survey, three-quarters of blacks and Hispanics expect their children to enjoy a higher standard of living than they do, compared to just half of whites. A poll commissioned by The Atlantic last fall found that blacks, Hispanics and Asians were far more likely than whites to report that “the American Dream is alive and well.”
Put those things together and what you get is clear: “Make America Great Again” actually means make America look more like it did in 1960. The problem, of course, is that America was a pretty bad place for a lot of Americans at that point (women, minorities, and LGBT to name a few). But most people don’t remember that, because nostalgia is broken and doesn’t work that way. From a 2013 New York Times article on nostalgia:
Happy memories also need to be put in context. I have interviewed many white people who have fond memories of their lives in the 1950s and early 1960s. The ones who never cross-examined those memories to get at the complexities were the ones most hostile to the civil rights and the women’s movements, which they saw as destroying the harmonious world they remembered.
But others could see that their own good experiences were in some ways dependent on unjust social arrangements, or on bad experiences for others. Some white people recognized that their happy memories of childhood included a black housekeeper who was always available to them because she couldn’t be available to her children.
Put it all together and you have a confluence of circumstances that tells a pretty good story for how both the US and the UK have gotten to now and what it really means to make a country great again. Of course, like others, I don’t have answers of how to combat this, but understanding what we’re up against is the first step.
I believe this marks two weeks of blog posts for me, which is a pretty major milestone. In celebration I’m taking the day off and instead sharing our new product video from Percolate. I’ve spent the last four years working with an incredible group of folks building out something that I’m very proud of. This video does a really nice job not just showing that off, but also speaking to the Percolate brand.
Yesterday James, my co-founder at Percolate, sent me over a really interesting nugget about how Apple structures its company, about 35 minutes into this Critical Path podcast. Essentially Horace (from Asymco) argues that Apple’s functional (rather than cross-functional) structure actually allows it to innovate and execute far better than a company structured in the more traditional, cross-functional way. As opposed to most other companies, where managers are encouraged to pick up experience across the enterprise, Apple encourages (or forces) people to stay in their role for the entirety of their career. On top of that, roles are not horizontal by product (head of iPhone) and instead are vertical by discipline (design, operations, technologies) and also quite siloed. He goes on to say that the only parallel he could think of is the military, which basically operates that way. (I know I haven’t done the best job articulating it; that’s because, as I listen again, I don’t necessarily think the thesis is articulated all that well.)
Below is my response back to James:
While I totally agree with what he says about the structure (that they’re organized functionally and it works for them), I’m not sure you can just conclude that’s ideal or drives innovation. The requirement of an org structure like that is that all vision/innovation comes from the top and moves down through the organization. That’s fine when you have someone like Jobs in charge, but it’s questionable what happens when he leaves (or when this first generation he brought up leaves maybe). Look at what happened when Jobs left the first time as evidence for how they lost their way. Apple is a fairly unique org in that it has a very limited number of SKUs and, from everything we’ve heard, Jobs was the person driving most/all.
My question back to Horace would be what will Apple look like in 20 years. IBM and GE are 3x older than Apple is and part of how they’ve survived, I’d say, is that they’ve built the responsibility of innovation into a bit more of a cross-functional discipline + centralized R&D. I don’t know if it matters, but if I was making a 50 year bet on a company I’d pick GE over Apple and part of it is that org structure and its ability to retain knowledge.
The military is actually a perfect example: Look at the struggles they’ve had over the last 20 years as the enemy stopped being similarly structured organizations and moved to being loosely connected networks. History has shown us over and over that centralized organizations struggle with decentralized enemies. Now the good news for Apple is that everyone else is pretty much playing the same highly organized and very predictable game (with the exception of Google, which is in a functionally different business, and Samsung, which because of its manufacturing resources and Asian heritage exists in a little bit of a different world).
Again, in a 10 year race Apple wins with a structure like this. But in a 50 year race, in which your visionary leader is unlikely to still be manning the helm, I think it brings up a whole lot of questions.
It used to be quite simple. If you worked for an evening newspaper, you put “today” near the beginning of every story in an attempt to give the impression of being up-to-the-minute – even though many of the stories had been written the day before (as those lovely people who own local newspapers strove to increase their profits by cutting editions and moving deadlines ever earlier in the day). If you worked for a morning newspaper, you put “last night” at the beginning: the assumption was that reading your paper was the first thing that everyone did, the moment they awoke, and you wanted them to think that you had been slaving all night on their behalf to bring them the absolute latest news. A report that might have been written at, say, 3pm the previous day would still start something like this: “The government last night announced …”
All this has changed. As I wrote last year, we now have many millions of readers around the world, for whom the use of yesterday, today and tomorrow must be at best confusing and at times downright misleading. I don’t know how many readers the Guardian has in Hawaii – though I am willing to make a goodwill visit if the managing editor is seeking volunteers – but if I write a story saying something happened “last night”, it will not necessarily be clear which “night” I am referring to. Even in the UK, online readers may visit the website at any time, using a variety of devices, as the old, predictable pattern of newspaper readership has changed for ever. A guardian.co.uk story may be read within seconds of publication, or months later – long after the newspaper has been composted.
So our new policy, adopted last week (wherever you are in the world), is to omit time references such as last night, yesterday, today, tonight and tomorrow from guardian.co.uk stories. If a day is relevant (for example, to say when a meeting is going to happen or happened) we will state the actual day – as in “the government will announce its proposals in a white paper on Wednesday [rather than ‘tomorrow’]” or “the government’s proposals, announced on Wednesday [rather than ‘yesterday’], have been greeted with a storm of protest”.
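The rule amounts to resolving a relative word against the story’s publication date. As a toy illustration of the logic (not anything the Guardian actually runs; the function name and setup are my own), it could be sketched like this:

```python
from datetime import date, timedelta

WEEKDAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
            "Friday", "Saturday", "Sunday"]

def absolute_day(pub_date: date, relative: str) -> str:
    """Replace a relative time word ('yesterday', 'today', 'tomorrow')
    with the concrete weekday name, given when the story was published."""
    offsets = {"yesterday": -1, "today": 0, "tomorrow": 1}
    target = pub_date + timedelta(days=offsets[relative])
    return WEEKDAYS[target.weekday()]

# A story published on Thursday, September 16, 2010 that says
# "yesterday" should instead say "Wednesday".
print(absolute_day(date(2010, 9, 16), "yesterday"))  # → Wednesday
```

The point the policy makes is exactly what this encodes: the weekday name is unambiguous for every reader, while the relative word is only correct at the moment and in the timezone of publication.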
What’s extra interesting about this to me is that it’s not just about the time you’re reading that story, but also the space the web inhabits. We’ve been talking a lot at Percolate lately about how social is shifting the way we think about audiences since for the first time there are constant global media opportunities (it used to happen once every four years with the Olympics or World Cup). But, as this articulates so well, being global also has a major impact on time since you move away from knowing where your audience is in their day when they’re consuming your content.
Broadly, the line between advertising, marketing, branding, and communications has always been a blurry one. Depending on who you talk to they have a very different definition. For the purposes of the quote, let’s assume when Stephens was talking about advertising he was specifically referring to the buying of media space across platforms like television, magazines, and websites.
With that as the working definition, there are lots of complicated reasons big companies advertise their products. Here are a few:
Distributors love advertising: If you’re a CPG company you advertise as much for the supermarkets as you do for your product. The more money you spend, the better spot they’re willing to give you on the shelf (the thought being that people will be looking for your product). I don’t think there is anyone out there who would argue shelf placement doesn’t matter. At the end of the day supermarkets are your customer if you’re a CPG company, so keeping them happy is a pretty high-priority job.
Advertising is good at making people think you’re bigger than you are: Sometimes a company or brand wants to “play above its weight,” making people think it’s bigger than it actually is. When we see something on TV or in print, we mostly assume there is a big corporation behind it. Sometimes that’s more important than actually selling the product.
Sometimes you’re not selling a product at all: There are many companies who advertise for reasons wholly disconnected from their product. GE, for example, isn’t running TV commercials about wind turbines solely to communicate with the thousands of people who are potentially in the market for a multi-million dollar purchase. A part of why they do it is to communicate with the public at large, which is both a major shareholder in the company and the end consumer of many of its products (many planes we fly on run GE engines, and our electricity probably wouldn’t reach our houses without GE products). How remarkable their products are has no bearing in this case, since we would never actually be in the market for the vast majority of the things they produce.
Broadly, though, the point I’m trying to make is that while many write off advertising as having no purpose (or being “a tax”), it’s just not true. What’s more, as advertising becomes a more seamless part of the process of being a brand in social, I think this will only become more true. If you see a piece of content performing well on Twitter or Facebook why would you not pay to promote that content and see it reach an audience beyond the core? At that point you’ve eliminated the biggest challenge traditionally associated with advertising (spending tons of money to produce something and having no idea whether it will actually have an effect on people). Seems to me if you’re not willing to entertain the idea you’re just standing on principle.
It’s great to have friends who discover interesting stuff and send it my way, so I quickly clicked over to read Jeff’s piece on sponsored content and media as a service. I’m going to leave the latter unturned as I find myself spending much less time thinking about the broader state of the media since starting Percolate two-and-a-half years ago. But the former, sponsored content, is clearly a place I play and I was curious to see what Jarvis thought.
Quickly I realized he thought something very different than I do (which, of course, is why I’m writing a blog post). Mostly I started getting agitated right around here: “Confusing the audience is clearly the goal of native-sponsored-brand-content-voice-advertising. And the result has to be a dilution of the value of news brands.” While that may be true in the advertorial/sponsored content/native advertising space, it misses the vast majority of content being produced by brands on a day-to-day basis. That content is being created for social platforms like Facebook, Twitter, and Instagram by brands who have acquired massive audiences, frequently much larger than the media companies Jarvis is referring to. Again, I think this exists outside native advertising, but if Jarvis is going to conflate content marketing and native advertising, then it seems important to point out. To give this a sense of scale, the average brand had 178 corporate social media accounts as of January 2012. Social is where they’re producing content. Period.
The second issue came in a paragraph about the scalability of content for brands:
Now here’s the funny part: Brands are chasing the wrong goal. Marketers shouldn’t want to make content. Don’t they know that content is a lousy business? As adman Rishad Tobaccowala said to me in an email, content is not scalable for advertisers, either. He says the future of marketing isn’t advertising but utilities and services. I say the same for news: It is a service.
Two things here: First, I agree that the current ways brands create content aren’t scalable. That’s because they’re using methods designed for creating television commercials to create 140-character Tweets. However, to conclude that content is a lousy business is missing the point a bit. Content is a lousy business when you’re selling ads around that content. The reason for this is reasonably simple: You’re not in the business of creating content, you’re in the business of getting people back to your website (or to buy your magazine or newspaper). Letting your content float around the web is great, but at the end of the day no eyeballs means no ad dollars. But brands don’t sell ads; they sell soap, or cars, or soda. Their business is somewhere completely different and, at the end of the day, they don’t care where you see their content as long as you see it. What this allows them to do is outsource their entire backend and audience acquisition to the big social platforms and just focus on the day-to-day content creation.
Finally, while it’s nice to think that more brands will deliver utilities and services on top of the utilities and services they already sell, delivering those services will require the very audience they’re building on Facebook, Twitter, and the like to begin with.
Coming back from the Brooklyn Home Depot today I went to look up the word collision. My mom, who I was in the car with, mentioned it looked funny spelled (correctly) on a sign and we were checking that it was actually “LL” and not “SS”. I Googled it and found out the sign was right, but it was the second result, about the 1960 New York mid-air collision, that caught my eye. I had never heard of it and neither had my dad, who grew up in the city (I’m assuming it only turned up because I was driving through Park Slope at the time).
After it started raining I decided to redesign my blog. There wasn’t much reason other than looking for a fun project to work on and finding the old version increasingly tough on the eyes (plus terrible on the phone). The new version is simpler, responsive for mobile, and has bigger fonts (for whatever that’s worth).
I’d like to say this means I’m going to write more, but that doesn’t seem all that likely. I mean I’ll do my best (and have the last few days), but it’s amazing how often life gets in the way of blogging. One of the amazing things about RSS feeds and email subscriptions, though, is that it doesn’t really matter how frequently I actually update this thing because you’ll hear about it. For what it’s worth I’ve also got a Twitter feed for new posts from the blog at @NoahBrier.
One of the things I’ve been thinking about lately is that it feels like there’s a big opportunity for blogs again. While everything has gotten shorter, it’s left a pretty wide door open for folks who want to write thoughtful stuff. I think it’s why we’ve seen thoughtful bloggers ascend quickly (someone like Horace at Asymco comes to mind). Again, not sure that means I’ll write more, but it certainly feels like a good time to be doing so.
When I first learned to write code a few years ago I taught myself PHP. I still contend that was/is the very best choice for someone just starting out as it offers the lowest barrier to entry in making things happen on the web. Between WAMP/MAMP and the fact that most vanilla webhosts support PHP by default, it gives someone just coming into building applications a very simple tool to get started with.
This answer is not the most popular with engineers, who (sometimes fairly) see weaknesses and sloppiness in PHP. The counterpoint I offer is that I’m not suggesting it’s a good language, but rather that someone who is just getting started needs as few barriers as possible to getting something up and running. PHP, I still contend, is the best tool for that job.
The problem was, all that work was leading me to shy away from writing code when I had an idea. Instead of spending time writing code I knew I’d spend it setting up servers and such. I tried AWS and even Heroku, and both still left me with what felt like an imbalance between setup and coding. What I started to realize is that as someone who doesn’t write code every day, I want tools that optimize the amount of time I actually spend writing code. That, after all, is what I enjoy. (I’m not sure if I’ve ever written it here, but the feeling I get writing code is unlike any of the other work I’ve done. There’s a beauty in the simplicity of code: While something can always be more optimized or elegant, at the end of the day when it works, it works, and when it doesn’t, it tells you why.)
Anyway, though I can’t remember how I got started, I discovered Google AppEngine about six months ago and it’s been a total revelation for me. All of a sudden I’m excited to take an idea to Sublime and get busy because I know that I won’t waste any time doing anything I don’t want. Google handles data storage, queuing, routing, and pretty much anything else I ever need and, while there are certainly limitations (mostly around package management), the pros outweigh the cons by a huge amount.
About two months ago I thought it might be fun to try teaching an introduction to Python class using AppEngine. It would give me a chance to continue to test my theory that the best way to teach people to write code is to start them with GET/POST and, thanks to AppEngine, getting started and getting deployed would be as easy as clicking the buttons in their little OS X app. I made a little repository that I shared with the Percolators who took that class a few months ago, and I thought it might be worth sharing with everyone else. It’s nothing fancy, but it’s got the basics of GET, POST, URL routing, and using the data store. Ideally it’s a nice little intro to writing code on the web. So, if you’re new to AppEngine, Python, or code in general, here’s how to get started:
Open AppEngine locally and File > Add Existing Application, then Browse and add the folder you just downloaded.
Hit Run in AppEngine and then Browse, which will open your site (running on your local server) in your browser.
From there open up the files in your favorite editor (I prefer Sublime) and start playing around. Don’t worry, you can’t really break anything and, when you do, Python will tell you exactly what you did wrong (to the line of code).
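If you want a feel for the ideas the starter repository covers before downloading anything, here’s a minimal sketch of GET/POST handling plus URL routing written against the plain WSGI interface that AppEngine’s Python runtime is built on. This is my own framework-free illustration, not the actual code from the repository, and the in-memory `GUESTBOOK` list just stands in for the real datastore:

```python
# A tiny WSGI app showing the basics the tutorial walks through:
# URL routing, a GET handler, and a POST handler that stores data.
# (Illustrative sketch only; the repo uses AppEngine's own framework.)
from urllib.parse import parse_qs

GUESTBOOK = []  # stand-in for the App Engine datastore


def app(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    method = environ.get("REQUEST_METHOD", "GET")

    if path == "/" and method == "GET":
        # GET: render the stored entries
        body = "Entries: " + ", ".join(GUESTBOOK)
    elif path == "/sign" and method == "POST":
        # POST: read the form body and "store" the submitted name
        size = int(environ.get("CONTENT_LENGTH") or 0)
        form = parse_qs(environ["wsgi.input"].read(size).decode())
        GUESTBOOK.append(form.get("name", ["anonymous"])[0])
        body = "Signed!"
    else:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"Not found"]

    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body.encode()]
```

The nice thing about starting here is that the whole request/response cycle fits on one screen: a URL comes in, the routing picks a branch, and the branch returns text. Every web framework is some elaboration of that loop.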
That’s it. Good luck, enjoy, and let me know how it goes.
Yesterday morning I lay in bed and watched Twitter fly by. It was somewhere around 7am and lots of crazy things had happened overnight in Boston between the police and the marathon bombers. I don’t remember exactly where things were in the series of events when I woke up, but while I was watching the still-on-the-loose suspect’s name was released for the first time. As reports started to come in and then, later, get confirmed, people on Twitter did the same thing as me: They started Googling.
As I watched the tiny facts we all uncovered start to turn up in the stream (he was a wrestler, he won a scholarship from the city of Cambridge, he had a link to a YouTube video) I was brought back to an idea I first came across in Bill Wasik’s excellent And Then There’s This. In the book he posits that as a culture we’ve become more obsessed with how a thing spreads than the thing itself. He uses the success of Malcolm Gladwell’s Tipping Point to help make the point:
Underlying the success of The Tipping Point and its literary progeny [Freakonomics] is, I would argue, the advent of a new and enthusiastically social-scientific way of engaging with culture. Call it the age of the model: our meta-analyses of culture (tipping points, long tails, crossing chasms, ideaviruses) have come to seem more relevant and vital than the content of culture itself.
Everyone wanted to be involved in “the hunt,” whether it was people on Twitter and Google digging for information about the suspected bomber, reporters on TV literally chasing these guys around, or the police battling these two young men on a suburban street. Watching the new tweets pop up I got a sense that the content didn’t matter as much as the feeling of being involved, the thrill of the hunt if you will. As Wasik notes, we’ve entered an age where how things spread through culture is more interesting than the content itself.
As people are wont to do, they freaked out. In fact, they freaked out enough that the University eventually decided to drop the new logo. Now, with the controversy in the rearview mirror, I’ve read/listened to a few post-mortems on how and why something like this happened and I felt like chiming in. My credentials, like most commenters’, are pretty thin, but I think they give me an interesting perspective. Beyond spending a somewhat ridiculous amount of time thinking about brands, overseeing a product team that includes three designers, and previously overseeing creative teams in advertising, I also built Brand Tags, the largest free database of perceptions about brands. I am, however, not a designer.
That last bit, especially, shapes my perception on conversations about design.
Okay, with disclosures behind us, a bit more background: When this new logo was introduced to the public (though apparently it had been on a roadshow for some time before it showed up on the web), it was misinterpreted as a replacement for the official seal of the University of California system. That seal looks like this:
This, apparently, was inaccurate. The new logo would not be replacing the seal, but rather helping to unify the various logos that had popped up across the different UC schools (the script Cal and UCLA logos are two examples). As occasionally happens the digerati spread an idea that wasn’t true. I know this isn’t shocking, but to be fair to all the bloggers on this one, the University hardly helped its case when it produced this video as a companion piece to explain the new identity:
“Designers too often judge logos separate from their system…without understanding that one can’t function without the other,” criticized Paula Scher when I asked her views on the controversy, “It’s the kit of parts that creates a contemporary visual language and makes an identity recognizable, not just the logo. But often the debate centers on whether or not someone likes the form of the logo, or whether the kerning is right.” While acknowledging that all details are important, Scher also calls these quibbles “silly.” “No designer on the outside of the organization at hand is really qualified to render an informed opinion about a massive identity system until it’s been around and in practice for about a year,” she explains, “One has to observe it functioning in every form of media to determine the entire effect. This [was] especially true in the UC case.”
Which I mostly agree with. Logos don’t exist outside the system (for the most part) and, even more importantly, they don’t exist outside the collective consciousness they grow up in. This is something I got in quite a few arguments about while I was running Brand Tags. I would get an email from a company no one had ever heard of asking for me to post their logo, to which I invariably responded “no”. My reasoning, as I explained at the time, was that the point of the site was to measure brand perception and for people to have a perception, you need a brand, which you don’t have if no one knows who you are. Brands, as I’ve expressed in the past, live in people’s heads. They are the sum total of perceptions about them.
This is part of what makes it so tough to judge any sort of logo: Lack of context. Even if you see the way the system works, you don’t have the rest of the context that would come with experiencing it in the wild. If you’re a high school senior and the new UC logo is on a sweatshirt worn by the girl you had a crush on who’s home for her freshman Christmas break, it’s going to have a very different meaning than if your first encounter is in the U.S. News & World Report list of top US universities. Context shapes experience and we can’t forget that.
Which makes something Simmons writes later so confusing for me:
Design as a discipline is challenged by this notion of democracy, particularly in a viral age. We have become a culture mistrustful of expertise—in particular creative expertise. I share [UC Creative Director] Correa’s fear that this cultural position stifles design as designers increasingly lose ownership of the discourse. “If deep knowledge in these fields is weighed against the “likes” and “tastes” of the populace at large,” she warns, “We will create a climate that does not encourage visual or aesthetic exploration, play or inventiveness, since the new is often soundly refused.”
Most of the article, actually, blames the public (and designers specifically) for the way they misinterpreted and criticized the logo. That misinterpretation, however, is at least in part due to the context people experienced the logo in. It’s near impossible, for instance, not to walk away from that introductory video, which the University itself produced, believing that the logo is replacing the seal. Design, I’d posit, is about far more than the logo or even the system: it’s the story that exists around the brand as a whole, and the designer is, at least in part, responsible for how that story is told. I agree with part of what’s written above: Design is a tough discipline because everyone has an opinion. But that’s not really new and it’s been lamented to death. People know what fonts are and many have heard of kerning or played with Photoshop. This is just the reality we live in. We can choose to ignore that reality and think we can put things out in the world without hearing from the many people who are “unqualified” to have opinions, or we can acknowledge it and try to spend as much time thinking about the context people first experience new identities in as we spend on the identities themselves. It’s not a simple solution, but it’s a whole lot more sustainable.
Finally, we need to recognize that in this new world we all live in, where everyone has an opinion about everything (let’s not pretend that design is the only victim of this reality), it’s going to be harder than ever to stand behind convictions. On the one hand this can mean “a climate that does not encourage visual or aesthetic exploration, play or inventiveness,” as the UC Creative Director says, or it can mean that we need to do more to educate everyone involved in the decision-making process about what’s to come. We need to help them understand the design process, the effect of context and the potential for backlash (along with our plan for how to deal with it).
One additional note before I start my list: To make this process slightly simpler next year I’ve decided to start a Twitter feed that pulls from my Instapaper and Readability favorites. You can find it at @HeyItsInstafavs. Okay, onto the list.
Raise the Crime Rate (n+1): This article couldn’t be more different from the first. Rather than narrative non-fiction, this is an interesting, and well-presented, argument for abolishing the prison system. The basic thesis of the piece is that we’ve made a terrible ethical decision in the US to offload crime from our cities to our prisons, where we let people get raped and stabbed with little-to-no recourse. The solution presented is to abolish the prison system (while also increasing capital punishment). Rare is the article that you don’t necessarily agree with but walk away talking and thinking about. That’s why this piece made my list. I read it again last week and still don’t know where I stand, but I know it’s worthy of reading and thinking about. (While I was trying to get through my Instapaper backlog I also came across this Atul Gawande piece from 2009 on solitary confinement and its effects on humans.)
Open Your Mouth & You’re Dead (Outside): A look at the totally insane “sport” of freediving, where athletes swim hundreds of feet underwater on a single breath (and often come back to the surface passed out). This is scary and crazy and exciting and that’s reason enough to read something, right?
Jerry Seinfeld Intends to Die Standing Up (New York Times): I’ve been meaning to write about this but haven’t had a chance yet. Last year HBO had this amazing special called Talking Funny in which Ricky Gervais, Chris Rock, Louis CK and Jerry Seinfeld sit around and chat about what it’s like to be the four funniest men in the world. The format was amazing: Take the four people who are at the top of their profession and see what happens. But what was especially interesting, to me at least, was the deference the other three showed to Seinfeld. I knew he was accomplished, but I didn’t know that he commanded the sort of respect amongst his peers that he does. Well, this Times article expands on that special and explains what makes Seinfeld such a unique comedian and such a careful crafter of jokes. (For more Seinfeld stuff make sure to check out his new online video series, Comedians in Cars Getting Coffee, which is just that.)
The Malice at the Palace (Grantland): I would say as a publication Grantland outperformed just about every other site on the web this year and so this pick is part acknowledgement of that and part praise for a pretty amazing piece of reporting (I guess you could call an oral history that, right?). Anyway, this particular oral history is about the giant fight that broke out in Detroit at a Pacers v. Pistons game that spilled into a fight between the Pistons and the Detroit fans. It was an ugly mark for basketball and an incredibly memorable (and insane) TV event. As a sort of aside on this, I’ve been casually reading Bill Simmons’ Book of Basketball and in it he obviously talks about this game/fight. In fact, he calls it one of his six biggest TV moments, which he judges using the following criteria: “How you know an event qualifies: Will you always remember where you watched it? (Check.) Did you know history was being made? (Check.) Would you have fought anyone who tried to change the channel? (Check.) Did your head start to ache after a while? (Check.) Did your stomach feel funny? (Check.) Did you end up watching about four hours too long? (Check.) Were there a few ‘can you believe this’–type phone calls along the way? (Check.) Did you say ‘I can’t believe this’ at least fifty times?” I agree with that.
And, like last year, there are a few that were great but didn’t make the cut. Here are two more:
Snow Fall (New York Times): Everyone is going crazy about this because of the crazy multimedia experience that went along with it, but I actually bought the Kindle single and read it in plain old black and white and it was still pretty amazing. Also, John Branch deserves to be on this list because he wrote something that would have made my list last year had it not come out in December: Punched Out is the amazing and sad story of Derek Boogaard and what it’s like to be a hockey enforcer.
Marathon Man (New Yorker): A very odd, but intriguing, “expose” on a dentist who liked to chat at marathons.
Before I left for my trip to Asia I went to see Zero Dark Thirty, the movie about the hunt for, and ultimately killing of, Osama Bin Laden. Before, and after, seeing it I had read quite a bit about the raid, the movie and the controversy around both. I thought maybe it would be worth collecting all this stuff into a post, so that’s what I’m doing.
My guess is that much of the fascination with this film is inspired by the unveiling of facts, unclearly seen. There isn’t a whole lot of plot — basically, just that Maya thinks she is right, and she is. The back story is that Bigelow has become a modern-day directorial heroine, which may be why this film is winning even more praise than her masterful Oscar-winner “The Hurt Locker.” That was a film firmly founded on plot, character and actors whose personalities and motivations became well-known to the audience. Its performances are razor-sharp and detailed, the acting restrained, the timing perfect.
In comparison, “Zero Dark Thirty” is a slam-bang action picture, depending on Maya’s inspiration. One problem may be that Maya turns out to be correct, with a long, steady build-up depriving the climax of much of its impact and providing mostly irony. Do we want to know more about Osama bin Laden and al Qaida and the history and political grievances behind them? Yes, but that’s not how things turned out. Sorry, but there you have it.
One thing that I found particularly interesting in the film was the very short sequence on the doctor who had gone around Abbottabad under the cover of a vaccination campaign while actually collecting DNA. I remembered reading about him in the original New Yorker account of the raid and thought it had made clear he had been successful in collecting DNA evidence (it turns out the article says he wasn’t, the same way it’s presented in the film). January’s GQ has a longer account of what happened to the doctor who helped the CIA and tries to get at whether he was successful in his mission. (The answers: He was tortured/imprisoned by the Pakistani government for assisting the Americans and, as to whether he got evidence, it’s still unclear.)
If you’re interested in more reading on the subject, No Easy Day, an account by a Navy SEAL on the mission, is a fast and interesting read. And although I haven’t read it, my friend Colin Nagy highly recommends The Triple Agent, which covers what happened at Khost, where a Jordanian triple agent beat CIA intelligence and security to bomb a military base and kill a sizable group of CIA operatives (there’s a scene in Zero Dark Thirty about it, though the film offers no real depth on what happened).
My sister sent me this link to the ten best Muppet Christmas moments and it was conspicuously missing my all-time favorite Muppet moment from A Muppet Family Christmas. All the Muppets turn up at Fozzie’s mom’s house for Christmas even though Doc (from Fraggle Rock) was renting it as a quiet escape. As Bert and Ernie come in this conversation happens between the three of them:
Ernie: Oh, hi there, we’re Ernie and Bert.
Doc: Well, hi there yourself, I’m Doc.
Bert: Oh, did you know that Doc starts with the letter D?
Doc: Why, yes.
Ernie: Yes! Yes, starts with the letter Y.
Doc: True.
Ernie: And true starts with the letter T.
Doc: What is this?
Bert: Where we come from this is small talk.
Just got back from a few days in London and there were two random thoughts I’ve wanted to share. Neither are new, but they popped into my head during this trip and I thought, “maybe I should blog about those,” so here we are.
Thing #1: We all know they drive on the left side of the road in the UK. This isn’t surprising anymore. What is surprising, to me at least, is every time you encounter a situation where pedestrian traffic is routed to the right. For instance, on all the escalators in the Tube, signs tell you to stand to the right and pass on the left. This is what we do in the US, which makes it seem very wrong in the UK. Also, when you walk the streets in New York it’s a fairly standard rule that traffic stays right. In the UK I feel like you constantly see people on both sides of the sidewalk walking in both directions. All of this makes me think that people naturally want to stay to the right (probably because most are right-handed). I have no idea whether this is true or not (I’m also not sure whether British folks will find this offensive, in which case I apologize). I just think you’ve got to pick one and stick to it. You wouldn’t find a random escalator or walkway in a high-traffic zone in the US where there are signs directing traffic to stay left.
Thing #2: One of the things I really like about London is how much ground floor commercial space there is. In New York City the ground floor is almost entirely retail and office work happens somewhere between the 2nd and 100th floor. I’m not sure why I like looking in at people working, but there’s something really interesting about walking past an office window during the day. It’s just not a view you really get in New York. (I’d say this has something to do with the fact that we’re looking for a new office so I’m especially keen to see how others deal with their space, but this has fascinated me since well before I started a company.)
After last year’s NBA playoffs I got really into the NBA. I attribute it to two big things: First, the busier I am at work the more I want to just go home and veg out and the NBA makes it easy with things to watch every night and second, this season (and last year’s playoffs) is just good basketball.
Anyway, there’s a movement in the NBA (and every sports league at this point) around “advanced metrics”. It’s each league’s attempt to apply Moneyball principles to its sport. In basketball a big part of the point of these types of metrics is to answer the question of how much points are really worth. This is because the public gives an outsized amount of attention to guys who score a lot and not to how they actually get their scoring done (in other words, is someone who scores 30 points on 10 of 15 shooting better than someone who scores 40 points on 15 of 35 shooting?). (If you’re bored of this now you can drop off, I won’t be offended.)
A site I enjoyed called The NBA Geek put together a nice primer on this question (and the point of advanced metrics generally). The point he makes is that each missed shot has a price and we need to take that into account in the same way we count the made ones. Regardless of the method of counting you use, you’ve got to be able to accept that basic idea. He sums it up like this:
But one thing is clear, to me at least: just because a player has great talent and is clearly capable of creating easy scoring opportunities, this does not make their bad shots “valuable”. The simple fact is, Carmelo Anthony would be a more productive player if he simply stopped taking shit shots; so would Russell Westbrook. The idea that the bad shots that these players take create value for their team has no basis in evidence at all (nor is there any evidence that these players are reluctant shooters who are shooting so much because “someone has to take the shots”). You can choose to disagree with me on that, but it’s rather like disagreeing with me about evolution and creationism — as far as I’m concerned, prove it or move it.
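To make the missed-shots-have-a-price idea concrete, here’s a quick back-of-the-envelope comparison using the two hypothetical scorers from above. This deliberately ignores three-pointers, free throws, and offensive rebounds, so it’s a sketch of the basic logic, not the actual advanced-metric math:

```python
# Each shot attempt uses up a possession, so a miss isn't free.
# Compare the two hypothetical players: 30 points on 15 attempts
# vs. 40 points on 35 attempts.
def points_per_attempt(points, attempts):
    """Crude efficiency measure: points scored per shot attempt."""
    return points / attempts

efficient_scorer = points_per_attempt(30, 15)  # 10-of-15 shooting
volume_scorer = points_per_attempt(40, 35)     # 15-of-35 shooting

print(round(efficient_scorer, 2))  # 2.0 points per attempt
print(round(volume_scorer, 2))     # 1.14 points per attempt
```

The volume scorer put up 10 more points but spent 20 more possessions to do it, which is exactly the cost the box score hides and the advanced metrics try to surface.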
I know I post these every so often, but today we announced that we’ve raised a $9 million Series A. This is a big number and what it means most is that Percolate is very much hiring. We’re pretty much hiring across the board, but here’s a quick rundown of the current open positions on the site:
Account Executive: This is the title we have for our more senior sellers. The job is about getting in front of Fortune 500 brands and helping them understand the value of Percolate.
Engineer: We’re hiring for both Jr. & Sr. engineers (as well as frontend). We are a technology company first-and-foremost and hiring the best engineers is part of what we need to do to succeed.
Designer: We have a top-notch design team here and really believe that the product is dependent on keeping that quality as high as possible.
For some reason the article focused on how sponsors can affect the behavior of the athletes. This is sort of interesting, but pretty far from the real story of NASCAR sponsorships. While the business of NASCAR is struggling for a bunch of reasons (financial meltdown, arms race in technology raising the cost of fielding competitive teams, more competition than ever for ad dollars), what makes it work has not changed. When a brand buys into a NASCAR sponsorship (which goes for ~$20 million for a full season), they are buying two big things: Loyalty and activation opportunities.
Let’s start with loyalty. This is what the article really misses. When brands sponsor NASCAR they get a real understanding from the fans that they are responsible for the car on the track. The drivers get it, the teams get it and the fans get it. This is hugely different from slapping your logo on something (whether it’s soccer, where it’s displayed in giant form on the player’s belly, or basketball, where they seem to be thinking about some little sponsorship patch). No one in those sports thinks the sponsor is responsible for the team, in the same way no one will ever walk into a Brooklyn Nets game and say “thank you Barclays for making this possible.”
The numbers in NASCAR back this up. I used to have them handy, but the league and teams generally trot out a figure of 80%+ fan loyalty to their driver’s sponsor. If Jimmie Johnson is your guy you go to Lowe’s, not Home Depot. That’s just how it works.
Okay, onto activation. Take a look at the official sponsors of NASCAR teams and you see a few different kinds of companies: Car-related companies (NAPA, Shell, Mobil 1), CPG (Budweiser, Mars, Miller Lite) and a lot of retail/franchise businesses (Burger King, Target, GEICO, Farmers Insurance, Home Depot, Lowes, Office Depot). The first set is obvious, the average NASCAR fan likes cars and car-related stuff. The second is about audience as NASCAR skews heavily male and sometimes guys are hard to reach. The last, though, is the most interesting to me.
What all these companies have in common is lots of employees (you could throw FedEx in this group too, and UPS was a long-time sponsor of the sport). One of the more interesting things about how brands actually utilize their sponsorship is that they run fully integrated programs, using a sponsorship to reach not just consumers but also employees. Target, Home Depot and Lowes have 900,000 combined employees (365,000, 331,000 and 204,000 respectively). That’s a lot of people to keep happy. One of the ways they do it is to give them something to root for. It’s not shocking, or even all that interesting, it just sort of makes sense and means that the investment is offset across a few different departments.
Anyway, I don’t have a real conclusion to all this, just felt like writing a little bit about what I know about NASCAR. Hopefully it was relatively interesting.
I wrote this magazine piece back in 2009 when I was first delving into privacy issues in the digital age. It was published in 2010 in the Assembly Journal. However, a Twitter user recently pointed out to me that the piece is no longer online… which is rather sad for a piece about online privacy. “Confessions of an Online Stalker” was the headline my editors chose. I would have named it “Confessions of a Digital Lurker.” Here it is in all of its dated glory.
At the time I actually wrote a response to her piece which was also published in the magazine, and thus is also now missing from the web. Since Kashmir, the author, has reposted her piece I thought it might also be a good idea to repost my response:
The last issue of the magazine featured a piece titled Confessions of an Online Stalker. Its author, Kashmir Hill, “stalked” me, collecting all the information publicly available on the web about my life and presenting me with my dossier over a cup of coffee in Soho. Included were some basic facts (age and address), interests (most-listened to songs and books on my Amazon wish-list) and the occasional tidbit that was unknown to me (the value of my parents’ house, for instance).
When I was asked to write a response, I wasn’t sure one was warranted. The article actually captures my reaction fairly well. I wasn’t all that surprised about any of the information the author dug up, as I could identify the source of almost all her data points. And while it certainly is a bit uncomfortable to see them (or hear them) together, given the motive of the exercise, it was not all that frightening. But there is a bit of context I’d like to add: it’s the sort of story that raw data doesn’t always tell.
I work and live on the web. I play with just about every new site I can get my hands on and post a fair amount of information that I don’t consider to be particularly personal about myself. I started a blog six years ago because I was writing for a magazine and found I had more to say than could fit in my 2,500-word monthly limit. I explored the medium and posted things that I now look back on and smack myself in the head over because of their asininity. But back then, as well as now, my job was to understand, or at least to have an opinion on, the state of digital media, on how and why people use the web.
But all of that sounds much more clinical than the reality of the situation. It’s been my opinion for some time that by putting things out into the world for public view, I’ve made my life more interesting (mostly by the friends that content has connected me to). In fact, I met my wife because of my blog. Let me explain.
On July 12, 2006 I wrote an entry asking if anyone from my blog world wanted to meet up in New York and have coffee. I got one response from a guy named Piers who ran (and still runs) a trend blog called PSFK. From there we developed an idea for a coffee meetup we decided to call likemind. About a month later, after holding two likeminds, a blogger in London named Russell Davies wrote a post praising the idea. In the comments to that post, a woman named Johanna mentioned that she was moving to New York City and was excited to go to likemind. Attached to her comment was her url, which I followed to an email address that I used to welcome her to the city and invite her to likemind. Three months later, when I was on the hunt for a new job, I mentioned it to Johanna, who had since moved north, attended a few likeminds and become a friend. She suggested that I come speak to the folks at the company she worked for: Naked Communications, a marketing strategy firm that was started in London. I went for it and two months later (it’s February, 2007 at this point) I announced I was joining the company as a strategist. I became friends with, and later started dating, Leila Fernandes, another strategist at the company. Two months ago we were married in Queens. Johanna helped us celebrate.
All of that is a long way of saying I see a lot of value in the sharing of information online. I am not in the camp that believes technology is pulling us apart, but rather that it offers us never-before-possible opportunities to come together and meet people we'd otherwise never have a chance to meet. I also don't reside on the side that argues privacy is dead. While the author was able to collect a lot of information on me, there wasn't much in there I hadn't chosen to post myself with an understanding of the implications (not to mention the vast majority of it could have been collected in the pre-web days, albeit in a much more time-consuming manner).
Privacy isn’t a technological binary that you turn off and on. Privacy is about having control of a situation. It’s about controlling what information flows where and adjusting measures of trust when things flow in unexpected ways. It’s about creating certainty so that we can act appropriately. People still care about privacy because they care about control. Sure, many teens repeatedly tell me “public by default, private when necessary” but this doesn’t suggest that privacy is declining; it suggests that publicity has value and, more importantly, that folks are very conscious about when something is private and want it to remain so. When the default is private, you have to think about making something public. When the default is public, you become very aware of privacy. And thus, I would suspect, people are more conscious of privacy now than ever. Because not everyone wants to share everything to everyone else all the time.
The control Boyd was referring to is probably slightly easier for me than most. When something happens like Facebook's latest changes to their privacy settings, about thirty of the hundreds of blogs and other news sources I subscribe to write in-depth stories on the implications. Within hours of the changes I had been to the new settings page and tweaked everything to my liking, including deciding to keep certain information out of the public eye. I recognize this is not the norm, but it's this kind of awareness that shapes my views on the sharing of information.
At the end of the day a breach of privacy requires some reasonable expectation that something would be kept private. Not only did I not have that expectation, but for much of the information I put on the web I hope for exactly the opposite.
[Editor’s Note: I try not to do these often, but since lots of you are from in and around the marketing industry I thought I’d post this job here as well.]
We’re hiring a brand strategist at Percolate (amongst other positions). The role isn’t to be a planner in the way you would be in an agency, but rather to take those same skills and help onboard clients, help them understand content opportunities/how to use Percolate best and help build out products that can help systematize parts of the brand’s content strategy. Basically we’re looking for someone who really understands how brands work, isn’t afraid to go in front of a client and present and has a mind for making products (which is essentially about looking at what you’re doing by hand and thinking about how to translate that into something that can be done repeatedly by computers).
This is a pretty good job for someone who has worked at an agency and wants to go try something different. I don't want someone so senior that they've forgotten how to dig in and actually do work (not that there's anything wrong with that, but we've all run into those folks and they're not so helpful to have around). It's a full-time gig. I'd say the salary is mid-level, but it also includes equity (like all jobs at Percolate).
While I'm here and talking about jobs I should also mention that we're hiring for a few other positions as well, and if you recommend someone for any of these and they get hired I'll buy you an iPad (this is a NoahBrier.com offer only, so make sure you mention it):
Backend developer: If you know someone who writes good code we want to talk to them. We do our stuff in Python, but if they’re awesome we’ll talk.
Sales: We’re looking for people who can go in and help us tell the story of Percolate and really help us sell. We’re building an awesome team and a great culture around sales. I need to write a whole blog post about this, but watching the sales team build out their processes is a pretty amazing thing.
Felix, as is frequently the case, disagrees: “Lehrer shouldn’t shut down Frontal Cortex; he should simply change it to become a real blog. And if he does that, he’s likely to find that blogs in fact are wonderful tools for generating ideas, rather than being places where your precious store of ideas gets used up in record-quick time.” What’s more, he dives in on a few suggestions for what to do with the blog and in turn makes some really interesting comments about blogging generally. I especially like his first point:
Firstly, think of it as reading, rather than writing. Lehrer is a wide-ranging polymath: he is sent, and stumbles across, all manner of interesting things every day. Right now, I suspect, he files those things away somewhere and wonders whether one day he might be able to use them for another Big Idea piece. Make the blog the place where you file them away. Those posts can be much shorter than the things Lehrer’s writing right now: basically, just an excited “hey look at this”, with maybe a short description of why it’s interesting. It’s OK if the meat of what you’re blogging is elsewhere, rather than on your own blog. In fact, that’s kind of the whole point.
I always thought of this blog as a thing I use to think out loud. It doesn’t overwhelm me because it helps me think through ideas (and in turn create new ones).
This is a cross-post from the Percolate blog. I try not to do this too often, but when it seems like it will be worth sharing I’ll go for it. If it’s annoying let me know and I’ll stop.
We talk a lot around here about the idea that you must consume content to create content, and I wanted to share a little anecdote that I've been using in presentations lately.
When Twitter first launched, the big joke was that it was a place where people shared what they had for breakfast. Twitter fought tooth and nail against this idea, trying to explain that the service was actually much more serious than that.
But it’s not.
And that’s not a bad thing.
The way I see it, Twitter is just a big platform of what we had for breakfast. Except it’s not food, it’s what we ate on the web. A large proportion of Tweets have a link in them and those links are to whatever that person consumed moments before. It might be a Huffington Post article for breakfast or a YouTube video for lunch, but it’s still just what we ate. We are turning consumption into production.
My friend Grant McCracken wrote about social as exhaust data a few years ago and I think that’s a really nice way to think about it. Essentially what we’re seeing is a digested view into the lives of people and (increasingly) brands. Their social footprint is just that: a footprint. It’s the thing they leave behind after they take a step.
In lieu of actually writing something interesting (which I haven’t done in a while), I’ve decided to release a 70% done project. It’s called Brand Tags and the idea is simple: You tag brands with the first thing that comes to mind. The idea came to me as I was working on my Brand vs. Utility presentation a few months ago. The thinking went something like this: If brands exist as the sum of all thoughts in someone’s head, then if you ask a bunch of people what a brand is and make a tag cloud, you should have a pretty accurate look at what the brand represents.
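The mechanic is simple enough to sketch in a few lines of Python. This is an illustrative aggregation only, not the actual Brand Tags code; the function name and normalization choices are mine:

```python
from collections import Counter

def tag_cloud(responses):
    """Aggregate free-text brand tags into (tag, count) pairs, most
    common first -- the 'sum of all thoughts' about a brand made visible.
    Tags are lowercased and stripped; blank responses are ignored."""
    counts = Counter(
        tag.strip().lower() for tag in responses if tag.strip()
    )
    return counts.most_common()

# Hypothetical responses for one brand:
print(tag_cloud(["Search", "search", "maps", "  "]))
# -> [('search', 2), ('maps', 1)]
```

A real version would need per-brand storage and some spam filtering, but the core idea really is just a frequency count rendered as a cloud.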
What happened after was all a bit of a whirlwind. There are about 30 comments on that post and the experiment ended up getting a lot of press (including an NPR interview, which is still the coolest media moment I’ve ever had). It was exciting and amazing and taught me lots about building a product and how people think about brands.
Two years later things had died down pretty significantly, partly out of my own interest waning and partly out of my inability to keep the scale of responses high without a steady supply of press. At that time I was approached by Ari Jacoby, who was working on a new company called Solve Media, which asked consumers to type in a brand message instead of a bunch of squiggly letters in a CAPTCHA. Solve was interested in buying Brand Tags and was excited about offering up the tagging input across the web as part of its CAPTCHA program. I was excited to see my baby get a new life (and, obviously, to also get some money for what I had built).
We struck a deal and I became a shareholder in Solve. I also got some more confidence in my bank account, eventually leading me to make the leap to startup life and Percolate.
Enter Solve, which took some time to think about the best implementation of Brand Tags and then started building up the database of brand descriptions by rolling out this type of Captcha to 0.25% of its Captcha inventory — enough to generate tens of thousands of user-generated responses about a brand a day, Mr. Jacoby said. “We can get an unusually large sample size overnight,” he said. The premise of Brand Tags is that a consumer’s perception of a brand is in fact reality, and one that could help measure the effectiveness of brand advertising online.
From our e-mail providers to our mobile-phone carriers, most companies’ business models are too lucrative to risk by mishandling our personal information and angering the consumer. So it is safe to say that despite the many potential risks represented by the volumes of data available, our past is relatively well safeguarded.
Many economists regard brands as a good thing, however. A brand provides a guarantee of reliability and quality. Consumer trust is the basis of all brand values. So companies that own the brands have an immense incentive to work to retain that trust. Brands have value only where consumers have choice. The arrival of foreign brands, and the emergence of domestic brands, in former communist and other poorer countries points to an increase in competition from which consumers gain. Because a strong brand often requires expensive advertising and good marketing, it can raise both price and barriers to entry. But not to insuperable levels: brands fade as tastes change; if quality is not maintained, neither is the brand.
A brand is a promise: The more valuable it is, the less a company can afford it to be broken.
I wonder, though, whether that’s as true now as it was in earlier times. The example I’ve heard most often for thinking about brands this way is that they keep companies from killing their customers. You pay more for a Pepsi than some random house brand because you know it won’t be poisoned (you also know it will always taste the same). But something seems to be changing, especially with digital brands. Maybe it’s that there’s more of them or maybe we have far lower expectations, but I feel like large brands frequently have data breaches or other terrible things happen and we forgive them in a way that doesn’t really jibe with the two paragraphs above.
If we don’t hold our brands responsible, the very meaning of brand changes. Part of it is that it’s easier to show outrage than it ever was, so when people get up in arms about Facebook’s latest privacy change I suspect it’s not real. Part of it may be the insanity of the news cycle: TJ Maxx loses millions of credit cards and it’s only a big deal for a day. But none of it explains how a bunch of banks that nearly sunk the economy are able to bounce back (except, maybe, regular brand laws don’t apply to oligopolies).
No matter what, something is different and it’s important that we understand what it means.
Gamification is awful for many reasons, not least in the way it seeks to transform us into atomized laboratory rats, reduce us to the sum total of our incentivized behaviors. But it also increases the pressure to make all game playing occur within spaces subject to capture; it seeks to supply the incentives to make games not about relaxation and escape and social connection but about data generation. The networked mediation of games — in other words, playing them on your phone or through Facebook — undermines the function of games in organizing face-to-face social time, guaranteeing presence in an unobtrusive way. Instead we typically take our turn in mediated games on our time and play multiple games at once, to cater to our convenience and our desire to be winning at least one of them.
Which reminded me a lot of this article from late last year about Cow Clicker, a satire of games like Farmville that against the designer Ian Bogost’s hopes actually became popular itself. Here’s how Cow Clicker worked:
The rules were simple to the point of absurdity: There was a picture of a cow, which players were allowed to click once every six hours. Each time they did, they received one point, called a click. Players could invite as many as eight friends to join their “pasture”; whenever anyone within the pasture clicked their cow, they all received a click. A leaderboard tracked the game’s most prodigious clickers. Players could purchase in-game currency, called mooney, which they could use to buy more cows or circumvent the time restriction. In true FarmVille fashion, whenever a player clicked a cow, an announcement—”I’m clicking a cow“—appeared on their Facebook newsfeed.
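The rules above are simple enough to capture in a toy model. This is my own illustrative sketch of just the core click-and-timer mechanic, not Bogost's actual code, and it leaves out pastures, mooney, and the Facebook announcements:

```python
from datetime import datetime, timedelta

SIX_HOURS = timedelta(hours=6)

class CowClicker:
    """Toy model of Cow Clicker's core rule: one click per six hours,
    each successful click worth one point (a 'click')."""

    def __init__(self):
        self.clicks = 0        # total points earned
        self.last_click = None # timestamp of the most recent click

    def click(self, now: datetime) -> bool:
        """Attempt a click at time `now`; return whether it counted."""
        if self.last_click is not None and now - self.last_click < SIX_HOURS:
            return False  # too soon -- the six-hour timer hasn't elapsed
        self.clicks += 1
        self.last_click = now
        return True

game = CowClicker()
t0 = datetime(2010, 7, 1, 12, 0)
print(game.click(t0))                          # first click counts
print(game.click(t0 + timedelta(hours=1)))     # blocked by the timer
print(game.click(t0 + timedelta(hours=6)))     # timer elapsed, counts
```

That the entire game fits in a couple dozen lines is, of course, the satirical point.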
And what happened next:
And then something surprising happened: Cow Clicker caught fire. The inherent virality of the game mechanics Bogost had mimicked, combined with the publicity, helped spread it well beyond its initial audience of game-industry insiders. Bogost watched in surprise and with a bit of alarm as the number of players grew consistently, from 5,000 soon after launch to 20,000 a few weeks later and then to 50,000 by early September. And not all of those people appeared to be in on the joke. The game received its fair share of five-star and one-star reviews from players who, respectively, appreciated the gag or simply thought the game was stupid. But what was startling was the occasional middling review from someone who treated Cow Clicker not as an acid commentary but as just another social game. “OK, not great though,” one earnest example read.
As a developer whose independent success has emancipated him from the grip of the monolithic game corporations, Blow makes a habit of lobbing rhetorical hand grenades at the industry. He has famously branded so-called social games like FarmVille “evil” because their whole raison d’être is to maximize corporate profits by getting players to check in obsessively and buy useless in-game items. (In one talk, Blow managed to compare FarmVille’s developers to muggers, alcoholic-enablers, Bernie Madoff, and brain-colonizing ant parasites.) Once, during an online discussion about the virtues of short game-playing experiences, Blow wrote, “Gamers seem to praise games for being addicting, but doesn’t that feel a bit like Stockholm syndrome?” His entire public demeanor forms a challenge to the genre’s intellectual laziness.
Now I’m not sure how I feel about any of this really. I’ve found myself trapped by games, unable to put down the controller until my hands were so sore I was worried about doing permanent damage. I’m not proud of the fact that I was totally obsessed with Ski Safari (I’ve almost broken the habit). I think it’s good that there is another side to the endless “games are great” conversation (other than the side that says the people who talk about gamification are dumb). Not sure I have more of an answer than that at the moment.
One more thing from the article about Blow before I’m done. I particularly liked this explanation of how video games are really like movies. We frequently talk about how when a new medium is created the first thing people try to do is recreate the old medium. It’s logical, and while the examples people trot out are familiar (the first TV broadcasts were just radio performed in front of a camera), it’s never really well explained. I thought this was pretty good:
Blow’s refusal to explain the meaning of his games, after all, stems from a profound respect for his art. Ever since modern technology first made sophisticated video games possible, developers have assumed that the artistic fate of the video game is to become “film with interactivity”—game-play interwoven with scenes based on the vernacular of movies. And not just any movies. “The de facto reference for a video game is a shitty action movie,” Blow said during a conversation in Chris Hecker’s dining room one sunny afternoon. “You’re not trying to make a game like Citizen Kane; you’re trying to make Bad Boys 2.” But questions of movie taste notwithstanding, the notion that gaming would even attempt to ape film troubles Blow. As Hecker explained it: “Look, film didn’t get to be film by trying to be theater. First, they had to figure out the things they could do that theater couldn’t, like moving the camera around and editing out of sequence—and only then did film come into its own.” This was why Citizen Kane did so much to put filmmaking on the map: not simply because it was well made, but because it provided a rich experience that no other medium before it could have provided.
I’ll leave you with that. Lots of thoughts about video games. No answers.
Yesterday I wrote about David Grann’s amazing New Yorker essay on William Morgan, an American revolutionary in Cuba. While I was reading I remember thinking to myself, “that’s a great sentence, I should blog that,” but then I couldn’t find it again when I finished (I should have just underlined it in the magazine). Anyway, it came back to me last night and I wrote myself a note (only to not be able to find that … can’t figure out which of my three different self-organization systems I sent it to). On rummaging around I just found it again. (Italics are mine to denote the sentence I’m particularly fond of.)
Hoover and his men tried to detect a hidden design in the data they were collecting. They were witnessing history without the clarity of hindsight or narrative, and it was like peering through a windshield lashed with rain. As Hoover confronted the gaps in his knowledge, he became more and more obsessed with Morgan. A former fire-eater at the circus! Hoover hounded his evidence men to “expedite” their inquiries, homing in on Morgan’s ties to Dominick Bartone. The mobster, whom the bureau classified as “armed and dangerous,” had recently been arrested with his associates at Miami International Airport, where they had been caught loading a plane with thousands of pounds of weapons—a shipment apparently destined for mercenaries and Cuban exiles being trained in the Dominican Republic.
Was poking around my Kindle highlights (looking to see if there was a way to export them easily) and I ran across a quote from Zlatan Ibrahimovic’s biography “I Am Zlatan”. I was going to post that and then I thought, maybe I should just post lots of sports stuff in one big post, so that’s what I’m doing. No rhyme or reason here, just some interesting sports-related stuff I’ve run into lately.
First the quote from Zlatan on a player’s relationship with their team:
The management owned my flesh and bones, in a sense. A footballer at my level is a bit like an orange. The club squeezes it until there’s no juice left, and then it’s time to sell the guy on. That might sound harsh, but that’s how it is. It’s part of the game. We’re owned by the club, and we’re not there to improve our health; we’re there to win, and sometimes even the doctors don’t know where they stand. Should they view the players as patients or as products in the team? After all, they’re not working in a general hospital, they’re part of the team. And then you’ve got yourself. You can speak up. You can even scream, this isn’t working. I’m in too much pain. Nobody knows your body better than you yourself.
A superstar gives your team a five-point edge by being on the court. With this scale in hand, let’s point something out. LeBron James has played 10 playoff games so far this season. In 4 of them, he’s put up a PoP of +10!
I haven’t written a ton about starting Percolate, partly because I don’t want this to become a place where I just promote what I’m up to and partly because I’ve been so busy I haven’t had a lot of time to write (as I’m guessing you’ve noticed).
Well, now I’m on a train and I forgot my Verizon card at my last meeting and I decided it would be a good chance to get some things down. These are a bunch of random thoughts, as much for my own safekeeping as sharing.
Before I start, a bit of an update on Percolate: We have 15 people, our own office and a healthy roster of Fortune 500 clients. James (my co-founder) and I started the company last January (2011). Alright, onto the thoughts …
One of the funny things about starting a company (and growing it) is the milestones you set for yourself (or discover as you go). There are the obvious ones (first employee, first client, first check in the bank), but then there are the less obvious ones like first office (alright, maybe that’s an obvious one) and first employee who relocated to come work for you (we passed that one recently). Every time we hit one of these it’s a moment to reflect and think about how crazy the whole process of starting a company really is.
I’ve written this before, but it bears repeating: I can’t imagine EVER starting a company without a co-founder. I can’t recommend it highly enough to anyone thinking about being an entrepreneur. As far as choosing your co-founder, I think there are a bunch of factors that have led to a really strong relationship between James and myself, including: a lot of respect for each other, clear roles (but also enough respect that when we move outside those roles it’s accepted) and an ability to disagree and be stronger for it (I wrote a short post about this but I think it’s hugely important; if you can’t argue productively with your co-founder, you shouldn’t start a company with them). There are lots of others, but those top my list.
There is a fundamental difference between being a person running a company and being an employee. As the one in charge, your singular goal is to keep the company evolving (at least it’s true of a technology startup). Stasis equals death. You want your company to look totally different tomorrow than it does today. If you’re an employee, you often want the opposite: You like where you came to work and you want that company to stay the same. I’m not sure how to resolve this disconnect and I never recognized it until starting Percolate.
Recruiting, Marketing & Press
All three of these happen all the time. They don’t ever stop and we’re going to make sure they remain that way even when the team performing these roles moves past just James and myself.
A Little Disagreement Is a Good Thing
Teams shouldn’t always agree about everything. Having different perspectives is ultimately what’s going to force things to be stronger. Understanding the roles different folks on the team play (and helping them understand those roles) is really important.
I never did a whole lot of managing before I got to Percolate. I thought it was pretty fine to let people do their job and support them when they needed it. James introduced a bunch of ideas to me around being more active and it’s a strategy we’ve been trying to live as much as possible at Percolate. We set quarterly goals with each employee and meet at the end of the three months to grade them together. We have weekly meetings and do monthly surveys of employee satisfaction. None of this stuff is perfect and hopefully it will all evolve (especially as we continue to grow), but it has really helped me understand the value of a more active management approach.
I’m sure there’s lots more, but that’s what’s coming to mind right now. Hope this is somewhat helpful/interesting.
This post is the intersection of a few different things I’ve been thinking about lately. First is Percolate. Part of the process of introducing the company to new people is frequently recounting the story of where the product came from. James and I have probably sent each other a thousand different articles back and forth and I asked him recently for his list of top articles that really inspired his thinking in the space. The second thing is Robin Sloan’s Fish, which is all about the difference between liking and loving content. It made me think about the list of the content and marketing-related articles I’ve read that I come back to frequently. This is that list. Some of these are newer and may not stand the test of time, but most of them are things I’ve come back to (at least in conversation) about once a month since I’ve read them (they are distributed over the last 10 years).
Without any further ado, here’s my list:
Stock & Flow
Not specifically about marketing, but it’s all about content. Stock and flow is how we’ve taken to thinking about content at Percolate and this is really where that idea came from. I’ve written a few things inspired by the idea and use it frequently to explain how brands should think about content (and why Percolate exists).
Many Lightweight Interactions
This is the most recent article of the bunch and comes by way of Paul Adams, who works in the product team at Facebook. It was a really nice way to explain a lot of the stuff I’ve been thinking and talking about with clients over the last five years. Specifically it talks about how the web (and specifically social) offer brands an opportunity to move from a world of few heavyweight interactions (stock in Robin’s parlance) to many lightweight interactions (flow). The one thing I’d add is that I think the real opportunity is to take the many lightweight interactions and use them to understand what works and inform the occasional heavyweight interactions brands need to succeed.
Who’s the Boss?
This was written by a friend of mine 10 years ago. It’s short, but the core point is that brands live in people’s heads. This was what inspired Brand Tags and has colored lots of my thinking about how brands behave.
Why Gawker is Moving Beyond the Blog
Not specifically about marketing, but Denton’s explanation of why he’s moved from the classic blog format is a great explanation of how content works on the web.
How Social Networks Work
Another slightly older one, this was the first time I had read someone talk about the idea of social as exhaust data (basically our digital breadcrumbs), which seemed like a really good way to think about it (and helped explain why brands struggled). Lately I’ve been using this to help explain why brands struggle in social: Exhaust data is a very human thing. You need to consume in order to create this trail and most brands don’t do that.
How Owned Media Changed the Game
From Ted McConnel who used to be head of digital at P&G. I really liked this quote: “Recently, in a room full of advertising brain trustees, one executive said, ‘The ‘new creative’ might be an ecosystem of content.’ Brilliant. The brand lives in the connections, the juxtapositions, the inferences, the feeling of reciprocity.” This was one of those articles that really wrapped up a bunch of stuff I had been thinking about. It’s nice when that happens.
That’s it for me. What would you add? What am I forgetting?
This is a cross-post from the Percolate Blog. I thought you all might enjoy reading it here as well.
Let me get something out of the way before we get started: In case you haven’t heard, Facebook is going to IPO this week.
Okay, seriously, all this IPO talk has driven people to dive into Facebook’s business model and lots of folks are coming up with doubts. As Peter Kafka points out, even Facebook has its doubts, mentioning as much in their IPO filing: “We believe that most advertisers are still learning and experimenting with the best ways to leverage Facebook to create more social and valuable ads.”
While I don’t know the precise answers to those questions, I do have lots of opinions and since it happens to be Internet Week in NYC, I’ve been having these conversations a lot (mostly on panels). The bulk of the argument against Facebook revolves around their lack of “intent” data. This, of course, is what Google has in bulk and is the reason they are a multi-billion dollar business. Being able to target people at specific points in the purchase process changes the way marketing works. It allows advertisers to do something that was all but impossible (you could buy in-store and outdoor around stores, but that’s a whole lot less efficient). This is an amazing thing for marketers and Google’s market cap reflects it.
But if you ask most advertisers why they spend millions (and sometimes billions) on traditional ads, it’s not to harvest people who intend to buy, it’s to create demand: growing a business requires constantly bringing in new customers. However it makes you feel, most ads exist to remind you that you need something new. That shoe company with the billboard isn’t trying to get you to buy their shoes over a competitor, they’re trying to remind you that you need new shoes and, they hope, when you walk into the store you’ll spring for their brand.
That’s where brands spend real dollars. When startups show off “the chart” (you know, the one with the gap between time spent and ad spend), they are looking at the effect of digital platforms not having a good answer to intent creation.
That, I believe, is where the opportunity for social is. We’re not there yet, but the promise is that you can use your understanding of a user’s interests to present them with messages that let them know about things they want before they want them. If Facebook figures this out it will be a bigger company than Google.
So how does content fit in?
Using the traditional purchase funnel, I think you still have a gap between awareness and intent. Once someone knows about your brand or product, how do you create need? One really good way of doing that is to remind them you exist (a large portion of CPG ad spend is used for just this). The way to remind people you exist is to create content they’ll see. To create content they’ll see on Facebook you need to a) be engaging enough that it builds organic activity and pushes beyond the base distribution you get through EdgeRank or b) buy Reach Generator. The two big goals (awareness and intent creation) have paid actions associated with them in Facebook, Twitter and Tumblr. If these companies continue to build on these ideas and find better ways to target users based on their interests they will be solving a real problem for advertisers, something that hasn’t really been done on the web since paid search in the early 2000s.
Of course, there are lots of ifs here. The products are not quite there yet (targeting, for instance, is still largely based on social connections instead of interest connections), but I think these platforms will get there and I think they’ll succeed.
First, let’s just get clear on the terminology here: “Curation” is an act performed by people with PhDs in art history; the business in which we’re all engaged when we’re tossing links around on the internet is simple “sharing.” And some of us are very good at that! (At least if we accept “very good” to mean “has a large audience.”)
Like any appropriated buzzword, the term “curation” has become nearly vacant of meaning. But, until we come up with a better one, it remains the semantic placeholder that best captures the central paradigm of Twitter as a conduit of discovery and direction for what is meaningful, interesting and relevant in the world.
I loved the idea of a semantic placeholder then, and I still do. If you’re going to wade into the semantic debate you need a better answer, and “editor” isn’t it. For better or worse we are using curator to mean something different than it used to mean and, at least for now, that seems fine. As long as we all know what we’re talking about (the selection of internet things) then the word seems okay; let’s not hide behind the definition.
And before I continue, one more thing: For what it’s worth I define curation as people choosing things and aggregation as computers choosing things.
Great. Now back to the more important stuff. A lot of this conversation was kicked off by the Curator’s Code, which aims to encourage people to credit the source of their information with some special symbols. Lots of folks, including Marco Arment of Instapaper, dismissed the idea as stupid and unsustainable, and maybe it is. I think everyone involved would agree it’s not the perfect solution to the problem, but I do think it opened up an important conversation (I wasn’t involved, but I know the folks who are). How we credit one another on the web is an issue we’ve been working on forever and, as a few of the blog posts on the topic point out, the good news is that the hyperlink is the most efficient tool we have:
And we already have a tool for providing credit to the original source: It’s called the hyperlink. Plenty of people don’t use the hyperlink as much as they should (including mainstream media sources such as the New York Times, although Executive Editor Jill Abramson said at SXSW that this is going to change) while others misuse and abuse them. But used properly, they serve the purpose of providing credit quite well. How to use them properly, of course — especially for journalistic purposes — is another whole can of worms, as Felix Salmon of Reuters and others have noted. And when it comes to curation and aggregation, it seems as though curation is what people call it when they like it, and aggregation is what they call it when they don’t.
But it’s not quite good enough, and this is where I start to take issue with a few different things a few different people said. What I just did there was use a hyperlink to credit something I didn’t write. Except you probably didn’t mouse over the hyperlink, and because it was there I didn’t need to write that Mathew Ingram of GigaOM was responsible for those sentences. While I think it’s important to credit sources of information, I think the bigger thing to think about is how we’re crediting the original sources of content.
Which is why I took the most issue with Marco’s stance. Not because I disagreed with him (“The proper place for ethics and codes is in ensuring that a reasonable number of people go to the source instead of just reading your rehash.”), but because Instapaper represents one of the current dangers in lack of credit. While it doesn’t relate exactly to the question the Curator’s Code is addressing, it is part of the broader conversation we should be having: Who is getting credit when you consume a great piece of content?
After a long argument with Thierry Blancpain on Twitter, I finally came to the question that seems to sit at the heart of the matter: Who gets credit when you read something awesome in Instapaper? Does it go to the publisher of the content or does it go to Instapaper? I know for myself (and from the informal poll of friends I put the question to), the answer is the latter. I don’t know the source of most of the content I consume in Instapaper. Sure, I put it there when I hit the button, but when I consume it the source is entirely stripped away. I was talking to the publisher of a major magazine this week about the issue, and the question I asked was, “if you’re losing the advertising and the branding, is there any purpose to letting your content live there?”
This isn’t to point the finger solely at Instapaper; I think this is true of almost all the platforms on the web. If all the incentive is towards sharing and all the credit goes to sharers, what will happen to creation? (I don’t really think it will go away, but I do think it creates a dangerous precedent.) One of the things I think is great about the Longform iPad app is that it connects me with the publishers of content. One day when they offer subscriptions (which I assume they will) I’d happily pay to keep getting my 3,000-word Grantland stories, as I now know their true value (and I never forget it, because the publisher is always right next to the content). (Admittedly, the curators on the app pose a more complicated issue.)
I think part of it is that publishers are going to have to start carrying more branding in their stories. I’m not sure exactly what this means, but if you’re reading something from The Atlantic, say, maybe they remind you throughout that this is from The Atlantic. It’s not ideal, but again, if publishers aren’t getting advertising revenue or branding credit with their stories, there is no reason for them to support their travels around the web. I also think metadata comes into play, and while I don’t know what the best answer is quite yet, I think it’s important to start encouraging the display of more information about original sources on stories (again, not sure what that looks like, but I’ve been turning it over in my head).
This whole issue is obviously something I’m thinking a lot about at Percolate. I believe brands should be the best behaved of the bunch. I also believe brands have a responsibility to be both curators and creators: To increase the pool of original quality content on the web. No one is to blame for all this stuff, but we are all responsible for making sure it’s solved before it’s too late.
When Piers Fawkes and I started likemind a few years ago it was on a bit of a whim, but it came to represent something we both really believed in: Starting to take online relationships offline. likemind was a place where people from the internet could meet and share ideas over a cup of coffee. As crazy as it is, that was five years ago and obviously quite a bit has happened in the meantime.
Over the last year or so likemind has lapsed a bit. Like any community it requires tending, and life got in the way. But looking back, there was something bigger: There was just less need for a shared space for meeting internet people. Not that it isn’t important, but rather that it feels like everywhere is now that place.
So Piers and I sat down and talked about what happens next. What does the next five years of likemind look like? Is there even a next five years of likemind?
To answer the second question first: Yes, we both believe in the power of likemind. While there are many places to meet folks, we need more that aren’t explicitly about networking and instead are just about the sharing of ideas between interesting people. The thing that’s amazing about likemind is its self-selection. While we never defined what it meant, we always got the right people, all around the world.
So with that decided we talked about what we felt was missing, and the answer we landed on was intersections. While there are countless meetups and conferences around the world, there are too few that do everything they can to bring people from different places together for conversations. Specifically, the industries that we focus on – technology, creativity, media – seem to have diffused at exactly the time they should be coming together. We’re spending more and more time talking to the people who work on the things we work on at exactly the time we should be talking to everyone else.
So that’s what we want to make likemind about. Call it likemind: The sequel. The mission is to give people from these different places a space to share ideas. Let’s make it happen.
I’m not actually sure that creating editorial content is all that different from creating promotional content, at least on a high level. Advertising is a process of combining brand outputs (look, feel, voice) with cultural inputs (insights, trends, etc.) and creating a piece of communication. The shift I see taking place is that the traditional processes around creating content for a world of campaigns break down in a real-time content creation environment: Brands and agencies aren’t currently set up to consume culture as it happens, which is what media organizations do. I think this is a big shift we’ll start to see inside brands over the coming years. It’s not that they’ll try to model themselves on media organizations, but rather, they’re going to rearrange themselves around real-time consumption of content, data, analytics and anything else they can get their hands on to help them make decisions and communicate better.