There is an interesting little article on innovation and Picasso over at Medium. Basically it suggests that radical innovation happens when the market is most receptive to it:
Sgourev’s analysis of Cubism suggests that having an exceptional idea isn’t enough: if it is to catch fire, the market conditions have to be right. That’s a question of luck and timing as much as it is of genius. The closest modern analogy to Picasso’s Paris is Silicon Valley in the early days of the dotcom boom, with art dealers as venture capitalists and entrepreneurs as artists.
This reminded me a lot of Duncan Watts’ research on influence on the web, where he concluded, “large scale changes in public opinion are not driven by highly influential people who influence everyone else, but rather by easily influenced people, influencing other easily influenced people.” In fact, Watts also used a fire to explain the dynamic in his conclusion:
Some forest fires, for example, are many times larger than average; yet no-one would claim that the size of a forest fire can be in any way attributed to the exceptional properties of the spark that ignited it, or the size of the tree that was the first to burn. Major forest fires require a conspiracy of wind, temperature, low humidity, and combustible fuel that extends over large tracts of land. Just as for large cascades in social influence networks, when the right global combination of conditions exists, any spark will do; and when it does not, none will suffice.
The challenge, of course, as Watts points out in his research, is that consistently finding and predicting this environment is all but impossible. We may understand some of the factors, but the situation is just too complex for any prediction to be anywhere near accurate. As much as we give credit to innovators who capture those radical moments, we also need to appreciate the role of luck in their success.
Good short article from Wages of Wins on the economics of doping in sports. On the A-Rod situation:
You may find yourself arguing: isn’t it costly for a player to sit out the games? If A-Rod is denied the 2014 season, he will give up some income, right? True–he might. But, the decision to break the rules and take the banned substances is really made based on the player’s expected benefits weighed against the expected costs. Nobel Prize winning economist Gary Becker introduced this principle in his paper Crime and Punishment: An Economic Approach (1968). The expected costs are equal to the penalty (i.e., the game suspension or ban) multiplied by the probability of getting caught and the probability of being punished (having the penalty applied). So, even if the 2014 ban holds, A-Rod will still have three years on his contract at $61 million (plus incentives for various home run milestones)! From his public comments one gathers A-Rod is not expecting the penalty to be applied in full. So, no matter how you slice it up, A-Rod’s behavior–though illegal–was rational economically speaking. And, that is why tomorrow’s PED headline will be old news.
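Becker's expected-cost logic from the quote is easy to sketch in a few lines. All of the numbers below are hypothetical, chosen only to show how a ban can fail to deter:

```python
# Becker-style calculation: breaking the rules is "rational" when the
# expected benefit exceeds the expected cost. All numbers are hypothetical.

def expected_cost(penalty, p_caught, p_punished):
    """Expected cost = penalty x P(getting caught) x P(penalty applied)."""
    return penalty * p_caught * p_punished

penalty = 25_000_000   # salary lost if a season-long ban sticks
p_caught = 0.3         # chance the doping is detected
p_punished = 0.5       # chance the full penalty survives appeal
benefit = 10_000_000   # expected gain from enhanced performance

cost = expected_cost(penalty, p_caught, p_punished)
print(cost)            # 3750000.0
print(benefit > cost)  # True: the expected benefit outweighs the expected cost
```

Shrinking any one of the three factors (penalty size, detection rate, or odds the penalty actually gets applied) shrinks the whole expected cost, which is exactly the bet the quote suggests A-Rod is making.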
Yesterday James, my co-founder at Percolate, sent me a really interesting nugget about how Apple structures its company, about 35 minutes into this Critical Path podcast. Essentially Horace (from Asymco) argues that Apple’s functional (rather than cross-functional) structure actually allows it to innovate and execute far better than a company organized in the more traditional, divisional way. Whereas most other companies encourage managers to pick up experience across the enterprise, Apple encourages (or forces) people to stay in their role for the entirety of their career. On top of that, roles are not horizontal by product (head of iPhone) but vertical by discipline (design, operations, technologies), and also quite siloed. He goes on to say that the only parallel he could think of is the military, which basically operates that way. (I know I haven’t done the best job articulating it; that’s because, as I listen again, I don’t think the thesis is articulated all that well.)
Below is my response back to James:
While I totally agree with what he says about the structure (that they’re organized functionally and it works for them), I’m not sure you can just conclude that it’s ideal or that it drives innovation. The requirement of an org structure like that is that all vision/innovation comes from the top and moves down through the organization. That’s fine when you have someone like Jobs in charge, but it’s questionable what happens when he leaves (or when this first generation he brought up leaves). Look at what happened when Jobs left the first time as evidence for how they lost their way. Apple is a fairly unique org in that it has a very limited number of SKUs and, from everything we’ve heard, Jobs was the person driving most, if not all, of them.
My question back to Horace would be what Apple will look like in 20 years. IBM and GE are 3x older than Apple, and part of how they’ve survived, I’d say, is that they’ve built the responsibility for innovation into a bit more of a cross-functional discipline + centralized R&D. I don’t know if it matters, but if I were making a 50-year bet on a company I’d pick GE over Apple, and part of it is that org structure and its ability to retain knowledge.
The military is actually a perfect example: look at the struggles they’ve had over the last 20 years as the enemy stopped being similarly structured organizations and became loosely connected networks. History has shown us over and over that centralized organizations struggle with decentralized enemies. Now the good news for Apple is that everyone else is pretty much playing the same highly organized and very predictable game (with the exception of Google, which is in a functionally different business, and Samsung, which because of its manufacturing resources and Asian heritage exists in a bit of a different world).
Again, in a 10 year race Apple wins with a structure like this. But in a 50 year race, in which your visionary leader is unlikely to still be manning the helm, I think it brings up a whole lot of questions.
I’m a sucker for all quotes about how one thing or another was going to ruin society. Most of these are about media, but I couldn’t help myself when I saw this one about curiosity from an article on The American Scholar:
Specific methods aside, critics argued that unregulated curiosity led to an insatiable desire for novelty—not to true knowledge, which required years of immersion in a subject. Today, in an ever-more-distracted world, that argument resonates. In fact, even though many early critics of natural philosophy come off as shrill and small-minded, it’s a testament to Ball that you occasionally find yourself nodding in agreement with people who ended up on the “wrong” side of history.
I actually think it would be pretty great to collect all these in a big book … A paper book, of course.
I really love this quote which came from an article Umberto Eco wrote about Wikileaks by way of this very excellent recap of a talk by the head of technology at the Smithsonian Cooper-Hewitt National Design Museum:
I once had occasion to observe that technology now advances crabwise, i.e. backwards. A century after the wireless telegraph revolutionised communications, the Internet has re-established a telegraph that runs on (telephone) wires. (Analog) video cassettes enabled film buffs to peruse a movie frame by frame, by fast-forwarding and rewinding to lay bare all the secrets of the editing process, but (digital) CDs now only allow us quantum leaps from one chapter to another. High-speed trains take us from Rome to Milan in three hours, but flying there, if you include transfers to and from the airports, takes three and a half hours. So it wouldn’t be extraordinary if politics and communications technologies were to revert to the horse-drawn carriage.
In response to my little post about describing the past and present, Jim, who reads the blog, emailed me to say it could be referred to as an “atemporal present,” which I thought was a good turn of phrase. I googled it and ran across this fascinating Guardian piece explaining their decision to get rid of references to today and yesterday in their articles. Here’s a pretty large snippet:
It used to be quite simple. If you worked for an evening newspaper, you put “today” near the beginning of every story in an attempt to give the impression of being up-to-the-minute – even though many of the stories had been written the day before (as those lovely people who own local newspapers strove to increase their profits by cutting editions and moving deadlines ever earlier in the day). If you worked for a morning newspaper, you put “last night” at the beginning: the assumption was that reading your paper was the first thing that everyone did, the moment they awoke, and you wanted them to think that you had been slaving all night on their behalf to bring them the absolute latest news. A report that might have been written at, say, 3pm the previous day would still start something like this: “The government last night announced …”
All this has changed. As I wrote last year, we now have many millions of readers around the world, for whom the use of yesterday, today and tomorrow must be at best confusing and at times downright misleading. I don’t know how many readers the Guardian has in Hawaii – though I am willing to make a goodwill visit if the managing editor is seeking volunteers – but if I write a story saying something happened “last night”, it will not necessarily be clear which “night” I am referring to. Even in the UK, online readers may visit the website at any time, using a variety of devices, as the old, predictable pattern of newspaper readership has changed for ever. A guardian.co.uk story may be read within seconds of publication, or months later – long after the newspaper has been composted.
So our new policy, adopted last week (wherever you are in the world), is to omit time references such as last night, yesterday, today, tonight and tomorrow from guardian.co.uk stories. If a day is relevant (for example, to say when a meeting is going to happen or happened) we will state the actual day – as in “the government will announce its proposals in a white paper on Wednesday [rather than ‘tomorrow’]” or “the government’s proposals, announced on Wednesday [rather than ‘yesterday’], have been greeted with a storm of protest”.
What’s extra interesting about this to me is that it’s not just about the time you’re reading that story, but also the space the web inhabits. We’ve been talking a lot at Percolate lately about how social is shifting the way we think about audiences since for the first time there are constant global media opportunities (it used to happen once every four years with the Olympics or World Cup). But, as this articulates so well, being global also has a major impact on time since you move away from knowing where your audience is in their day when they’re consuming your content.
I’m sure you’ve all seen this quote. It’s attributed to Robert Stephens, founder of Geek Squad, and goes something like: “Advertising is the tax you pay for being unremarkable.” (I was reminded of it most recently reading Josh Porter’s blog, Bokardo.) It sounds good and, at first blush, correct, but it’s not, for lots of reasons.
Broadly, the line between advertising, marketing, branding, and communications has always been a blurry one. Depending on whom you talk to, the definitions vary widely. For the purposes of the quote, let’s assume that when Stephens was talking about advertising he was specifically referring to the buying of media space across platforms like television, magazines, and websites.
With that as the working definition, there are lots of complicated reasons big companies advertise their products. Here are a few:
- Distributors love advertising: If you’re a CPG company you advertise as much for the supermarkets as you do for your product. The more money you spend, the better spot they’re willing to give you on the shelf (the thought being that people will be looking for your product). I don’t think there is anyone out there who would argue shelf placement doesn’t matter. At the end of the day supermarkets are your customer if you’re a CPG company, so keeping them happy is a pretty high-priority job.
- Advertising is good at making people think you’re bigger than you are: Sometimes a company or brand wants to “play above its weight,” making people think it’s bigger than it actually is. When we see something on TV or in print, we mostly assume there is a big corporation behind it. Sometimes that’s more important than actually selling the product.
- Sometimes you’re not selling a product at all: There are many companies who advertise for reasons wholly disconnected from their product. GE, for example, isn’t running TV commercials about wind turbines solely to communicate with the thousands of people who are potentially in the market for a multi-million dollar purchase. Part of why they do it is to communicate with the public at large, which is both a major shareholder in the company and the end consumer of many of its products (many planes we fly on run GE engines, and our electricity probably wouldn’t reach our houses without GE products). How remarkable their products are has no bearing in this case, since we would never actually be in the market for the vast majority of the things they produce.
Broadly, though, the point I’m trying to make is that while many write off advertising as having no purpose (or being “a tax”), it’s just not true. What’s more, as advertising becomes a more seamless part of the process of being a brand in social, I think this will only become more true. If you see a piece of content performing well on Twitter or Facebook why would you not pay to promote that content and see it reach an audience beyond the core? At that point you’ve eliminated the biggest challenge traditionally associated with advertising (spending tons of money to produce something and having no idea whether it will actually have an effect on people). Seems to me if you’re not willing to entertain the idea you’re just standing on principle.
This week’s NYTimes Magazine economics column is all about timesheets. While the whole thing is worth a read, I found the history of timesheets especially interesting:
The notion of charging by units of time was popularized in the 1950s, when the American Bar Association was becoming alarmed that the income of lawyers was falling precipitously behind that of doctors (and, worse, dentists). The A.B.A. published an influential pamphlet, “The 1958 Lawyer and His 1938 Dollar,” which suggested that the industry should eschew fixed-rate fees and replicate the profitable efficiencies of mass-production manufacturing. Factories sold widgets, the idea went, and so lawyers should sell their services in simple, easy-to-manage units. The A.B.A. suggested a unit of time — the hour — which would allow a well-run firm to oversee its staff’s productivity as mechanically as a conveyor belt managed its throughput. This led to generations of junior associates working through the night in hopes of making partner and abusing the next crop. It was adopted by countless other service professionals, including accountants.
In what I assume is a response to this article that was floating around about placebo buttons (buttons that are there to make you feel better, but don’t do anything), William Gibson tweeted this:
I love the internet. That’s all.
I don’t know that I have a lot more to add to what Russell wrote here, but I like the way he described the challenge of describing something that is simultaneously happening in the past and present (in this case, describing a soccer replay):
This is normally dismissed as typical footballer ignorance but it’s better understood when you think of a footballer standing in front of a monitor talking you through the goal they’ve just scored. They’re describing something in the past, which also seems to be happening now, which they’ve never seen before. The past and the present are all mushed up – it’s bound to create an odd tense.
I spend a lot of time thinking about building products (and, more specifically, building teams to build products). With that in mind I really enjoyed this rc3.org post about the seven signs of a dysfunctional engineering team, especially this bit about building tools instead of process:
Preference for process over tools. As engineering teams grow, there are many approaches to coordinating people’s work. Most of them are some combination of process and tools. Git is a tool that enables multiple people to work on the same code base efficiently (most of the time). A team may also design a process around Git — avoiding the use of remote branches, only pushing code that’s ready to deploy to the master branch, or requiring people to use local branches for all of their development. Healthy teams generally try to address their scaling problems with tools, not additional process. Processes are hard to turn into habits, hard to teach to new team members, and often evolve too slowly to keep pace with changing circumstances. Ask your interviewers what their release cycle is like. Ask them how many standing meetings they attend. Look at the company’s job listings, are they hiring a scrum master?
One of the things I try to communicate to the whole company is that it’s everyone’s responsibility to build products. Products are reusable and scalable assets. While process is a product, tools are (almost always) better. I am working on a long and sprawling explanation of all my product thinking, but that will have to wait for another day.
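The Git conventions the quoted rc3.org post describes could look something like this in practice. This is only a sketch; the repo, file, and branch names are made up:

```shell
# Develop on local branches, keep master deployable, avoid long-lived
# remote branches. Everything here runs in a throwaway temp directory.
set -e
cd "$(mktemp -d)" && git init -q -b master demo && cd demo
git config user.email dev@example.com && git config user.name Dev

echo 'v1' > app.txt && git add app.txt && git commit -qm 'Initial commit'

git checkout -qb fix-login-timeout        # all work happens on a local branch
echo 'v2' > app.txt && git commit -qam 'Fix login timeout'

git checkout -q master
git merge --ff-only fix-login-timeout     # merge only when history stays linear
git branch -d fix-login-timeout           # the branch never leaves the laptop
git log --oneline                         # master holds only deploy-ready commits
```

The point of the quote is that conventions like `--ff-only` are tool-enforced rather than process-enforced: the command fails on its own if the branch isn't ready, with no standing meeting required.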
I really like this little post on “Borges and the Sharknado Problem.” The gist:
We can apply the Borgesian insight [why write a book when a short story is equally good for getting your point across] to the problem of Sharknado. Why make a two-hour movie called Sharknado when all you need is the idea of a movie called Sharknado? And perhaps, a two-minute trailer? And given that such a movie is not needed to convey the full brilliance of Sharknado – and it is, indeed, brilliant – why spend two hours watching it when it is, wastefully, made?
On Twitter my friend Ryan Catbird responded by pointing out that that’s what makes the Modern Seinfeld Twitter account so magical: they give you the plot in 140 characters and you can easily imagine the episode (and that’s really all you need).
This morning I woke up to this Tweet from my friend Nick:
It’s great to have friends who discover interesting stuff and send it my way, so I quickly clicked over to read Jeff’s piece on sponsored content and media as a service. I’m going to leave the latter unturned as I find myself spending much less time thinking about the broader state of the media since starting Percolate two-and-a-half years ago. But the former, sponsored content, is clearly a place I play, and I was curious to see what Jarvis thought.
Quickly I realized he thought something very different than I do (which, of course, is why I’m writing a blog post). Mostly I started getting agitated right around here: “Confusing the audience is clearly the goal of native-sponsored-brand-content-voice-advertising. And the result has to be a dilution of the value of news brands.” While that may be true in the advertorial/sponsored content/native advertising space, it misses the vast majority of content being produced by brands on a day-to-day basis. That content is being created for social platforms like Facebook, Twitter, and Instagram by brands who have acquired massive audiences, frequently much larger than those of the media companies Jarvis is referring to. Again, I think this exists outside native advertising, but if Jarvis is going to conflate content marketing and native advertising, then it seems important to point out. To give this a sense of scale, the average brand had 178 corporate social media accounts in January 2012. Social is where they’re producing content. Period.
The second issue came in a paragraph about the scalability of content for brands:
Now here’s the funny part: Brands are chasing the wrong goal. Marketers shouldn’t want to make content. Don’t they know that content is a lousy business? As adman Rishad Tobaccowala said to me in an email, content is not scalable for advertisers, either. He says the future of marketing isn’t advertising but utilities and services. I say the same for news: It is a service.
Two things here: First, I agree that the current ways brands create content aren’t scalable. That’s because they’re using methods designed for creating television commercials to create 140-character Tweets. However, to conclude that content is a lousy business is missing the point a bit. Content is a lousy business when you’re selling ads around that content. The reason is reasonably simple: you’re not in the business of creating content, you’re in the business of getting people back to your website (or to buy your magazine or newspaper). Letting your content float around the web is great, but at the end of the day no eyeballs mean no ad dollars. But brands don’t sell ads; they sell soap, or cars, or soda. Their business is somewhere completely different and, at the end of the day, they don’t care where you see their content as long as you see it. What this allows them to do is outsource their entire backend and audience acquisition to the big social platforms and just focus on the day-to-day content creation.
Finally, while it’s nice to think that more brands will deliver utilities and services on top of the utilities and services they already sell, delivering those services will require the very audience they’re building on Facebook, Twitter, and the like to begin with.
One of the podcasts I’ve been enjoying as of late is Tim Harford’s Pop Up Ideas from the BBC. In the latest episode David Kilcullen talks about feral cities (direct MP3 link), which essentially flip the idea of the failed state on its head, suggesting that it’s not the state that fails the city, but rather the city that fails the state (the podcast has a deeper explanation). Here’s a bit more from a short New York Times piece on the idea from a few years ago:
Richard Norton, a Naval War College scholar who has developed a taxonomy of what he calls feral cities, says that there are numerous places slipping toward Mogadishu, perhaps the only fully feral city nowadays. As public services disintegrate, residents are forced to hire private security or pay criminals for protection. The police in Brazil have fallen back on a containment policy against gangs ruling the favelas, while the rich try to stay above the fray, fueling the busiest civilian helicopter traffic in the world (there are 240 helipads in São Paulo; there are 10 in New York City). In Johannesburg, much of downtown, including the stock exchange, has been abandoned to squatters and drug gangs. In Mexico City, crime is soaring despite the presence of 91,000 policemen. Karachi, Pakistan, where 40 percent of the population lives in slums, plays host to gangland violence and to Al Qaeda cells.
I like this explanation of the importance of privacy from Glenn Greenwald, who has been the main outlet for all things Snowden:
And let me just say one other thing: sometimes it is hard to convey why privacy is so important, because it’s kind of ethereal. But I think people instinctively understand the reason it’s so important, because they do things like put passwords on their email accounts and locks on their bedroom and bathroom doors, which reflect a desire to keep others out of certain spaces where they can go to be alone. That’s a way of making clear that they value privacy. And the reason privacy is so critical is because it’s only when we know we’re not being watched that we can engage in creativity, or dissent, or pushing the boundaries of what’s deemed acceptable. A society in which people feel like they’re always being watched is one that breeds conformity, because people will avoid doing anything that can prompt judgment or condemnation. This is a crucial part of why a surveillance state is so damaging — it’s why all tyrannies know that watching people is the key to keeping them in line. Because only when you’re not being watched can you really be a free individual.
Clive Thompson, writing about finding the cruise ship that crashed in Italy last year on Google Maps (Maps link here), made a really interesting point about how we interpret strange visuals in the age of digital technology and video games:
I remember, back when the catastrophe first occurred, being struck by how uncanny — how almost CGI-like — the pictures of the ship appeared. It looks so wrong, lying there sideways in the shallow waters, that I had a sort of odd, disassociative moment that occurs to me with uncomfortable regularity these days: The picture looks like something I’ve seen in some dystopic video game, a sort of bleak matte-painting backdrop of the world gone wrong. (In the typically bifurcated moral nature of media, you could regard this either as a condemnation of video games — i.e. they’ve trained me to view real-life tragedy as unreal — or an example of their imaginative force: They’re a place you regularly encounter depictions of the terrible.) At any rate, I think what triggers this is the sheer immensity of the ship; it’s totally out of scale, as in that photo above, taken by Luca Di Ciaccio.
Growing up playing video games I definitely know the feeling. I do wonder, though, whether this is actually a new feeling, or whether we could have said the same about feeling like something was a movie back when film was still transforming how we saw the world. When I was in Hong Kong in December, for example, I felt like it was more a reflection of Blade Runner than anything else, for what it’s worth. Either way, though, it’s an interesting notion.
Lifehacker has an answer for why you need to turn your router off for 10 seconds:
A lot of modern technology contains capacitors! These are like energy buckets, little batteries that fill up when you put a current through them, and discharge otherwise. 10 seconds is the time it takes most capacitors to discharge enough for the electronics they’re powering to stop working. That’s why when you turn your PC off at the wall, things like an LED on your motherboard take a few seconds to disappear. You probably could wait a different time, but 10 seconds is the shortest time you can be sure everything’s discharged.
I’ve always wondered about that …
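The rule of thumb in the quote follows from the exponential discharge curve, V(t) = V0 · e^(−t/RC). Here is a quick sketch; the rail voltage, resistance, and capacitance are illustrative values, not anything measured from a real router:

```python
import math

def capacitor_voltage(v0, r_ohms, c_farads, t_seconds):
    """Voltage across a discharging capacitor: V(t) = V0 * e^(-t / RC)."""
    return v0 * math.exp(-t_seconds / (r_ohms * c_farads))

# Illustrative values: a 3.3 V rail bleeding through 1 kOhm into a 2200 uF
# capacitor, giving a time constant RC = 2.2 seconds.
v0, r, c = 3.3, 1_000, 2200e-6
for t in (0, 2, 10):
    print(f"after {t:2d}s: {capacitor_voltage(v0, r, c, t):.3f} V")
```

With these numbers, ten seconds is about 4.5 time constants, by which point the voltage has decayed to roughly 1% of its starting value. That is the sense in which "10 seconds" works as a safe catch-all across different hardware.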
Matthew Yglesias makes a decent argument that Apple Maps, while a terrible product, is succeeding at its intended goal:
To get out of that bind, Apple has never needed to make a product that’s actually superior to Google Maps. What they’ve needed to do is produce an application that clears two bars. One is that it has to be good enough that your typical doesn’t-care-too-much phone consumer doesn’t reject iOS out of hand. The other is that it has to be good enough such that if Google doesn’t want to lose the entire iOS customer base it has to scramble and release a great Google Maps app for iOS and not just for Android. Apple’s Maps app easily clears both of those bars. Before the release of iOS 6, the inferiority of Apple’s Google-powered iOS Maps app to Android’s Google Maps was a real reason to prefer an Android phone. Today, there is no such reason. Not because Apple Maps is as good as Google Maps, but because Google Maps for iOS is as good as Google Maps for Android.
This was actually part of the original Chrome strategy as well. While Google released the product because long-term they couldn’t afford to have their biggest competitor (at the time) controlling the majority of their usage, they also did it to push Internet Explorer to innovate so that Google could deliver a better and faster experience to its users. By entering the browser market, Google was able to light a fire under Microsoft that a browser like Firefox never could, and the versions of IE that followed were a thousand times better than what had existed before.
I can’t remember exactly where, but right after the DOMA decision I read an article that basically said part of the reason this happened so quickly is that people in political power were able to relate to the plight of LGBT people, since there is a chance their own son or daughter is gay. By contrast, as the article pointed out, a member of Congress is unlikely to have someone poor in their family.
As I read Obama’s comments about the Trayvon Martin decision it struck me how interesting it is to have a president who can actually say something like this:
There are, frankly, very few African-American men who haven’t had the experience of walking across the street and hearing the locks click on the doors of cars. That happens to me, at least before I was a senator. There are very few African-Americans who haven’t had the experience of getting on an elevator and a woman clutching her purse nervously and holding her breath until she had a chance to get off. That happens often. And I don’t want to exaggerate this, but those sets of experiences inform how the African-American community interprets what happened one night in Florida.
However you feel about the decision, it seems that the law in Florida favored the last man standing and the jury made a decision that fell squarely in the bounds of the law as it was written. That doesn’t make it any less sad to see what happened or any more right that George Zimmerman decided to move towards a situation that he could have easily walked away from, but it does bring into focus the gap that exists between the people that write laws and the citizens those laws are meant to serve.
Overall, though, this feels like part of a larger state of American politics that leaves people feeling shocked, while at the same time struggling to find any individual situation shocking. I feel the same way about everything having to do with PRISM, the NSA program to spy on citizens that we’ve all heard lots about at this point. I’ve been asked what I think of it a few times and my general reaction has been exactly the same as to the Martin case: shocked, but not shocking. I’m not surprised our government is spying on its citizens, and I believe Snowden should be treated as a whistleblower as long as he doesn’t release details about America’s spying on foreign governments (not that I doubt they are, but I do think that’s a line where things become dangerous).
My big issue with PRISM and the culture around it is that it’s part of a larger move that allows constitutional decisions to be made outside the Supreme Court. As the New York Times reported a few weeks ago:
The rulings [of the secret surveillance court], some nearly 100 pages long, reveal that the court has taken on a much more expansive role by regularly assessing broad constitutional questions and establishing important judicial precedents, with almost no public scrutiny, according to current and former officials familiar with the court’s classified decisions.
I don’t have any problem at all with the government spying on people it thinks are bad guys, I just think it should be done within the framework of the law. For all the flaws of our government, the three-branch system the Constitution laid out is still a pretty good way to make sure no one party can consolidate too much power. What PRISM (and Guantanamo and lots of the other stuff that happened after September 11th) allow for are decisions that happen outside the system, and, judging from the experiences thus far with Guantanamo and PRISM, when that happens some basic Constitutional rights get trampled.
If there’s a bright side to all this, it’s that we’re not yet so deep into it that we can’t turn things around (at least on the PRISM/Guantanamo front; Trayvon Martin and American political racism are a different story). The reality is that even though the world has certainly gotten more complex, we’re only 12 years into the meat of the movement to erode the system of checks and balances. I hope that the outing of PRISM and, ideally, the closing of Guantanamo will help apply some brakes to that trend. The goal, as odd as it may sound, is to return to a time when finding out the government is spying on its citizens, or throwing people in jail without telling them the charge, will once again be shocking.
This is three years old, but I just ran across it and it’s just as relevant today as it was then. Apparently in response to Nicholas Carr’s book The Shallows, Steven Pinker wrote a great op-ed about how technology isn’t really ruining all the stuff it’s constantly claimed to be ruining. A snippet:
The effects of consuming electronic media are also likely to be far more limited than the panic implies. Media critics write as if the brain takes on the qualities of whatever it consumes, the informational equivalent of “you are what you eat.” As with primitive peoples who believe that eating fierce animals will make them fierce, they assume that watching quick cuts in rock videos turns your mental life into quick cuts or that reading bullet points and Twitter postings turns your thoughts into bullet points and Twitter postings.
Marshall McLuhan is a dense dude. I’ve read a fair amount of his stuff and much of it just doesn’t make sense. I don’t mean to take anything away from him by saying that, I still think he is the smartest thinker on media that I’ve ever read and he basically laid out a blueprint for how to think about the internet, but he’s hard to read. So when I talk about him and his ideas I often end up recommending his book The Medium is the Massage, which is essentially a picture book that explains the core ideas of McLuhan in a fairly interactive way (spoiler alert: for one page you flip the book upside down …). Anyway, it’s worth reading if you’ve been wondering where to start with McLuhan (plus the new version has a sweet cover by Shepard Fairey). And, if you like that, his most famous book, Understanding Media, just came out on Kindle a few weeks ago.
Coming back from the Brooklyn Home Depot today I went to look up the word collision. My mom, who I was in the car with, mentioned it looked funny spelled (correctly) on a sign, and we were checking that it was actually “LL” and not “SS”. I Googled it and found out it was correct, but it was the second result, for the 1960 New York mid-air collision, that caught my eye. I had never heard of it and neither had my dad, who grew up in the city (I’m assuming it only turned up because I was driving through Park Slope at the time).
Anyway, it turns out in 1960 two planes collided over Staten Island and one, the larger of the two, was able to continue flying until finally crashing down in Park Slope about six blocks from where I live. Scouting New York has an excellent account and follow-up with comments from folks who remember the accident, which had one survivor: an 11-year-old boy who died the following day.
The Times has an excellent little video with stills and voiceover from the reports of the day. All around a crazy scene.
In case you were on Twitter a few nights ago, there was a show on Syfy called Sharknado. It was, as you might expect, about sharks getting caught in a tornado. If you judged its popularity by Twitter alone you would have thought the whole world was watching. That, however, turns out not to be the case:
But Sharknado may have broken the mold; the movie blew up on Twitter last night, giving the impression that everyone with a TV was watching it. “Omg omg OMG #sharknado,” Mia Farrow tweeted last night, while Washington Post political reporter Chris Cillizza joked that he was writing an article about how Sharknado would affect the 2016 elections. But were all these people actually watching? According to the Los Angeles Times, Sharknado was watched by only 1 million people, which makes it a bust, even by Syfy standards. Most Syfy originals have an average viewership of 1.5 million people, with some getting twice that.
[Via Washington Post]
After it started raining I decided to redesign my blog. There wasn’t much reason other than looking for a fun project to work on and finding the old version increasingly tough on the eyes (plus terrible on the phone). The new version is simpler, responsive for mobile, and has bigger fonts (for whatever that’s worth).
I’d like to say this means I’m going to write more, but that doesn’t seem all that likely. I mean I’ll do my best (and have the last few days), but it’s amazing how often life gets in the way of blogging. One of the amazing things about RSS feeds and email subscriptions, though, is that it doesn’t really matter how frequently I actually update this thing because you’ll hear about it. For what it’s worth I’ve also got a Twitter feed for new posts from the blog at @NoahBrier.
One of the things I’ve been thinking about lately is that it feels like there’s a big opportunity for blogs again. While everything has gotten shorter, it’s left a pretty wide door open for folks who want to write thoughtful stuff. I think it’s why we’ve seen thoughtful bloggers ascend quickly (someone like Horace at Asymco comes to mind). Again, not sure that means I’ll write more, but it certainly feels like a good time to be doing so.
Jon Negroni has an amazing unified theory of every Pixar movie. Apparently there are some theories, which he’s building on, that all Pixar movies are actually set in the same universe and, in his theory at least, it’s a massive fight between humans, animals, and machines. This is exactly why the internet exists.
Here’s a snippet:
But why would machines want to get rid of humans in the first place? We know that animals don’t like humans because they are polluting the Earth and experimenting on them, but why would the machines have an issue?
Enter Toy Story. Here we see humans using and discarding “objects” that are clearly sentient. Yes, the toys love it Uncle Tom style, but over the course of the Toy Story sequels, we see toys becoming fed up.
[Via Boing Boing]
Really interesting post on the changing nature of photography from Kottke. He pulls together a few different thoughts on photography and basically lands at the idea that we’re moving to a future of after-the-fact photography:
In order to get the jaw-dropping slow-motion footage of great white sharks jumping out of the ocean, the filmmakers for Planet Earth used a high-speed camera with continuous buffering…that is, the camera only kept a few seconds of video at a time and dumped the rest. When the shark jumped, the cameraman would push a button to save the buffer.
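The continuous-buffering trick described in the quote is essentially a ring buffer: keep only the last N frames, let older ones fall off, and snapshot whatever remains when the operator hits save. A minimal sketch (the frame labels and capacity are made up for illustration):

```python
from collections import deque

class BufferedCamera:
    """Toy model of a continuously buffering camera: only the most
    recent `capacity` frames are kept; everything older is discarded."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # old frames fall off the left

    def capture(self, frame):
        self.buffer.append(frame)

    def save(self):
        # The shark jumped: persist whatever the buffer currently holds.
        return list(self.buffer)

cam = BufferedCamera(capacity=3)
for frame in ["f1", "f2", "f3", "f4", "f5"]:
    cam.capture(frame)
clip = cam.save()  # only the three most recent frames survive
```

The `deque(maxlen=…)` does all the work here: appending to a full deque silently drops the oldest item, which is exactly the "keep a few seconds, dump the rest" behavior.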
Makes me wonder where this new photography will land on the memory versus experience spectrum (an idea from Daniel Kahneman that we basically optimize our experiences for memory rather than experience, which is why we take photos instead of actually paying attention to what’s going on around us). I wonder if this doesn’t flip that notion.
When I first learned to write code a few years ago I taught myself PHP. I still contend that was/is the very best choice for someone just starting out as it offers the lowest barrier to entry in making things happen on the web. Between WAMP/MAMP and the fact that most vanilla webhosts support PHP by default, it gives someone just coming into building applications a very simple tool to get started with.
This answer is not the most popular with engineers, who (sometimes fairly) see weaknesses and sloppiness in PHP. The counterpoint I offer is that I’m not suggesting it’s a good language, but rather that someone who is just getting started needs as few barriers as possible to getting something up and running. PHP, I still contend, is the best tool for that job.
The problem was, all that work was leading me to shy away from writing code when I had an idea. Instead of spending time writing code I knew I’d spend it setting up servers and such. I tried AWS and even Heroku, and both still left me with what felt like an imbalance between setup and coding. What I started to realize is that as someone who doesn’t write code every day, I want tools that optimize the amount of time I actually spend writing code. That, after all, is what I enjoy. (I’m not sure if I’ve ever written it here, but the feeling I get writing code is unlike anything in the other work I’ve done. There’s a beauty in the simplicity of code: while something can always be more optimized or elegant, at the end of the day when it works, it works, and when it doesn’t, it tells you why.)
Anyway, though I can’t remember how I got started, I discovered Google AppEngine about six months ago and it’s been a total revelation for me. All of a sudden I’m excited to take an idea to Sublime and get busy because I know that I’ll waste zero time doing anything I don’t want to. Google handles data storage, queuing, routing, and pretty much anything else I ever need, and while there are certainly limitations (mostly around package management), the pros outweigh the cons by a huge amount.
About two months ago I thought it might be fun to try teaching an introduction to Python class using AppEngine. It would give me a chance to continue to test my theory that the best way to teach people to write code was to start them with GET/POST and, thanks to AppEngine, getting started and getting deployed would be as easy as clicking the buttons in their little OS X app. I made a little repository that I shared with the Percolators who took that class a few months ago, and I thought it might be worth sharing it with everyone else. It’s nothing fancy, but it’s got the basics of GET, POST, URL routing, and using the data store. Ideally it’s a nice little intro to writing code on the web. So, if you’re new to AppEngine, Python, or code in general, here’s how to get started:
- Download AppEngine for Python
- Download my intro files from Github
- Open AppEngine locally and File > Add Existing Application, then Browse and add the folder you just downloaded.
- Hit Run in AppEngine and then Browse, which will open your site (running on your local server) in your browser.
- From there open up the files in your favorite editor (I prefer Sublime) and start playing around. Don’t worry, you can’t really break anything and, when you do, Python will tell you exactly what you did wrong (to the line of code).
That’s it. Good luck, enjoy, and let me know how it goes.
Super Mario Brothers is Getting Harder
It may come as a shock to some of you that most gamers today can not finish the original Super Mario Brothers game on the Famicom. We have conducted this test over the past few years to see how difficult we should make our games and have found that the number of people unable to finish the first level is steadily increasing. This year, around 90 percent of the test participants were unable to complete the first level of Super Mario Brothers. We did not assist them in any way except by providing the exact same instruction manual we used back then. Many of them did not read it and the few that did stopped after the first page which did not cover any of the game mechanics.
UPDATE (7/7/13): As Rafi points out in the comments, it looks like this is satirical. One of the other stories on the site is CHILDREN WHO PICK A SIDE IN CONSOLE WARS ARE 90 PERCENT MORE LIKELY TO JOIN A GANG. Sorry about that.
I know we’re long past the NBA finals, but I really liked this quote about what actually wins basketball games:
There is nothing that has ever won a basketball game except for turning possessions into points. Both teams have about the same number of possessions — they alternate all game — and one team will turn those into more points. Halftime speeches, energy drinks, blue-chip college pedigrees … nothing matters unless it makes one team better at turning possessions into points than the other team.
One of the things that drives me a bit nuts is people saying LeBron James doesn’t have the killer instinct or whatever else. At the end of the day this is the thing that matters. Moneyball proved it, Wages of Wins is proving it; it’s hard to deny math.
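The quote’s logic boils down to a single ratio: points divided by possessions. A minimal sketch, with entirely made-up box-score numbers (the team names are just labels):

```python
def points_per_possession(points, possessions):
    """Offensive efficiency: points scored per possession used."""
    return points / possessions

# Both teams get roughly the same number of possessions, so the one
# that converts them into points at a higher rate wins the game.
heat = points_per_possession(95, 92)    # ~1.03 points per possession
spurs = points_per_possession(88, 92)   # ~0.96 points per possession
winner = "Heat" if heat > spurs else "Spurs"
```

Everything else (speeches, pedigrees, energy drinks) only matters to the extent it nudges that ratio.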
I wrote a post over at 99U about the levels of designers and how I think you build a great design team. You can read the whole thing there, but here’s a snippet about states:
If level two comes naturally to most designers who have grown up in a digital world, level three most definitely does not. States are about understanding all the different possible outcomes of a given task within a product and being able to design for all of them. Things like errors are obvious, albeit often forgotten, while actions like escapes and backs are much less frequently planned for.
To draw a parallel, a great engineer thinks in states. Before they write a line of code they have come to understand all the different outcomes and use that understanding to design a fault-resistant system. One of my favorite lines from one of our engineers recently was, “when I start typing the work’s done.”
I really liked one of the comments in this NYTimes interview with Roman Stanek, the CEO of GoodData. In response to “Anything you have a particularly low tolerance for in your organization?” Stanek answered, “I have a really low tolerance for people making comments, especially managers, without actually positioning them.” When asked to explain he said:
Somebody might say, for example, that our competition has a new product. But is it good news or bad news? Should we do something about it? I always expect my managers to have an opinion and they should not be just messengers. A manager is not a messenger. I don’t like my managers essentially talking to their people without being able to express their opinion and position what they’re discussing.
This is perfectly articulated and drives me crazy as well. It’s so easy to send emails that people have a tendency to just shoot things off without comment or context. I don’t want to know the news, I want to know what you think about the news and why you decided to send it to me.
I really like situations that help describe the fact that lots of factors ultimately go into the way you feel about a brand/design/marketing. I wrote a bit about how Jony Ive feels about it last week and I thought this was another interesting example from a very different place. In the early 90s a designer named Alexander Julian was given the opportunity to redesign the UNC Tarheels basketball uniform. He was a huge Tarheels fan and thus felt a ton of pressure to deliver something amazing. Not wanting to leave things to chance, he looped Michael Jordan into the decision (Jordan, at the time, was just beginning his ascent to becoming the greatest player in the history of the NBA, but he was already UNC royalty). Ultimately Julian sent all the designs to Jordan to let him sign off on his favorite:
“And guess what? As soon as Michael [Jordan] said that [the argyle design was his favorite], then the entire team also liked the argyle best. So we made the first uniform in Michael’s size, sent it to Chicago, he worked out in it, then we sent it down to Chapel Hill. There was near frenzy, I’m told, in the locker room as to who was going to be the first Carolina player to put it on after Michael because they wanted Michael’s mojo. Hubert Davis (photo, above right) won, he was the same size and he was the model. Now he’s a great sportscaster.
Ran across an interesting quote (reportedly) by Jony Ive about the difference between measurable (speed, hard drive size, etc.) attributes and the non-measurable ones:
But there are a lot of product attributes that don’t have those sorts of measures. Product attributes that are more emotive and less tangible. But they’re really important. There’s a lot of stuff that’s really important that you can’t distill down to a number. And I think one of the things with design is that when you look at an object you make many many decisions about it, not consciously, and I think one of the jobs of a designer is that you’re very sensitive to trying to understand what goes on between seeing something and filling out your perception of it. You know we all can look at the same object, but we will all perceive it in a very unique way. It means something different to each of us. Part of the job of a designer is to try to understand what happens between physically seeing something and interpreting it.
I think about this a lot. One of the things that inspired Brand Tags originally was a similar quote from my friend Martin Bihl’s 2002 AdWeek article: “The way I look at it, a brand only exists in the consumer’s mind. That other product isn’t a brand yet because consumers don’t really know about it. It’s still a product.”
I’m playing around with publishing in a few different places these days. Trying out Medium for the first time where I wrote a piece on designing and building for states:
Although I may be bastardizing the term from an engineering point of view, when I talk about states I mean all the possible outcomes of a new feature: What happens when you press this button, or that button, or those buttons together, or we get this data back but not that data. Bugs, for the most part, are a matter of overlooked states. From a design perspective, states are about thinking through all the different ways the elements on the page might live and interact. This includes obvious ones like empty states and error messages as well as not-so-obvious ones like random button combinations or accidental page refreshes.
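That idea of enumerating outcomes up front can be made concrete in code. A toy sketch of a screen that is forced to handle every state it can be in (the state names and messages are my own invention, not from the piece):

```python
from enum import Enum

class ScreenState(Enum):
    """Every outcome a list screen can be in; a bug is an overlooked state."""
    LOADING = "loading"
    EMPTY = "empty"
    ERROR = "error"
    POPULATED = "populated"

def render(state, items=None, error=None):
    # The renderer must account for each state explicitly, including the
    # less obvious ones like empty lists and error messages.
    if state is ScreenState.LOADING:
        return "Loading..."
    if state is ScreenState.EMPTY:
        return "Nothing here yet. Add your first item!"
    if state is ScreenState.ERROR:
        return "Something went wrong: %s" % error
    if state is ScreenState.POPULATED:
        return "\n".join(items)
    raise ValueError("unhandled state: %r" % state)
```

Enumerating the states first means the question "what does this screen show when the request fails?" gets answered at design time instead of surfacing later as a bug.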
I wrote a piece over at Forbes.com about some stuff I’ve been thinking about lately, specifically how to start to understand the shifts we’re seeing in social. Here’s the opening two paragraphs:
A few months ago I was asked to put together a presentation about the future of social. As would be expected, I was pretty overwhelmed with the topic and turned it over and over in my head trying to figure out the best way to approach the question. Whenever I find myself in a situation like this I turn to my personal intellectual hero and the person I believe to be the greatest media thinker of the 20th Century, Marshall McLuhan. While he wrote long before the web existed, his theories around how media evolves and interacts with culture are more relevant than they’ve ever been.
At the heart of McLuhan’s theories is his most famous saying: “The medium is the message.” Though like most things McLuhan it requires a fair amount of unpacking, at its core is the idea that we’re affected more by our interactions with the medium itself than we are with the content we experience on it. “The ‘message’ of any medium or technology,” McLuhan explained, “is the change of scale or pace or pattern that it introduces into human affairs.” In his book “Understanding Media” he goes on to give an example: “The railway did not introduce movement or transportation or wheel or road into human society, but it accelerated and enlarged the scale of previous human functions, creating totally new kinds of cities and new kinds of work and leisure.” In other words, it realigned personal expectations and culture and expanded the definition of local.
Last week I sent myself an email with this quote from Felix Salmon’s blog post about the media’s response to the Boston mayhem:
There’s an art to working out where to find fast and reliable information, and to judging new information in light of old information, and to judging old information in light of new information. And there’s an art to synthesizing everything you know, from hundreds of different sources, into a single coherent narrative. It’s not easy, it’s not a skill that most people have, and it’s precisely where news organizations add value.
I have been thinking (and talking) a fair amount about media literacy lately and this quote seemed to sum up the challenge really nicely. Media literacy is a very hard thing to nail down because, unlike regular literacy, it’s pretty hard to test for. Ultimately I think it’s about making sure people understand the role of media (whatever that might mean) in the way they experience and interact with the world around them. That can range from simply being aware that the order of the Google results you’re seeing is most likely not the same as mine (even for the same query) to, as Felix says, “working out where to find fast and reliable information.” Media literacy, like regular literacy I guess, is a scale (one with an ever-moving endgame, but I guess the same could be said of language).
Anyway, when I read this quote from Felix I thought of a few different games I like to play with myself that, although I never really thought of them that way, were sort of mini media-literacy tests:
The Snopes Test: When you read something, especially an email from a distant family member, see if you can immediately sniff out whether you’re going to be able to find an entry for it on Snopes. Knowing whether it’s been proven true or false is good for bonus points.
The GoDaddy Test: This is a funny one, and definitely less useful than the Snopes test, but it’s interesting to predict whether a random domain name someone comes up with is already taken or not.
The Wikipedia Test: This one is actually important I think. If I gave you a random topic, say Snoop Dogg, could you predict whether or not Wikipedia would be the top result? (Bonus points if you predict the actual top result.)
Anyway, all these things help illustrate the challenge with media literacy, which is that ultimately it’s an “I know it when I see it” skill. With that said, it’s one that will continue to have a larger and larger impact on culture as more people’s voices (and information) become part of the news we all sift through on a daily basis.
I Tweeted this, but I thought it was worth sharing (and I’m trying to blog more). From the New England Journal of Medicine (which I grabbed the RSS for years ago and am always excited to run across), a bit of a post-mortem on the medical response to the Boston Marathon bombings. The whole thing is interesting (and very different than most of the stories on the bombing you’ll read), but the most interesting tidbit to me was this:
Although most health care providers in the United States have never treated a bombing victim, lessons learned by military surgeons, emergency physicians, and nurses in Iraq and Afghanistan are progressively percolating through the trauma care community.
I find stories of how new products and technology get adopted quite fascinating. While propaganda is much more associated with politics than brands, there’s a long history of companies using some of the same tactics to sway public opinion in favor of their product. The two examples that come to mind for me are stories like the diamond myth and Listerine’s introduction of halitosis.
Anyway, a podcast I’ve been listening to lately, 99% Invisible, recently covered one of these public relations campaigns, one that ultimately led to the acceptance of cars (and the invention of “jaywalking”). At the time, in the early 1920s, cars were killing lots of people who weren’t used to sharing the streets with them. The car industry had to do something, so they pushed a campaign that has now become familiar to us by way of the NRA: cars don’t kill people, bad drivers kill people. But more interesting, to me at least, was where jaywalking came from:
In the early 20th Century, “jay” was a derogatory term for someone from the countryside. Therefore, a “jaywalker” is someone who walks around the city like a jay, gawking at all the big buildings, and who is oblivious to traffic around him. The term was originally used to disparage those who got in the way of other pedestrians, but Motordom rebranded it as a legal term to mean someone who crossed the street at the wrong place or time.
[Editor’s Note: This turned a bit rambly and I’m definitely out of my zone talking about the law, so feel free to skip if you’re not up for a non-lawyer’s opinion on the law after reading two articles about it.]
Sorry, but I’ve got some time this morning and, like many of you I’m sure, I’m spending it reading as much as I can about yesterday’s situation in Boston. If you were watching TV while the second suspect, Dzhokhar Tsarnaev, was found or listening later during the press conference, the question of whether he would be/was read his Miranda rights came up. In the moments after the capture there was some confusion, which was eventually cleared up during the press conference when the US Attorney Carmen Ortiz confirmed that he had not been read his Miranda rights under the “public safety” exception. I, like most I’d imagine, had never heard of the public safety exception before yesterday (or spent much time thinking about Miranda rights, to be honest).
Slate had an excellent explanation of what happened and why it’s a dangerous precedent:
And so the FBI will surely ask 19-year-old Tsarnaev anything it sees fit. Not just what law enforcement needs to know to prevent a terrorist threat and keep the public safe but anything else it deemed related to “valuable and timely intelligence.” Couldn’t that be just about anything about Tsarnaev’s life, or his family, given that his alleged accomplice was his older brother (killed in a shootout with police)? There won’t be a public uproar. Whatever the FBI learns will be secret: We won’t know how far the interrogation went. And besides, no one is crying over the rights of the young man who is accused of killing innocent people, helping his brother set off bombs that were loaded to maim, and terrorizing Boston Thursday night and Friday. But the next time you read about an abusive interrogation, or a wrongful conviction that resulted from a false confession, think about why we have Miranda in the first place. It’s to stop law enforcement authorities from committing abuses. Because when they can make their own rules, sometime, somewhere, they inevitably will.
This is one of those things where I don’t know quite how to feel. The FBI has a pretty extensive article on the subject that sheds some additional light (I know the FBI is probably not the most balanced outlet for this sort of stuff, but the article is a pretty good and comprehensible look at the history of the law, and the FBI is strongly incentivized to get this stuff right, since if they don’t, any answers could be thrown out). The public safety exception was apparently introduced in a case where the police were chasing a rapist who, the victim informed them, had a gun. When they cornered him in a grocery store he had an empty holster, and the police asked where the gun was. The man, Benjamin Quarles, told the police where he hid the gun and they retrieved it. The court excluded the gun because the police had not read Quarles his rights. The ruling was appealed and eventually reached the Supreme Court, which decided that in situations where public safety was endangered, suspects could be questioned without being read their Miranda rights. (I’m not entirely sure why I’m summarizing all this; I’d suggest reading the whole article.)
Anyway, the more interesting case, also mentioned in the FBI piece, involved a police raid on an apartment in Brooklyn where two suspected bombers lived. “During the raid, both men were shot and wounded as one of them grabbed the gun of a police officer and the other crawled toward a black bag believed to contain a bomb. When the officers looked inside the black bag, they saw pipe bombs and observed that a switch on one bomb was flipped.” From there, the police used the public safety exception to question one of the bombers who had not yet been read his rights:
Officers went to the hospital to question Abu Mezer about the bombs. They asked Abu Mezer “how many bombs there were, how many switches were on each bomb, which wires should be cut to disarm the bombs, and whether there were any timers.” Abu Mezer answered each question and also was asked whether he planned to kill himself in the explosion. He responded by saying, “Poof.”
This case seems, at least to me, to be much closer to the root of the question. I don’t really understand how a gun hidden in a supermarket presents a public safety concern since presumably the police could search the market for the gun after arresting the suspect. However, this latter situation, where there was a big bag of bombs, some of them ready to explode, seems like a pretty reasonable time to question someone prior to their rights being read.
What’s interesting about this, though, is that the question isn’t really whether you can question someone before their rights are read, since that’s obviously possible (and likely a frequent occurrence), but rather in what situations the answers can be used in court against the suspect. Here, again, I agree with Slate: if the questions they asked Tsarnaev were about whether he had planted more bombs around Boston, then that’s fair game, but as soon as they move outside that, things start to feel a lot less right.
Interestingly, the FBI article goes on to explain that Abu Mezer, from the bag of bombs, felt the same way and eventually tried to get his last statement, about whether he intended to kill himself, thrown out:
Abu Mezer sought to suppress each of his statements, but the trial court permitted them, ruling that they fell within the public safety exception. On appeal, Abu Mezer only challenged the admissibility of the last question, whether he intended to kill himself when detonating the bombs. He claimed the question was unrelated to public safety. The circuit court disagreed and noted “Abu Mezer’s vision as to whether or not he would survive his attempt to detonate the bomb had the potential for shedding light on the bomb’s stability.”
Here, without reading the full decision or being a lawyer or knowing anything else about the case, I think I disagree with the court. Seems pretty thin to suggest that the police were given valuable information about the “stability” of the bomb by asking whether he intended to kill himself.
Yesterday morning I lay in bed and watched Twitter fly by. It was somewhere around 7am and lots of crazy things had happened overnight in Boston between the police and the marathon bombers. I don’t remember exactly where things were in the series of events when I woke up, but while I was watching, the still-on-the-loose suspect’s name was released for the first time. As reports started to come in and then, later, get confirmed, people on Twitter did the same thing as me: they started Googling.
As I watched the tiny facts we all uncovered start to turn up in the stream (he was a wrestler, he won a scholarship from the city of Cambridge, he had a link to a YouTube video) I was brought back to an idea I first came across in Bill Wasik’s excellent And Then There’s This. In the book he posits that as a culture we’ve become more obsessed with how a thing spreads than with the thing itself. He uses the success of Malcolm Gladwell’s The Tipping Point to help make the point:
Underlying the success of The Tipping Point and its literary progeny [Freakonomics] is, I would argue, the advent of a new and enthusiastically social-scientific way of engaging with culture. Call it the age of the model: our meta-analyses of culture (tipping points, long tails, crossing chasms, ideaviruses) have come to seem more relevant and vital than the content of culture itself.
Everyone wanted to be involved in “the hunt,” whether it was on Twitter and Google for information about the suspected bomber, on the TV where reporters were literally chasing these guys around, or the police who were battling these two young men on a suburban street. Watching the new tweets pop up I got a sense that the content didn’t matter as much as the feeling of being involved, the thrill of the hunt if you will. As Wasik notes, we’ve entered an age where how things spread through culture is more interesting than the content itself.
To be clear, I’m not saying this is a good or a bad thing (I do my best to stay away from that sort of stuff), but it’s definitely a real thing and an integral part of how we all experience culture today. When I opened the newspaper this morning it was as much to see how much I knew and how closely I’d followed as it was to learn something new about the chase. After reading the cover story that recounted the previous day’s events I turned to Brian Stelter’s appropriately titled News Media and Social Media Become Part of a Real-Time Manhunt Drama.