A few weeks ago my friend Nick sent me a link to this epic 12-part series on Dennis Rodman’s basketball prowess. While Rodman has been in the news for some interesting reasons lately, prior to that he was a basketball player unlike any we’ve ever seen, and this series sets out to prove the point. I was especially taken with part 2(a)(i) on “Player Valuation and Conventional Wisdom,” which has a nice explanation of the battle between the eye test and the math test in sports:
Yet chances are he remains skeptical of the crazy-talk he hears from the so-called “statistical experts” — and there is truth to this skepticism: a typical “fan” model is extremely flexible, takes many more variables from much more diverse data into account, and ultimately employs a very powerful neural network to arrive at its conclusions. Conversely, the “advanced” models are generally rigid, naïve, over-reaching, hubristic, prove much less than their creators believe, and claim even more. Models are to academics like screenplays are to Hollywood waiters: everyone has one, everyone thinks theirs is the best, and most of them are garbage. The broad reliability of “common sense” over time has earned it the benefit of the doubt, despite its high susceptibility to bias and its abundance of easily-provable errors.
For a while now I’ve been fascinated with the idea of serendipity. I was actually going to write a book on the topic but had to shelve it when we started Percolate. (A choice I’m very happy with, as tech industry > book industry.) Anyway, the core idea of the book was going to be that serendipity is something you can both encourage and design for. Because of that I read anything I see that talks about serendipity, and I was pleasantly surprised by this post on Medium by Stef Lewandowski on the subject. I’ll let you read it yourself, since he hits on a lot of the things I’ve been thinking about, but I wanted to share a word he introduced me to: “Zemblanity.” As he explains, “Zemblanity, a word coined by William Boyd in his book Armadillo in the 1980s, is the polar opposite of serendipity.” He goes on to quote the book for the full definition:
So what is the opposite of Serendip, a southern land of spice and warmth, lush greenery and hummingbirds, seawashed, sunbasted? Think of another world in the far north, barren, icebound, cold, a world of flint and stone. Call it Zembla. Ergo: zemblanity, the opposite of serendipity, the faculty of making unhappy, unlucky and expected discoveries by design.
I’ve definitely found myself in zemblanity at times and I usually have to read my way out. It’s nice to have a word to use in case it happens again in the future.
I wrote a reasonably in-depth post over at the Percolate blog on my thoughts on the marriage of Google+ and Android. Here’s a snippet:
As we all know, Google has very publicly announced its intention to build G+ into a massive social platform at any cost. For a while I think many simply nodded and metaphorically patted Google on the head, as if to say, “sure Google, whatever you say.” However, as Android has continued to grow, I’ve noticed something very interesting: It seems that Google’s plan to turn G+ into a platform is to hitch its wagon to Android. With over a billion users, it’s hard to argue with that strategy.
Don’t mean to go Bitcoin heavy around here, but just ran across this article about how Bitcoin is basically an API for money and found it fascinating:
Traditional money does have APIs, but they are closed. You can program the merchant API of the VISA network if you are a trusted merchant. You can send and receive FIX messages if you are a stockbroker or exchange. Regular people, however, don’t even have APIs into their bank accounts, let alone the broader economy. Bitcoin changes all that by not only offering an API for accounts (wallets) and transactions, but also making that API available to everyone.
This idea of providing an API for something that doesn’t have one is one of the more fascinating ideas to me, and a space where I expect we’ll see lots of activity over the coming years. This, more or less, is what the whole spime/internet of things/industrial internet is about: Giving things that couldn’t talk the ability to talk through APIs. Not surprisingly, I’m especially interested in what this means for brands and how they can build out their own APIs. A few years ago I wrote about Starbucks APIs and, in a lot of ways, we think about Percolate as providing a similar interface for brands by codifying it into the system and making it available to consume via API.
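To make the “API for money” idea concrete, here’s a toy sketch (my own illustration, not any real Bitcoin library or API): a wallet anyone can create and a transfer function anyone can call programmatically, with a shared ledger recording every transaction. The names and structure here are invented for illustration.

```python
class Wallet:
    """A toy wallet: just an address and a balance."""

    def __init__(self, address, balance=0):
        self.address = address
        self.balance = balance


def send(ledger, sender, receiver, amount):
    """Record a transfer on a shared ledger, rejecting overdrafts."""
    if sender.balance < amount:
        raise ValueError("insufficient funds")
    sender.balance -= amount
    receiver.balance += amount
    ledger.append((sender.address, receiver.address, amount))


# No bank, no merchant agreement: "regular people" can hold wallets
# and move money with a function call.
ledger = []
alice = Wallet("alice", 50)
bob = Wallet("bob")
send(ledger, alice, bob, 20)
```

The point of the sketch is the access model, not the mechanics: the interface is open to everyone, which is exactly what traditional money’s closed APIs are not.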
So I’m on a pretty good blogging roll, but the posts have all been quite serious. To celebrate Friday, and a 5-post week, here’s an excerpt from the New Yorker’s New Year’s Resolutions for an Anteater:
That’s right, you guessed it, I’m gonna eat a shit ton more ants.
I’m gonna eat them off a tree.
I’m gonna eat them off a coconut.
I’m gonna eat them off one of those classy, slanted desks from an antiques shop.
I’m gonna eat them off a globe, off a paperweight, off a ship in a bottle.
I’m gonna eat so many ants, you would think it’s what I’ve been put on Earth to do, just Hoover up ants like the weirdest thing in the world.
It’s like God was like, “Well, I think I overdid it with the ants.” And then had the idea to send some insane, bulky creature down to take care of them. And that’s me.
I never finished Taleb’s Black Swan, but I vividly remember a point from the opening chapter about what would have happened if a senator had pushed through legislation prior to September 11th to force all airlines to install locked cockpit doors. That person would have never received attention or recognition for preventing an attack, since we never would have known:
The person who imposed locks on cockpit doors gets no statues in public squares, not so much as a quick mention of his contribution in his obituary. “Joe Smith, who helped avoid the disaster of 9/11, died of complications of liver disease.” Seeing how superfluous his measure was, and how it squandered resources, the public, with great help from airline pilots, might well boot him out of office . . .
I was reminded of this as I was reading Tim Harford’s Adapt and this point about how we interpreted the US domination of the first Gulf War:
Another example of history’s uncertain guidance came from the first Gulf War in 1990–1. Desert Storm was an overwhelming defeat for Saddam Hussein’s army: one day it was one of the largest armies in the world; four days later, it wasn’t even the largest army in Iraq. Most American military strategists saw this as a vindication of their main strategic pillars: a technology-driven war with lots of air support and above all, overwhelming force. In reality it was a sign that change was coming: the victory was so crushing that no foe would ever use open battlefield tactics against the US Army again. Was this really so obvious in advance?
While I don’t agree with Paul Krugman’s Bitcoin is Evil post from the weekend, his next post on Bitcoin did include this nugget I had never considered:
It occurs to me that part of the disconnect is that Bitcoin solved a major technical problem, one that people had been thinking about for about 20 years, and we nerds just can’t believe that it doesn’t also solve an economic problem. The technical problem is double spending–if I have some digital money, it’s easy enough to verify cryptographically that it’s real, but if I give it to you, how can you tell that I haven’t also given it to someone else? Until Bitcoin, the answer was to have a bank that knew which coins were valid, so you’d present my coin to the bank, which would check its database and if it’s valid, cancel it and give you a new one. Bitcoin has its decentralized blockchain which is a very clever recasting of the problem so that the state of the “bank” is whatever the majority of bitcoin miners agree that it is. Getting enough of the miners to agree is known as the Byzantine Generals problem, and has a technical history of its own.
As simple as it seems, splitting out the technical achievement from the economic one is a really interesting way to think about it. I haven’t really spent enough time thinking about Bitcoin to have a strong opinion about it specifically, but I do think the idea of a crypto-currency not attached to a specific nation is something bound to happen. In a way, I feel like Bitcoin and Google Glass have a lot in common: Early experiments in form and style that signal what’s to come.
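The double-spending problem the quote describes can be made concrete with a toy sketch (my own illustration, not Bitcoin’s actual protocol, which replaces the central “bank” below with the blockchain and miner consensus Krugman mentions):

```python
class Bank:
    """The pre-Bitcoin answer: a central database of valid coins."""

    def __init__(self, coins):
        self.valid = set(coins)  # coins the bank still recognizes

    def redeem(self, coin):
        """Cancel a coin on first use; reject any later use."""
        if coin in self.valid:
            self.valid.remove(coin)
            return True
        return False  # double spend detected


bank = Bank({"coin-1"})
first = bank.redeem("coin-1")   # honest spend: accepted
second = bank.redeem("coin-1")  # attempted double spend: rejected
```

Bitcoin’s technical achievement was getting this same cancel-on-first-use behavior without the trusted `Bank` object, by having a majority of miners agree on which spends came first.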
A few weeks ago I saw Malcolm Gladwell speak at an American Express OPEN event about entrepreneurship and his new book, David & Goliath. As part of the talk he mentioned that while entrepreneurs are often viewed as financial risk-takers, their greatest skill is often their ability to take social risks. Here’s BusinessWeek describing an article of his from 2010 (the full New Yorker article is behind the subscription wall):
While entrepreneurs want to minimize their financial risk, they’re often more willing to take social risks. During the housing bubble, people thought Paulson was crazy — including the people on the other side of his trades. Sam Walton, another of Gladwell’s examples, borrowed money from his in-laws rather than go to a bank. The willingness to risk reputation and social standing is “just another manifestation of their relentlessly rational pursuit of the sure thing,” he writes.
Anyway, I was reminded of the idea while reading this short piece about some research on the effect of informality on power. Essentially we see non-conformity as a sign of status:
Silvia Bellezza, a doctoral candidate at Harvard Business School, and Francesca Gino and Anat Keinan, two professors there, first studied the link between accomplishment and informality. They found that scholars who dressed down at an academic conference, eschewing blazers for T-shirts, had stronger research records, even controlling for age and gender.
Of course this kind of goes against Gladwell’s point, which suggests that the hardest part is the non-conformity itself … so I’m not sure what it proves. But it’s interesting.
Just posted something new over at the Percolate blog that I thought might be worth sharing here. It’s my four tips for building community based on what I’ve learned from this blog (a community I’ve let lapse) and likemind. I’ll let you read the whole thing over there, but here’s point number 2, respond to everything and everyone:
This is something I used to do on my blog and with likemind, and I believe it had a huge impact. Real communities need to feel connection and your job is to be at the center of that. To make that happen everyone needs to believe there’s a real person on the other end. For likemind that meant emailing back every single person who signed up for the mailing list anywhere in the world. This started lots of conversations and generally let people know this wasn’t just another networking event.
I’m a really big fan of security analyst/guru/cryptographer Bruce Schneier. I’ve been reading his blog for years and actually got a chance to meet him in November at a talk he did for a very small room of us on the NSA and just about anything else anyone wanted to talk about. Schneier is one of the people Edward Snowden allowed access to his documents, which obviously gives him a particularly interesting point of view on the subject. His basic take was best summarized in three statements: (1) this isn’t overly surprising and won’t be going away anytime soon, (2) the very best thing to come out of all this is that the private companies involved have been exposed and some, like Cisco, have seen their business fundamentally hurt, and (3) everything else aside, the one thing to know about everything the NSA was/is doing is that it doesn’t work. The last is obviously the most damning (and Schneier is definitely not the only one saying this). This method of collecting everything in the hope of finding something just doesn’t work as well as good, old-fashioned detective work.
Interestingly I was talking about the Snowden/NSA stuff with a friend from DC who mentioned that the story hadn’t gotten a ton of coverage there (as compared to government shutdown or Healthcare.gov) because it’s perceived as an issue people don’t really have a problem with. Basically we have seen over and over again that we’re willing to throw away liberties for our “freedom” and to fight “terrorism.” Not much to say on this one, just an interesting take.
Finally, and actually the real point of this post, was to share two interesting quotes from an interview Schneier did with Motherboard. The first is about our general perception of what’s secure and what’s not:
Probably the biggest problem with the public’s perception of security is that things are secure as a default. We see this a lot in the voting industry. The voting machine companies will come up with an internet voting machine or electronic voting machine and the onus will be on the security company to prove that it’s broken. It’ll be assumed secure, and that’s just nonsense. When you see a new system, you have to assume it’s insecure, unless you can prove it’s secure. The public perception is reversed. “I have a door lock, it’s secure unless you show me you can break it.” That’s not right—it’s insecure unless you can show me that it is secure.
The second is on the sort of security threats Schneier finds most threatening:
I’m most worried about potential security vulnerabilities in the powerful institutions we’re trusting with our data, with our security. I’m worried about companies like Google and Microsoft and Facebook. I’m worried about governments, the US and other governments. I’m worried about how they are using our data, how they’re storing our data, and what happens to it. I’m less worried about the criminals. I think we’ve kinda got cyber-crime under control, it’s not zero but it never will be. I’m much more worried about the powerful abusing us than the un-powerful abusing us.
Consider this part of an early New Year’s resolution to blog more (I really am going to make a run at it in 2014). Anyway, over the holiday break I, along with many others I’m sure, was having a conversation about Healthcare.gov. I mostly mentioned all the stuff I wrote a few months ago (basically that the things that ruined the project seem to be all the regular stuff — scope creep, too many players — that ruins projects), but I also talked a bit about my disappointment with the media’s reporting of the story. Specifically, the inability to do any serious technical reporting.
The New York Times had the deepest reporting I read and that didn’t come close to actually explaining what went wrong. The story included laughable (to technologists) lines like this: “By mid-November, more than six weeks after the rollout, the MarkLogic database — essentially the website’s virtual filing cabinet and index — continued to perform below expectations, according to one person who works in the command center.” While I understand not everyone is familiar with a database, to call it a virtual filing cabinet and index only says to me that the author has absolutely no idea what a database is.
The point isn’t to pick on the Times, though. Rather it’s just to point out that as technical stories continue to pile up (NSA and Healthcare.gov were amongst the biggest media focus areas of the last three months), we’re going to have to get better at technical reporting. That I still haven’t read a decent explanation of what went wrong technically seems, to me at least, like a major disservice and a dangerous signal for society’s ability to keep up with technical change.
I wrote a little post over on the Percolate blog about Android and how they handle sharing. A snippet:
Overall I’ve been very impressed, but the point of this isn’t to do a review of iOS versus Android (specifically 4.3 Jelly Bean). That seems useful to do at some point, but for now I want to talk about “intents”. This is the function that allows one application to pass you to another for a specific action. The place you see this most is in the share intent, which allows you to hit share in any application and have access to all the other apps you have installed that you might want to share that piece of content on.
This, for me, makes Android feel a lot more social than iOS, which requires each application to hard-code in their sharing functionality (except for the Facebook and Twitter integrations which happen at the app level). This is awesome both from a user experience standpoint (I don’t have to copy and paste anything) as well as a developer standpoint (if you’re building apps you don’t have to make decisions on which platforms to put in/leave out and you can easily register your app as a share service and make it available inside other apps).
Read the whole thing.
Meant to post this last week, but didn’t get to it. Felix Salmon wrote a great post on how wine is one of the few things in the world where spending money actually buys happiness. I’m fascinated by stuff like this:
The more you spend on a wine, the more you like it. It really doesn’t matter what the wine is at all. But when you’re primed to taste a wine which you know a bit about, including the fact that you spent a significant amount of money on, then you’ll find things in that bottle which you love. You can call this Emperor’s New Clothes syndrome if you want, but I like to think that there’s something real going on. After all, what you see on the label, including what you see on the price tag, is important information which can tell you a lot about what you’re drinking. And the key to any kind of connoisseurship is informed appreciation of something beautiful.
This is the messy part of branding. All those factors play into how we feel about brands, products, and just about everything else.
This critique of Zimbardo’s famous Stanford Prison Experiment is really fascinating. Basically the author, who writes intro to psychology textbooks, suggests that the experiment was flawed because it urged students to act in the way they thought typical guards and prisoners would act. Here’s an excerpt that captures it pretty well:
In a nutshell, here’s the criticism, somewhat simplified. Twenty-one boys (OK, young men) are asked to play a game of prisoners and guards. It’s 1971. There have recently been many news reports about prison riots and the brutality of guards. So, in this game, what are these young men supposed to do? Are they supposed to sit around talking pleasantly with one another about sports, girlfriends, movies, and such? No, of course not. This is a study of prisoners and guards, so their job clearly is to act like prisoners and guards—or, more accurately, to act out their stereotyped views of what prisoners and guards do. Surely, Professor Zimbardo, who is right there watching them (as the Prison Superintendent) would be disappointed if, instead, they had just sat around chatting pleasantly and having tea. Much research has shown that participants in psychological experiments are highly motivated to do what they believe the researchers want them to do. Any characteristics of an experiment that let research participants guess how the experimenters expect or want them to behave are referred to as demand characteristics. In any valid experiment it is essential to eliminate or at least minimize demand characteristics. In this experiment, the demands were everywhere.
I find stuff like this really interesting. I think most research is flawed in that it asks people questions they aren’t really prepared to answer and in turn forces them to come up with a conclusion. I thought about this a lot when I made Brand Tags and people were asking me to put up logos that no one had seen before so they could get feedback. I would always argue that this was measuring brand perception and if no one knew your brand they would just comment on your logo, which isn’t particularly helpful. Brands, ultimately, are the sum total of all the experiences one has and no one ever experiences one by just seeing a logo on a blank page. They hear about it, see it on a shelf next to another product, or any number of other contextual clues. Obviously this situation is pretty different, but I think it’s part of a very broad mistake research makes in not controlling for context (or lack thereof).
This Ars Technica story of some malware that can transmit itself even when all the obvious transmission vehicles (power, Bluetooth, Wi-Fi) have been removed is mind-boggling:
Ruiu said he arrived at the theory about badBIOS’s high-frequency networking capability after observing encrypted data packets being sent to and from an infected machine that had no obvious network connection with—but was in close proximity to—another badBIOS-infected computer. The packets were transmitted even when one of the machines had its Wi-Fi and Bluetooth cards removed. Ruiu also disconnected the machine’s power cord to rule out the possibility it was receiving signals over the electrical connection. Even then, forensic tools showed the packets continued to flow over the airgapped machine. Then, when Ruiu removed the internal speaker and microphone connected to the airgapped machine, the packets suddenly stopped. With the speakers and mic intact, Ruiu said, the isolated computer seemed to be using the high-frequency connection to maintain the integrity of the badBIOS infection as he worked to dismantle software components the malware relied on.
Last weekend I, like many others, read the New York Times story about the troubles with Healthcare.gov with great interest. The project was marred by things that any of us who have worked on a digital project with a launch date have experienced: moving deadlines, late specs, different parties with different interests, and an ever-expanding scope.
The most surprising thing wasn’t that any of these problems were odd; it was how familiar they all sounded. Take this, for instance:
Deadline after deadline was missed. The biggest contractor, CGI Federal, was awarded its $94 million contract in December 2011. But the government was so slow in issuing specifications that the firm did not start writing software code until this spring, according to people familiar with the process. As late as the last week of September, officials were still changing features of the Web site, HealthCare.gov, and debating whether consumers should be required to register and create password-protected accounts before they could shop for health plans.
As I was walking down Broadway a few days later I had a conversation with another Percolator about why it was taking so long to finish construction on the street. If you’ve spent any time in New York (or just about anywhere else) you’ve wondered how a public works project could drag on for the seemingly endless amount of time it does. But then I started to think about Healthcare.gov and how easy it is, relatively, to build digital things instead of physical things. Assuming those same issues happen on the ground (scope creep, competing interest groups, etc.), for the first time I felt like I could understand what was going on. Then when you start to layer on physical accessibility laws (ramps, etc.) and the fact that you can’t launch an alpha staircase (three stairs instead of ten), it all became pretty clear.
Now this isn’t to say there’s an excuse and we shouldn’t be able to complete these projects more efficiently, but at least I could see where the time goes (and feel grateful that I a) don’t make things that involve digging up the street and b) work in a world where we don’t have to deal with the nonsense).
Matthew Yglesias has a short little article about how J.C. Penney is killing Wi-Fi in its stores, one of the things Ron Johnson put in place during his very short tenure:
I can think of no better example of how Ron Johnson destroyed J.C. Penney than the company’s slightly ridiculous plan to offer free Wi-Fi in all its stores. That this was a bad idea is not the reason the store’s been struggling. But the fact that Johnson couldn’t see that this simply isn’t something his customers would have any real desire to take advantage of spoke volumes about the strategic errors happening in Penneyland.
I’m not sure this was such a stupid move by Johnson. A guy I spoke to at the American Marketing Association conference in New Orleans last week (he used to work at Target) made the point that in many of these giant stores the cell coverage is so bad that Wi-Fi is the only way to get a connection and potentially do something interesting with customers’ mobile phones in store. If we assume that mobile research will be an important part of buying in the future, then it might turn out they’ll just have to go back and reinstall all that Wi-Fi anyway.
I love this short New Yorker video about Greg Packer, a man who really loves to see his name in print.
I’m incredibly proud of this blog post by James OB, one of the engineers at Percolate, about why he likes the engineering culture at the company. The whole thing is well worth a read, but I especially liked this bit:
The autonomy to solve a problem with the best technology available is a luxury for programmers. Most organizations I’ve been exposed to are encumbered, in varying degrees, by institutional favorites or “safe” bets without regard for the problem to be solved. Engineering at Percolate has so far been free of that trap, which results in a constantly engaging, productive mode of work.
My friend Philip James is riding his motorcycle around the world. He’s just passed through Mongolia and his account is amazing.
However, this is what I’m here for, and I weave my way slowly across the countryside. At times the going is fast, and I follow nomad tracks across the hillsides; at others it’s slow going as I navigate the bike over rocks, through sand, and deep mud. There are ancient valleys, mountain passes, and lots of rivers. Some have crude private bridges crossing them, others dilapidated bridges that have fallen into disrepair and seeming disuse. I either cross these at my own peril, or ride up and down the river bank looking for the widest and shallowest place to ride across. As I’m solo, I can’t take as many chances as groups can, and a spill in deep water would ruin all my gear, and possibly even the bike’s engine. And so, when the river looks dicey, I stop, get off the bike, and wade across. My boots come up to nearly my knees, but the water typically washes over them. That’s fine, as I can easily ride through water two feet deep. Much higher and the wheels are fully submerged, and as the water comes over the seat, the air intake, exhaust and the bike’s main computer are all dangerously close to becoming submerged. With these crossings I spend much of the next week with sodden feet.
There is an interesting little article on innovation and Picasso over at Medium. Basically it suggests that radical innovation happens when the market is most receptive to it:
Sgourev’s analysis of Cubism suggests that having an exceptional idea isn’t enough: if it is to catch fire, the market conditions have to be right. That’s a question of luck and timing as much as it is of genius. The closest modern analogy to Picasso’s Paris is Silicon Valley in the early days of the dotcom boom, with art dealers as venture capitalists and entrepreneurs as artists.
This reminded me a lot of Duncan Watts’ research on influence on the web, where he concluded, “large scale changes in public opinion are not driven by highly influential people who influence everyone else, but rather by easily influenced people, influencing other easily influenced people.” In fact, Watts also used a fire metaphor to explain the dynamic in his conclusion:
Some forest fires, for example, are many times larger than average; yet no-one would claim that the size of a forest fire can be in any way attributed to the exceptional properties of the spark that ignited it, or the size of the tree that was the first to burn. Major forest fires require a conspiracy of wind, temperature, low humidity, and combustible fuel that extends over large tracts of land. Just as for large cascades in social influence networks, when the right global combination of conditions exists, any spark will do; and when it does not, none will suffice.
The challenge, of course, as Watts points out in his research, is that consistently finding and predicting this environment is all but impossible. We may understand some of the factors, but the situation is just too complex to predict with any accuracy. As much as we give credit to innovators who capture those radical moments, we also need to appreciate the role of luck in their success.
Good short article from Wages of Wins on the economics of doping in sports. On the A-Rod situation:
You may find yourself arguing: isn’t it costly for a player to sit out the games? If A-Rod is denied the 2014 season, he will give up some income, right? True–he might. But, the decision to break the rules and take the banned substances is really made based on the player’s expected benefits weighed against the expected costs. Nobel Prize winning economist Gary Becker introduced this principle in his paper Crime and Punishment: An Economic Approach (1965). The expected costs are equal to the penalty (i.e., the game suspension or ban) multiplied by the probability of getting caught and the probability of being punished (having the penalty applied). So, even if the 2014 ban holds, A-Rod will still have three years on his contract at $61 million (plus incentives for various homerun milestones)! From his public comments one gathers A-Rod is not expecting the penalty to be applied in full. So, no matter how you slice it up, A-Rod’s behavior–though illegal–was rational economically speaking. And, that is why tomorrow’s PED headline will be old news.
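Becker’s expected-cost logic, as the article applies it, is just a multiplication. A quick numeric sketch (the dollar figure and probabilities below are made up for illustration, not from the article):

```python
def expected_cost(penalty, p_caught, p_punished):
    """Becker: expected cost = penalty x P(caught) x P(penalty applied)."""
    return penalty * p_caught * p_punished


# Hypothetical numbers: a lost $25M season, a 50% chance of getting
# caught, and a 25% chance the full penalty actually sticks.
cost = expected_cost(25_000_000, 0.5, 0.25)  # $6,250,000 expected cost
```

Against tens of millions in remaining contract value, a discounted expected cost like this is the article’s whole point: the math can favor breaking the rules.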
Yesterday James, my co-founder at Percolate, sent me a really interesting nugget about how Apple structures its company, about 35 minutes into this Critical Path podcast. Essentially Horace (from Asymco) argues that Apple’s functional structure actually allows it to innovate and execute far better than a company structured in the more traditional, divisional way. As opposed to most other companies, where managers are encouraged to pick up experience across the enterprise, Apple encourages (or forces) people to stay in their role for the entirety of their career. On top of that, roles are not horizontal by product (head of iPhone) but vertical by discipline (design, operations, technology), and also quite siloed. He goes on to say that the only parallel he could think of is the military, which basically operates that way. (I know I haven’t done the best job articulating it; that’s because, as I listen again, I don’t think the thesis is articulated all that well.)
Below is my response back to James:
While I totally agree with what he says about the structure (that they’re organized functionally and it works for them), I’m not sure you can just conclude that’s ideal or drives innovation. The requirement of an org structure like that is that all vision/innovation comes from the top and moves down through the organization. That’s fine when you have someone like Jobs in charge, but it’s questionable what happens when he leaves (or when this first generation he brought up leaves maybe). Look at what happened when Jobs left the first time as evidence for how they lost their way. Apple is a fairly unique org in that it has a very limited number of SKUs and, from everything we’ve heard, Jobs was the person driving most/all.
My question back to Horace would be what will Apple look like in 20 years. IBM and GE are 3x older than Apple is and part of how they’ve survived, I’d say, is that they’ve built the responsibility of innovation into a bit more of a cross-functional discipline + centralized R&D. I don’t know if it matters, but if I was making a 50 year bet on a company I’d pick GE over Apple and part of it is that org structure and its ability to retain knowledge.
The military is actually a perfect example: Look at the struggles it has had over the last 20 years as the enemy stopped being similarly structured organizations and moved to being loosely connected networks. History has shown us over and over that centralized organizations struggle with decentralized enemies. Now the good news for Apple is that everyone else is pretty much playing the same highly organized and very predictable game (with the exception of Google, which is in a functionally different business, and Samsung, which, because of its manufacturing resources and Asian heritage, exists in a little bit of a different world).
Again, in a 10 year race Apple wins with a structure like this. But in a 50 year race, in which your visionary leader is unlikely to still be manning the helm, I think it brings up a whole lot of questions.
I’m a sucker for all quotes about how one thing or another was going to ruin society. Most of these are about media, but I couldn’t help myself when I saw this one about curiosity from an article on The American Scholar:
Specific methods aside, critics argued that unregulated curiosity led to an insatiable desire for novelty—not to true knowledge, which required years of immersion in a subject. Today, in an ever-more-distracted world, that argument resonates. In fact, even though many early critics of natural philosophy come off as shrill and small-minded, it’s a testament to Ball that you occasionally find yourself nodding in agreement with people who ended up on the “wrong” side of history.
I actually think it would be pretty great to collect all these in a big book … A paper book, of course.
I really love this quote which came from an article Umberto Eco wrote about Wikileaks by way of this very excellent recap of a talk by the head of technology at the Smithsonian Cooper-Hewitt National Design Museum:
I once had occasion to observe that technology now advances crabwise, i.e. backwards. A century after the wireless telegraph revolutionised communications, the Internet has re-established a telegraph that runs on (telephone) wires. (Analog) video cassettes enabled film buffs to peruse a movie frame by frame, by fast-forwarding and rewinding to lay bare all the secrets of the editing process, but (digital) CDs now only allow us quantum leaps from one chapter to another. High-speed trains take us from Rome to Milan in three hours, but flying there, if you include transfers to and from the airports, takes three and a half hours. So it wouldn’t be extraordinary if politics and communications technologies were to revert to the horse-drawn carriage.
In response to my little post about describing the past and present, Jim, who reads the blog, emailed me to say it could be referred to as an “atemporal present,” which I thought was a good turn of phrase. I googled it and ran across this fascinating Guardian piece explaining their decision to get rid of references to today and yesterday in their articles. Here’s a pretty large snippet:
It used to be quite simple. If you worked for an evening newspaper, you put “today” near the beginning of every story in an attempt to give the impression of being up-to-the-minute – even though many of the stories had been written the day before (as those lovely people who own local newspapers strove to increase their profits by cutting editions and moving deadlines ever earlier in the day). If you worked for a morning newspaper, you put “last night” at the beginning: the assumption was that reading your paper was the first thing that everyone did, the moment they awoke, and you wanted them to think that you had been slaving all night on their behalf to bring them the absolute latest news. A report that might have been written at, say, 3pm the previous day would still start something like this: “The government last night announced …”
All this has changed. As I wrote last year, we now have many millions of readers around the world, for whom the use of yesterday, today and tomorrow must be at best confusing and at times downright misleading. I don’t know how many readers the Guardian has in Hawaii – though I am willing to make a goodwill visit if the managing editor is seeking volunteers – but if I write a story saying something happened “last night”, it will not necessarily be clear which “night” I am referring to. Even in the UK, online readers may visit the website at any time, using a variety of devices, as the old, predictable pattern of newspaper readership has changed for ever. A guardian.co.uk story may be read within seconds of publication, or months later – long after the newspaper has been composted.
So our new policy, adopted last week (wherever you are in the world), is to omit time references such as last night, yesterday, today, tonight and tomorrow from guardian.co.uk stories. If a day is relevant (for example, to say when a meeting is going to happen or happened) we will state the actual day – as in “the government will announce its proposals in a white paper on Wednesday [rather than ‘tomorrow’]” or “the government’s proposals, announced on Wednesday [rather than ‘yesterday’], have been greeted with a storm of protest”.
What’s extra interesting about this to me is that it’s not just about the time you’re reading that story, but also the space the web inhabits. We’ve been talking a lot at Percolate lately about how social is shifting the way we think about audiences since for the first time there are constant global media opportunities (it used to happen once every four years with the Olympics or World Cup). But, as this articulates so well, being global also has a major impact on time since you move away from knowing where your audience is in their day when they’re consuming your content.
I’m sure you’ve all seen this quote. It’s attributed to Robert Stephens, founder of Geek Squad, and goes something like: “Advertising is the tax you pay for being unremarkable.” (I was reminded of it most recently reading Josh Porter’s blog, Bokardo.) It sounds good and, at first blush, correct, but it’s not, for lots of reasons.
Broadly, the line between advertising, marketing, branding, and communications has always been a blurry one. Depending on who you talk to they have a very different definition. For the purposes of the quote, let’s assume when Stephens was talking about advertising he was specifically referring to the buying of media space across platforms like television, magazines, and websites.
With that as the working definition, there are lots of complicated reasons big companies advertise their products. Here are a few:
- Distributors love advertising: If you’re a CPG company you advertise as much for the supermarkets as you do for your product. The more money you spend the better spot they’re willing to give you on the shelf (the thought being that people will be looking for your product). I don’t think there is anyone out there who would argue shelf placement doesn’t matter. At the end of the day supermarkets are your customer if you’re a CPG company, so keeping them happy is a pretty high-priority job.
- Advertising is good at making people think you’re bigger than you are: Sometimes a company or brand wants to “play above its weight,” making people think it’s bigger than it actually is. When we see something on TV or in print, we mostly assume there is a big corporation behind it. Sometimes that’s more important than actually selling the product.
- Sometimes you’re not selling a product at all: There are many companies who advertise for reasons wholly disconnected from their product. GE, for example, isn’t running TV commercials about wind turbines solely to communicate with the thousands of people who are potentially in the market for a multi-million dollar purchase. Part of why they do it is to communicate with the public at large, which is both a major shareholder in the company and the end consumer of many of its products (many planes we fly on run GE engines, and our electricity probably wouldn’t reach our houses without GE products). How remarkable their products are has no bearing in this case, since we would never actually be in the market for the vast majority of the things they produce.
Broadly, though, the point I’m trying to make is that while many write off advertising as having no purpose (or being “a tax”), it’s just not true. What’s more, as advertising becomes a more seamless part of the process of being a brand in social, I think this will only become more true. If you see a piece of content performing well on Twitter or Facebook why would you not pay to promote that content and see it reach an audience beyond the core? At that point you’ve eliminated the biggest challenge traditionally associated with advertising (spending tons of money to produce something and having no idea whether it will actually have an effect on people). Seems to me if you’re not willing to entertain the idea you’re just standing on principle.
This week’s NYTimes Magazine economics column is all about timesheets. While the whole thing is worth a read, I found the history of timesheets especially interesting:
The notion of charging by units of time was popularized in the 1950s, when the American Bar Association was becoming alarmed that the income of lawyers was falling precipitously behind that of doctors (and, worse, dentists). The A.B.A. published an influential pamphlet, “The 1958 Lawyer and His 1938 Dollar,” which suggested that the industry should eschew fixed-rate fees and replicate the profitable efficiencies of mass-production manufacturing. Factories sold widgets, the idea went, and so lawyers should sell their services in simple, easy-to-manage units. The A.B.A. suggested a unit of time — the hour — which would allow a well-run firm to oversee its staff’s productivity as mechanically as a conveyor belt managed its throughput. This led to generations of junior associates working through the night in hopes of making partner and abusing the next crop. It was adopted by countless other service professionals, including accountants.
In what I assume is a response to this article that was floating around about placebo buttons (buttons that are there to make you feel better, but don’t do anything), William Gibson tweeted this:
I love the internet. That’s all.
I don’t know that I have a lot more to add than what Russell wrote here, but I like the way he described the challenge of describing something that is simultaneously happening the past and present (in this case, describing a soccer replay):
This is normally dismissed as typical footballer ignorance but it’s better understood when you think of a footballer standing in front of a monitor talking you through the goal they’ve just scored. They’re describing something in the past, which also seems to be happening now, which they’ve never seen before. The past and the present are all mushed up – it’s bound to create an odd tense.
I spend a lot of time thinking about building products (and, more specifically, building teams to build products). With that in mind I really enjoyed this rc3.org post about the seven signs of a dysfunctional engineering team, especially this bit about building tools instead of process:
Preference for process over tools. As engineering teams grow, there are many approaches to coordinating people’s work. Most of them are some combination of process and tools. Git is a tool that enables multiple people to work on the same code base efficiently (most of the time). A team may also design a process around Git — avoiding the use of remote branches, only pushing code that’s ready to deploy to the master branch, or requiring people to use local branches for all of their development. Healthy teams generally try to address their scaling problems with tools, not additional process. Processes are hard to turn into habits, hard to teach to new team members, and often evolve too slowly to keep pace with changing circumstances. Ask your interviewers what their release cycle is like. Ask them how many standing meetings they attend. Look at the company’s job listings, are they hiring a scrum master?
One of the things I try to communicate to the whole company is that it’s everyone’s responsibility to build products. Products are reusable and scalable assets. While process is a product, tools are (almost always) better. I am working on a long and sprawling explanation of all my product thinking, but that will have to wait for another day.
I really like this little post on “Borges and the Sharknado Problem.” The gist:
We can apply the Borgesian insight [why write a book when a short story is equally good for getting your point across] to the problem of Sharknado. Why make a two-hour movie called Sharknado when all you need is the idea of a movie called Sharknado? And perhaps, a two-minute trailer? And given that such a movie is not needed to convey the full brilliance of Sharknado – and it is, indeed, brilliant – why spend two hours watching it when it is, wastefully, made?
On Twitter my friend Ryan Catbird responded by pointing out that that’s what makes the Modern Seinfeld Twitter account so magical: They give you the plot in 140 characters and you can easily imagine the episode (and that’s really all you need).
This morning I woke up to this Tweet from my friend Nick:
It’s great to have friends who discover interesting stuff and send it my way, so I quickly clicked over and read Jeff’s piece on sponsored content and media as a service. I’m going to leave the latter unturned as I find myself spending much less time thinking about the broader state of the media since starting Percolate two-and-a-half years ago. But the former, sponsored content, is clearly a place I play, and I was curious to see what Jarvis thought.
Quickly I realized he thought something very different than me (which, of course, is why I’m writing a blog post). Mostly I started getting agitated right around here: “Confusing the audience is clearly the goal of native-sponsored-brand-content-voice-advertising. And the result has to be a dilution of the value of news brands.” While that may be true in the advertorial/sponsored content/native advertising space, it misses the vast majority of content being produced by brands on a day-to-day basis. That content is being created for social platforms like Facebook, Twitter, Instagram, and the like by brands that have acquired massive audiences, frequently much larger than the media companies Jarvis is referring to. Again, I think this exists outside native advertising, but if Jarvis is going to conflate content marketing and native advertising, then it seems important to point out. To give this a sense of scale, the average brand had 178 corporate social media accounts in January, 2012. Social is where they’re producing content. Period.
The second issue came in a paragraph about the scalability of content for brands:
Now here’s the funny part: Brands are chasing the wrong goal. Marketers shouldn’t want to make content. Don’t they know that content is a lousy business? As adman Rishad Tobaccowala said to me in an email, content is not scalable for advertisers, either. He says the future of marketing isn’t advertising but utilities and services. I say the same for news: It is a service.
Two things here: First, I agree that the current ways brands create content aren’t scalable. That’s because they’re using methods designed for creating television commercials to create 140-character Tweets. However, to conclude that content is a lousy business is missing the point a bit. Content is a lousy business when you’re selling ads around that content. The reason for this is reasonably simple: You’re not in the business of creating content, you’re in the business of getting people back to your website (or to buy your magazine or newspaper). Letting your content float around the web is great, but at the end of the day no eyeballs mean no ad dollars. But brands don’t sell ads, they sell soap, or cars, or soda. Their business is somewhere completely different and, at the end of the day, they don’t care where you see their content as long as you see it. What this allows them to do is outsource their entire backend and audience acquisition to the big social platforms and just focus on the day-to-day content creation.
Finally, while it’s nice to think that more brands will deliver utilities and services on top of the utilities and services they already sell, delivering those services will require the very audience they’re building on Facebook, Twitter, and the like to begin with.
One of the podcasts I’ve been enjoying as of late is Tim Harford’s Pop Up Ideas from the BBC. In the latest episode David Kilcullen talks about feral cities (direct MP3 link), which essentially flip the idea of the failed state on its head, suggesting that it’s not the state that fails the city, but rather the city that fails the state (the podcast has a deeper explanation). Here’s a bit more from a short New York Times piece on the idea from a few years ago:
Richard Norton, a Naval War College scholar who has developed a taxonomy of what he calls feral cities, says that there are numerous places slipping toward Mogadishu, perhaps the only fully feral city nowadays. As public services disintegrate, residents are forced to hire private security or pay criminals for protection. The police in Brazil have fallen back on a containment policy against gangs ruling the favelas, while the rich try to stay above the fray, fueling the busiest civilian helicopter traffic in the world (there are 240 helipads in São Paulo; there are 10 in New York City). In Johannesburg, much of downtown, including the stock exchange, has been abandoned to squatters and drug gangs. In Mexico City, crime is soaring despite the presence of 91,000 policemen. Karachi, Pakistan, where 40 percent of the population lives in slums, plays host to gangland violence and to Al Qaeda cells.
I like this explanation of the importance of privacy from Glenn Greenwald, who has been the main outlet for all things Snowden:
And let me just say one other thing: sometimes it is hard to convey why privacy is so important, because it’s kind of ethereal. But I think people instinctively understand the reason it’s so important, because they do things like put passwords on their email accounts and locks on their bedroom and bathroom doors, which reflect a desire to keep others out of certain spaces where they can go to be alone. That’s a way of making clear that they value privacy. And the reason privacy is so critical is because it’s only when we know we’re not being watched that we can engage in creativity, or dissent, or pushing the boundaries of what’s deemed acceptable. A society in which people feel like they’re always being watched is one that breeds conformity, because people will avoid doing anything that can prompt judgment or condemnation. This is a crucial part of why a surveillance state is so damaging — it’s why all tyrannies know that watching people is the key to keeping them in line. Because only when you’re not being watched can you really be a free individual.
Clive Thompson, writing about finding the cruise ship that crashed in Italy last year on Google maps (Maps link here), made a really interesting point about how we interpret strange visuals in the age of digital technology and video games:
I remember, back when the catastrophe first occurred, being struck by how uncanny — how almost CGI-like — the pictures of the ship appeared. It looks so wrong, lying there sideways in the shallow waters, that I had a sort of odd, disassociative moment that occurs to me with uncomfortable regularity these days: The picture looks like something I’ve seen in some dystopic video game, a sort of bleak matte-painting backdrop of the world gone wrong. (In the typically bifurcated moral nature of media, you could regard this either as a condemnation of video games — i.e. they’ve trained me to view real-life tragedy as unreal — or an example of their imaginative force: They’re a place you regularly encounter depictions of the terrible.) At any rate, I think what triggers this is the sheer immensity of the ship; it’s totally out of scale, as in that photo above, taken by Luca Di Ciaccio.
Growing up playing video games I definitely know the feeling. I do wonder, though, whether this is actually a new feeling or we could have said the same about feeling like something was a movie when that was still transforming how we saw the world. When I was in Hong Kong in December, for example, I felt like it was more a reflection of Blade Runner than anything else, for what it’s worth. Either way, though, it’s an interesting notion.
Lifehacker has an answer for why you need to turn your router off for 10 seconds:
A lot of modern technology contains capacitors! These are like energy buckets, little batteries that fill up when you put a current through them, and discharge otherwise. 10 seconds is the time it takes most capacitors to discharge enough for the electronics they’re powering to stop working. That’s why when you turn your PC off at the wall, things like an LED on your motherboard take a few seconds to disappear. You probably could wait a different time, but 10 seconds is the shortest time you can be sure everything’s discharged.
I’ve always wondered about that …
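For the curious, the “10 seconds” falls out of simple RC-discharge arithmetic: a capacitor’s voltage decays as V(t) = V0·e^(−t/RC), so the time to drop below some threshold is t = RC·ln(V0/V_threshold). Here’s a minimal sketch in Python — the component values (3.3 V rail, 0.5 V brown-out threshold, 10 kΩ bleed resistance, 470 µF capacitance) are assumptions for illustration, not specs of any actual router:

```python
import math

def discharge_time(v0, v_threshold, r_ohms, c_farads):
    """Time for an RC circuit's capacitor to decay from v0 to v_threshold.

    Solves v0 * exp(-t / (R*C)) = v_threshold for t.
    """
    return r_ohms * c_farads * math.log(v0 / v_threshold)

# Assumed, purely illustrative values: a 3.3 V supply, logic that stops
# working below 0.5 V, bleeding through 10 kOhm across 470 uF.
t = discharge_time(3.3, 0.5, 10_000, 470e-6)
print(f"{t:.1f} s")  # prints "8.9 s"
```

With those made-up numbers the answer lands just under nine seconds, which is roughly why “about 10 seconds” works as a safe rule of thumb regardless of the exact hardware.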
Matthew Yglesias makes a decent argument that Apple Maps, while a terrible product, is succeeding at its intended goal:
To get out of that bind, Apple has never needed to make a product that’s actually superior to Google Maps. What they’ve needed to do is produce an application that clears two bars. One is that it has to be good enough that your typical doesn’t-care-too-much phone consumer doesn’t reject iOS out of hand. The other is that it has to be good enough such that if Google doesn’t want to lose the entire iOS customer base it has to scramble and release a great Google Maps app for iOS and not just for Android. Apple’s Maps app easily clears both of those bars. Before the release of iOS 6, the inferiority of Apple’s Google-powered iOS Maps app to Android’s Google Maps was a real reason to prefer an Android phone. Today, there is no such reason. Not because Apple Maps is as good as Google Maps, but because Google Maps for iOS is as good as Google Maps for Android.
This was actually part of the original Chrome strategy as well. While Google released the product because long-term they couldn’t afford to have their biggest competitor (at the time) controlling the majority of their usage, they also did it to push Internet Explorer to innovate so that Google could deliver a better and faster experience for its customers. By entering the browser market Google was able to light a fire under Microsoft that a company like Firefox never could and the versions of IE that followed were a thousand times better than what had existed before.
I can’t remember exactly where, but right after the DOMA decision I read an article that basically said part of the reason this happened so quickly is that people in political power were able to relate to the plight of LGBT people, since there is a chance their own son or daughter is gay. By contrast, as the article pointed out, a person in Congress is unlikely to have someone poor in their family.
As I read Obama’s comments about the Trayvon Martin decision it struck me how interesting it is to have a president who can actually say something like this:
There are, frankly, very few African-American men who haven’t had the experience of walking across the street and hearing the locks click on the doors of cars. That happens to me, at least before I was a senator. There are very few African-Americans who haven’t had the experience of getting on an elevator and a woman clutching her purse nervously and holding her breath until she had a chance to get off. That happens often. And I don’t want to exaggerate this, but those sets of experiences inform how the African-American community interprets what happened one night in Florida.
However you feel about the decision, it seems that the law in Florida favored the last man standing and the jury made a decision that fell squarely in the bounds of the law as it was written. That doesn’t make it any less sad to see what happened or any more right that George Zimmerman decided to move towards a situation that he could have easily walked away from, but it does bring into focus the gap that exists between the people that write laws and the citizens those laws are meant to serve.
Overall, though, this feels like part of a larger state of American politics that leaves people feeling shocked, while at the same time struggling to find any individual situation shocking. I feel the same way about everything having to do with PRISM, the NSA program to spy on citizens that we’ve all heard lots about at this point. I’ve been asked what I thought of it a few times and my general reaction has been exactly the same as the Martin case: Shocked, but not shocking. I’m not surprised our government is spying on its citizens, and I believe Snowden should be treated as a whistleblower as long as he doesn’t release any details about America’s spying on foreign governments (not that I doubt they are, but I do think that’s a line where things become dangerous).
My big issue with PRISM and the culture around it is that it’s part of a larger move that allows constitutional decisions to be made outside the Supreme Court. As the New York Times reported a few weeks ago:
The rulings [of the secret surveillance court], some nearly 100 pages long, reveal that the court has taken on a much more expansive role by regularly assessing broad constitutional questions and establishing important judicial precedents, with almost no public scrutiny, according to current and former officials familiar with the court’s classified decisions.
I don’t have any problem at all with the government spying on people it thinks are bad guys, I just think it should be done within the framework of the law. For all the flaws of our government, the three-branch system the Constitution laid out is still a pretty good way to make sure no one party can consolidate too much power. What PRISM (and Guantanamo and lots of the other stuff that happened after September 11th) allows for are decisions that happen outside the system, and, judging from the experiences thus far with Guantanamo and PRISM, when that happens some basic Constitutional rights get trampled.
If there’s a bright side to all this it’s that we’re not so deep into it that we can’t turn things around (at least on the PRISM/Guantanamo stuff; Trayvon Martin and American political racism are a different story). The reality is that even though the world has certainly gotten more complex, we’re only 12 years into the meat of the movement to erode the system of checks and balances. I hope that the outing of PRISM and, ideally, the closing of Guantanamo will help apply some brakes to that trend. The goal, as odd as it may sound, is to return to a time when finding out the government is spying on its citizens, or throwing people in jail without telling them the charge, will once again be shocking.
This is three years old, but I just ran across it and it’s just as relevant today as it was then. Apparently in response to Nicholas Carr’s book The Shallows, Steven Pinker wrote a great op-ed about how technology isn’t really ruining all that stuff that technology is constantly claimed to be ruining. A snippet:
The effects of consuming electronic media are also likely to be far more limited than the panic implies. Media critics write as if the brain takes on the qualities of whatever it consumes, the informational equivalent of “you are what you eat.” As with primitive peoples who believe that eating fierce animals will make them fierce, they assume that watching quick cuts in rock videos turns your mental life into quick cuts or that reading bullet points and Twitter postings turns your thoughts into bullet points and Twitter postings.