Differing opinions (sort of) from the New York Times over whether technology is or isn’t what the science-fiction writers imagined. From a November article titled “In Defense of Technology”:
Physical loneliness can still exist, of course, but you’re never friendless online. Don’t tell me the spiritual life is over. In many ways it’s only just begun. Technology is not doing what the sci-fi writers warned it might — it is not turning us into digits or blank consumers, into people who hate community. Instead, there is evidence that the improvements are making us more democratic, more aware of the planet, more interested in the experience of people who aren’t us, more connected to the mysteries of privacy and surveillance. It’s also pressing us to question what it means to have life so easy, when billions do not. I lived through the age of complacency, before information arrived and the outside world liquified its borders. And now it seems as if the real split in the world will not only be between the fed and the unfed, the healthy and the unhealthy, but between those with smartphones and those without.
And now, in response to the Sony hack, Frank Bruni writes, “The specter that science fiction began to raise decades ago has come true, but with a twist. Computers and technology don’t have minds of their own. They have really, really big mouths.” He continues:
“Nothing you say in any form mediated through digital technology — absolutely nothing at all — is guaranteed to stay private,” wrote Farhad Manjoo, a technology columnist for The Times, in a blog post on Thursday. He issued a “reminder to anyone who uses a digital device to say anything to anyone, ever. Don’t do it. Don’t email, don’t text, don’t update, don’t send photos.” He might as well have added, “Don’t live,” because self-expression and sharing aren’t easily abandoned, and other conduits for them — landlines, snail mail — no longer do the trick.
But the New York Times doesn’t stop disagreeing with itself there (by the way, this is totally fine, I have no problem with the contradictions and actually appreciate them). From “As Robots Grow Smarter, American Workers Struggle to Keep Up”:
Yet there is deep uncertainty about how the pattern will play out now, as two trends are interacting. Artificial intelligence has become vastly more sophisticated in a short time, with machines now able to learn, not just follow programmed instructions, and to respond to human language and movement. … At the same time, the American work force has gained skills at a slower rate than in the past — and at a slower rate than in many other countries. Americans between the ages of 55 and 64 are among the most skilled in the world, according to a recent report from the Organization for Economic Cooperation and Development. Younger Americans are closer to average among the residents of rich countries, and below average by some measures.
My opinion falls into the protopian camp: Things are definitely getting better, but new complexities are bound to emerge as things change. It’s not going to be simple and there are lots of questions we should be asking ourselves about how technology is changing us and the world, but it’s much healthier to start from a place of positivity and recognition that much of the change is good change.
Interesting point of distinction between Google’s and Facebook’s approaches to the world in this long feature on Zuckerberg and Facebook’s effort to wire the world. Specifically, in reference to Facebook and Google buying drone companies as a possible approach to getting internet to rural areas:
Google also has a drone program—in April it bought one of Ascenta’s competitors, Titan Aerospace—but what’s notable about its approach so far is that it has been almost purely technological and unilateral: we want people to have the Internet, so we’re going to beam it at them from a balloon. Whereas Facebook’s solution is a blended one. It has technological pieces but also a business piece (making money for the cell-phone companies) and a sociocultural one (luring people online with carefully curated content). The app is just one part of a human ecosystem where everybody is incentivized to keep it going and spread it around. “Certainly, one big difference is that we tend to look at the culture around things,” Zuckerberg says. “That’s just a core part of building any social project.” The subtext being, all projects are social.
This is a pretty interesting point of difference between how the two companies view the world. Google sees every problem as a pure technical issue whereas Facebook sees it as part cultural and part technical. I’m not totally sure I buy it (it seems unfair to call Android a purely technical solution), but it’s an interesting lens to look through when examining two of the world’s most important tech companies.
I never finished Taleb’s Black Swan, but I vividly remember a point from the opening chapter about what would have happened if a senator had pushed through legislation prior to September 11th to force all airlines to install locked cockpit doors. That person would never have received attention or recognition for preventing an attack, since we never would have known:
The person who imposed locks on cockpit doors gets no statues in public squares, not so much as a quick mention of his contribution in his obituary. “Joe Smith, who helped avoid the disaster of 9/11, died of complications of liver disease.” Seeing how superfluous his measure was, and how it squandered resources, the public, with great help from airline pilots, might well boot him out of office . . .
I was reminded of this as I was reading Tim Harford’s Adapt and this point about how we interpreted the US domination of the first Gulf War:
Another example of history’s uncertain guidance came from the first Gulf War in 1990–1. Desert Storm was an overwhelming defeat for Saddam Hussein’s army: one day it was one of the largest armies in the world; four days later, it wasn’t even the largest army in Iraq. Most American military strategists saw this as a vindication of their main strategic pillars: a technology-driven war with lots of air support and above all, overwhelming force. In reality it was a sign that change was coming: the victory was so crushing that no foe would ever use open battlefield tactics against the US Army again. Was this really so obvious in advance?
I’m incredibly proud of this blog post by James OB, one of the engineers at Percolate, about why he likes the engineering culture at the company. The whole thing is well worth a read, but I especially liked this bit:
The autonomy to solve a problem with the best technology available is a luxury for programmers. Most organizations I’ve been exposed to are encumbered, in varying degrees, by institutional favorites or “safe” bets without regard for the problem to be solved. Engineering at Percolate has so far been free of that trap, which results in a constantly engaging, productive mode of work.
I’m a sucker for all quotes about how one thing or another was going to ruin society. Most of these are about media, but I couldn’t help myself when I saw this one about curiosity from an article on The American Scholar:
Specific methods aside, critics argued that unregulated curiosity led to an insatiable desire for novelty—not to true knowledge, which required years of immersion in a subject. Today, in an ever-more-distracted world, that argument resonates. In fact, even though many early critics of natural philosophy come off as shrill and small-minded, it’s a testament to Ball that you occasionally find yourself nodding in agreement with people who ended up on the “wrong” side of history.
I actually think it would be pretty great to collect all these in a big book … A paper book, of course.
I really love this quote which came from an article Umberto Eco wrote about Wikileaks by way of this very excellent recap of a talk by the head of technology at the Smithsonian Cooper-Hewitt National Design Museum:
I once had occasion to observe that technology now advances crabwise, i.e. backwards. A century after the wireless telegraph revolutionised communications, the Internet has re-established a telegraph that runs on (telephone) wires. (Analog) video cassettes enabled film buffs to peruse a movie frame by frame, by fast-forwarding and rewinding to lay bare all the secrets of the editing process, but (digital) CDs now only allow us quantum leaps from one chapter to another. High-speed trains take us from Rome to Milan in three hours, but flying there, if you include transfers to and from the airports, takes three and a half hours. So it wouldn’t be extraordinary if politics and communications technologies were to revert to the horse-drawn carriage.
In response to my little post about describing the past and present, Jim, who reads the blog, emailed me to say it could be referred to as an “atemporal present,” which I thought was a good turn of phrase. I googled it and ran across this fascinating Guardian piece explaining their decision to get rid of references to today and yesterday in their articles. Here’s a pretty large snippet:
It used to be quite simple. If you worked for an evening newspaper, you put “today” near the beginning of every story in an attempt to give the impression of being up-to-the-minute – even though many of the stories had been written the day before (as those lovely people who own local newspapers strove to increase their profits by cutting editions and moving deadlines ever earlier in the day). If you worked for a morning newspaper, you put “last night” at the beginning: the assumption was that reading your paper was the first thing that everyone did, the moment they awoke, and you wanted them to think that you had been slaving all night on their behalf to bring them the absolute latest news. A report that might have been written at, say, 3pm the previous day would still start something like this: “The government last night announced …”
All this has changed. As I wrote last year, we now have many millions of readers around the world, for whom the use of yesterday, today and tomorrow must be at best confusing and at times downright misleading. I don’t know how many readers the Guardian has in Hawaii – though I am willing to make a goodwill visit if the managing editor is seeking volunteers – but if I write a story saying something happened “last night”, it will not necessarily be clear which “night” I am referring to. Even in the UK, online readers may visit the website at any time, using a variety of devices, as the old, predictable pattern of newspaper readership has changed for ever. A guardian.co.uk story may be read within seconds of publication, or months later – long after the newspaper has been composted.
So our new policy, adopted last week (wherever you are in the world), is to omit time references such as last night, yesterday, today, tonight and tomorrow from guardian.co.uk stories. If a day is relevant (for example, to say when a meeting is going to happen or happened) we will state the actual day – as in “the government will announce its proposals in a white paper on Wednesday [rather than ‘tomorrow’]” or “the government’s proposals, announced on Wednesday [rather than ‘yesterday’], have been greeted with a storm of protest”.
What’s extra interesting about this to me is that it’s not just about the time you’re reading that story, but also the space the web inhabits. We’ve been talking a lot at Percolate lately about how social is shifting the way we think about audiences since for the first time there are constant global media opportunities (it used to happen once every four years with the Olympics or World Cup). But, as this articulates so well, being global also has a major impact on time since you move away from knowing where your audience is in their day when they’re consuming your content.
I don’t know that I have a lot more to add than what Russell wrote here, but I like the way he described the challenge of describing something that is simultaneously happening in the past and the present (in this case, a soccer replay):
This is normally dismissed as typical footballer ignorance but it’s better understood when you think of a footballer standing in front of a monitor talking you through the goal they’ve just scored. They’re describing something in the past, which also seems to be happening now, which they’ve never seen before. The past and the present are all mushed up – it’s bound to create an odd tense.
I really like this little post on “Borges and the Sharknado Problem.” The gist:
We can apply the Borgesian insight [why write a book when a short story is equally good for getting your point across] to the problem of Sharknado. Why make a two-hour movie called Sharknado when all you need is the idea of a movie called Sharknado? And perhaps, a two-minute trailer? And given that such a movie is not needed to convey the full brilliance of Sharknado – and it is, indeed, brilliant – why spend two hours watching it when it is, wastefully, made?
On Twitter my friend Ryan Catbird responded by pointing out that that’s what makes the Modern Seinfeld Twitter account so magical: they give you the plot in 140 characters and you can easily imagine the episode (and that’s really all you need).
Clive Thompson, writing about finding the cruise ship that crashed in Italy last year on Google Maps (Maps link here), made a really interesting point about how we interpret strange visuals in the age of digital technology and video games:
I remember, back when the catastrophe first occurred, being struck by how uncanny — how almost CGI-like — the pictures of the ship appeared. It looks so wrong, lying there sideways in the shallow waters, that I had a sort of odd, disassociative moment that occurs to me with uncomfortable regularity these days: The picture looks like something I’ve seen in some dystopic video game, a sort of bleak matte-painting backdrop of the world gone wrong. (In the typically bifurcated moral nature of media, you could regard this either as a condemnation of video games — i.e. they’ve trained me to view real-life tragedy as unreal — or an example of their imaginative force: They’re a place you regularly encounter depictions of the terrible.) At any rate, I think what triggers this is the sheer immensity of the ship; it’s totally out of scale, as in that photo above, taken by Luca Di Ciaccio.
Growing up playing video games, I definitely know the feeling. I do wonder, though, whether this is actually a new feeling, or whether we could have said the same about things feeling like a movie back when film was still transforming how we saw the world. When I was in Hong Kong in December, for example, the city felt more like a reflection of Blade Runner than anything else. Either way, though, it’s an interesting notion.
This is three years old, but I just ran across it and it’s just as relevant today as it was then. Apparently in response to Nicholas Carr’s book The Shallows, Steven Pinker wrote a great op-ed about how technology isn’t really ruining all the stuff it’s constantly claimed to be ruining. A snippet:
The effects of consuming electronic media are also likely to be far more limited than the panic implies. Media critics write as if the brain takes on the qualities of whatever it consumes, the informational equivalent of “you are what you eat.” As with primitive peoples who believe that eating fierce animals will make them fierce, they assume that watching quick cuts in rock videos turns your mental life into quick cuts or that reading bullet points and Twitter postings turns your thoughts into bullet points and Twitter postings.
In case you weren’t on Twitter a few nights ago, there was a show on Syfy called Sharknado. It was, as you might expect, about sharks getting caught in a tornado. If you judged its popularity by Twitter alone, you would have thought the whole world was watching. That, however, turns out not to be the case:
But Sharknado may have broken the mold; the movie blew up on Twitter last night, giving the impression that everyone with a TV was watching it. “Omg omg OMG #sharknado,” Mia Farrow tweeted last night, while Washington Post political reporter Chris Cillizza joked that he was writing an article about how Sharknado would affect the 2016 elections. But were all these people actually watching? According to the Los Angeles Times, Sharknado was watched by only 1 million people, which makes it a bust, even by Syfy standards. Most Syfy originals have an average viewership of 1.5 million people, with some getting twice that.
[Via Washington Post]
Really interesting post on the changing nature of photography from Kottke. He pulls together a few different thoughts on photography and basically lands at the idea that we’re moving to a future of after-the-fact photography:
In order to get the jaw-dropping slow-motion footage of great white sharks jumping out of the ocean, the filmmakers for Planet Earth used a high-speed camera with continuous buffering…that is, the camera only kept a few seconds of video at a time and dumped the rest. When the shark jumped, the cameraman would push a button to save the buffer.
Makes me wonder where this new photography will land on the memory-versus-experience spectrum (an idea from Daniel Kahneman that we basically optimize our experiences for memory rather than experience, which is why we take photos instead of actually paying attention to what’s going on around us). I wonder if this doesn’t flip that notion.
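The continuous-buffering trick the Planet Earth crew describes is essentially a ring buffer: keep only the last few seconds of frames, and when something happens, snapshot what you have. A minimal sketch in Python (the class name and the frame-rate numbers are my own invention, not details from the actual camera rig):

```python
from collections import deque

class ContinuousBuffer:
    """Keep only the most recent seconds * fps frames; save them on demand."""

    def __init__(self, seconds=3, fps=1000):
        # deque with maxlen silently discards the oldest frame on overflow
        self.frames = deque(maxlen=seconds * fps)

    def record(self, frame):
        self.frames.append(frame)

    def trigger(self):
        # the shark jumped: snapshot the buffer (recording can continue)
        return list(self.frames)

# toy usage: feed in 5000 numbered "frames", keep only the last 3000
buf = ContinuousBuffer(seconds=3, fps=1000)
for i in range(5000):
    buf.record(i)
saved = buf.trigger()
assert saved[0] == 2000 and saved[-1] == 4999 and len(saved) == 3000
```

The nice property is that memory stays constant no matter how long the camera runs; the cost of "recording everything, just in case" is only ever a few seconds' worth of frames.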
Super Mario Brothers is Getting Harder
It may come as a shock to some of you that most gamers today can not finish the original Super Mario Brothers game on the Famicom. We have conducted this test over the past few years to see how difficult we should make our games and have found that the number of people unable to finish the first level is steadily increasing. This year, around 90 percent of the test participants were unable to complete the first level of Super Mario Brothers. We did not assist them in any way except by providing the exact same instruction manual we used back then. Many of them did not read it and the few that did stopped after the first page which did not cover any of the game mechanics.
UPDATE (7/7/13): As Rafi points out in the comments, it looks like this is satirical. One of the other stories on the site is CHILDREN WHO PICK A SIDE IN CONSOLE WARS ARE 90 PERCENT MORE LIKELY TO JOIN A GANG. Sorry about that.
Yesterday morning I lay in bed and watched Twitter fly by. It was somewhere around 7am and lots of crazy things had happened overnight in Boston between the police and the marathon bombers. I don’t remember exactly where things were in the series of events when I woke up, but while I was watching, the still-on-the-loose suspect’s name was released for the first time. As reports started to come in and then, later, get confirmed, people on Twitter did the same thing as me: They started Googling.
As I watched the tiny facts we all uncovered start to turn up in the stream (he was a wrestler, he won a scholarship from the city of Cambridge, he had a link to a YouTube video) I was brought back to an idea I first came across in Bill Wasik’s excellent And Then There’s This. In the book he posits that as a culture we’ve become more obsessed with how a thing spreads than the thing itself. He uses the success of Malcolm Gladwell’s Tipping Point to help make the point:
Underlying the success of The Tipping Point and its literary progeny [Freakonomics] is, I would argue, the advent of a new and enthusiastically social-scientific way of engaging with culture. Call it the age of the model: our meta-analyses of culture (tipping points, long tails, crossing chasms, ideaviruses) have come to seem more relevant and vital than the content of culture itself.
Everyone wanted to be involved in “the hunt,” whether it was on Twitter and Google for information about the suspected bomber, on the TV where reporters were literally chasing these guys around, or the police who were battling these two young men on a suburban street. Watching the new tweets pop up I got a sense that the content didn’t matter as much as the feeling of being involved, the thrill of the hunt if you will. As Wasik notes, we’ve entered an age where how things spread through culture is more interesting than the content itself.
To be clear, I’m not saying this is a good or a bad thing (I do my best to stay away from that sort of stuff), but it’s definitely a real thing and an integral part of how we all experience culture today. When I opened the newspaper this morning it was as much to see how much I knew and how closely I’d followed as it was to learn something new about the chase. After reading the cover story that recounted the previous day’s events I turned to Brian Stelter’s appropriately titled News Media and Social Media Become Part of a Real-Time Manhunt Drama.
I’ve been listening to a lot of podcasts lately, and one of them is the New Yorker’s Out Loud. The latest episode featured a great interview with Daniel Mendelsohn, a literary critic. In the podcast he mostly talks about the books that inspired him to become a writer, but then, towards the end, he talks a bit about the job of a cultural critic, and I thought what he had to say was interesting enough to transcribe and share:
We now have these technologies that simulate reality or create different realities in very sophisticated and interesting ways. Having these technologies available to us allows us to walk, say, through midtown Manhattan but actually to be inhabiting our private reality as we do so: We’re on the phone or we’re looking at our smartphone, gazing lovingly into our iPhones. And this is the way the world is going, there’s no point complaining about it. But where my classics come in is I am amused by the fact our word idiot comes from the Greek word idiotes, which means a private person. It’s from the word idios, which means private as opposed to public. So the Athenians, or the Greeks in general, who had such a highly developed sense of the radical distinction between what went on in public and what went on in private, thought that a person who brought his private life into public spaces, who confused public and private, was an idiotes, was an idiot. Of course, now everybody does this. We are in a culture of idiots in the Greek sense. To go back to your original question, what does this look like in the long run? Is it terrible or is it bad? It’s just the way things are. And one of the advantages about being a person who looks at long stretches of the past is you try not to get hysterical, to just see these evolving new ways of being from an imaginary vantage point in the future. Is it the end of the world? No, it’s just the end of a world. It’s the end of the world I grew up in when I was thinking of how you behaved in public. I think your job as a cultural critic is to take a long view.
I obviously thought the idiot stuff was fascinating, but I was also interested in his last line about the job of a cultural critic, which, to me, really reflected something that struck me about McLuhan in the most recent biography of him by Douglas Coupland:
Marshall was also encountering a response that would tail him the rest of his life: the incorrect belief that he liked the new world he was describing. In fact, he didn’t ascribe any moral or value dimensions to it at all–he simply kept on pointing out the effects of new media on the individual. And what makes him fresh and relevant now is the fact that (unlike so much other new thinking of the time) he always did focus on the individual in society, rather than on the mass of society as an entity unto itself.
Walking around Tokyo today I passed a Bathing Ape store and got onto the topic of how the brand came to be. After a little Googling I ran across this excellent article that documents the fall of the brand and eventually arrives at this interesting theory of “cultural arbitrage”:
The hipster elite are starting to show annoyance at this development. Former Mo’ Wax guru James Lavelle, quoted in Tokion, lamented that it is now impossible to stay “underground.” Lavelle and his kindred folk profit from exploiting cultural arbitrage: taking information from inaccessible sources and cashing in on that unequal access to information. (In general, a lot of people whom you probably think are cooler than you make a bulk of their money from this inequality in information.) No one in the West knew that Bape is a mainstream brand in Japan, and therefore, Lavelle was able to subtly and indirectly create the brand image to his own liking…* Now, with the high speed “information superhighway,” profit from cultural arbitrage business looks doubtful in the long run.
It’s not revolutionary, but it’s a nice way to think about how culture moves.
* I had to cut out a few sentences because they talk about how financial arbitrage used to work but no longer does, which just isn’t true.
This is pretty crazy:
Homeowners Luo Baogen and his wife refused to allow the government to demolish their home in Wenling, Zhejiang province, China, claiming the relocation compensation offered would not be enough to cover the cost of rebuilding. So, adjacent neighboring homes were dismantled, and, bizarrely, the road was built around the intact home, leaving it as an island in a river of new asphalt.
Crazy. Be sure to check out the pictures.
Related (sort of): If you’re interested in China, driving and highways you should check out the book Country Driving. It’s by a New Yorker writer who has lived in China for some time and chronicles the ever-expanding driving culture. Here’s a little snippet:
Many traffic patterns come directly from pedestrian life—people drive the way they walk. They like to move in packs, and they tailgate whenever possible. They rarely use turn signals. Instead they rely on automobile body language: if a car edges to the left, you can guess that he’s about to make a turn. And they are brilliant at improvising. They convert sidewalks into passing lanes, and they’ll approach a roundabout in reverse direction if it seems faster. If they miss an exit on a highway, they simply pull onto the shoulder, shift into reverse, and get it right the second time. They curb-sneak in traffic jams, the same way Chinese people do in ticket lines. Tollbooths can be hazardous, because a history of long queues has conditioned people into quickly evaluating options and making snap decisions. When approaching a toll, drivers like to switch lanes at the last possible instant; it’s common to see an accident right in front of a booth. Drivers rarely check their rearview mirrors. Windshield wipers are considered a distraction, and so are headlights.
The story of Fabrice Muamba from yesterday is hard to imagine. A professional football (the English kind) player had a heart attack during the game. The facts themselves are pretty crazy, but this article does a great job giving the broader context to what happened around the story:
Many said yesterday evening that football becomes irrelevant in such circumstances. This is partially true, but doesn’t tell the complete story of last night. When something such as this happens, the match that is taking place ceases to be of much importance, of course. The game, however, to the extent that “football” exists as an entity in and of itself, certainly doesn’t become irrelevant, and this much was demonstrated by the messages of support and concern that we saw last night. Football frequently seems to exist in a bubble, isolated and insulated from the outside world. When the full horror that real world can occasionally offer came calling last night, though, its humanity shone through. Considering what happened at White Hart Lane last night, it’s a tiny consolation. But a tiny consolation is better than no consolation at all.
Mimi Chun, the design director at General Assembly (the shared working space, not the people behind #occupy), is baking a series of cookies painted to match famous works of modern art. While the cookies look fantastic, I liked her point about makers versus viewers:
For makers, the value lies in the act of creation; for viewers: the outcome. Like others who have chosen similar vocations, I make things because I’m in love with making, because I can’t imagine a life without it, and because I secretly enjoy all of the angst, self-flagellation, and learning that comes with the territory. Given the option of: Would I prefer to A) spend every waking minute making terrible work that never saw the light of day or B) wake up every morning to discover that I had made amazing work in my sleep, I would choose A every time, and I’m willing to venture that I’m not alone here.
On the surface, Facebook adding these little business cards is not a big deal (other than the scale of any initiative the company takes on). But I do think there’s something more interesting here: This is another step in Facebook owning your identity in the physical world. They’ve already claimed you in the digital world and pretty much locked things up, but the physical world is still a hodgepodge of identities split between governments, banks and employers. There’s never really been a global holder of identity data before (to my knowledge) and I’m not sure I yet understand what the implications are, but I assume it’s something Facebook is thinking a lot about.
Facebook apparently gets so many requests to take down photos because they’re unflattering that they’ve added an additional option for “I don’t like this photo of me.” It doesn’t actually get a photo taken down; rather, it’s “designed to trigger compassion from the photo posters.” I’m not sure why I find this so interesting, but something about the basic humanity of being embarrassed by a photo and having to find a way to deal with that through software is very interesting. In some ways I’m surprised we don’t hear about lots more stuff like this from Facebook; after all, with almost a billion people on the platform they surely run into “human problems” on a regular basis.
At dinner this evening Leila and I got into a conversation about Italian words losing the last vowel (mozzarell instead of mozzarella). If you’re not from the New York area this will sound crazy, but it’s pretty common here (I remember hearing it growing up in Connecticut as well).
When I got home I tried to track down an article I remember reading years ago about this phenomenon, and while I can’t remember whether this was it, a New York Times article from 2004 offers up some ideas on how this happened:
In fact, in some parts of Italy, the dropping of final vowels is common. Restaurantgoers and food shoppers in the United States ended up imitating southern and northern dialects, where speakers often do not speak their endings, Professor Albertini said.
Liliana Dussi, a retired New York district director for the Berlitz language schools, said many first- and second-generation Italians whose ancestors immigrated to the United States before World War I were informally taught Italian expressions and the names of food, some of which has ended up part of everyday language in New York, New Jersey and Connecticut.
If you want more, this Chowhound thread is pretty excellent.
I love this quote about the difference between Zappos and Amazon culture from the Wired interview between Steven Levy and Jeff Bezos:
Levy: Two years ago, you bought Zappos. Was that an attempt to absorb their so-called culture of happiness and customer service?
Bezos: No, no, no. We like their unique culture, but we don’t want that culture at Amazon. We like our culture, too. Our version of a perfect customer experience is one in which our customer doesn’t want to talk to us. Every time a customer contacts us, we see it as a defect. I’ve been saying for many, many years, people should talk to their friends, not their merchants. And so we use all of our customer service information to find the root cause of any customer contact. What went wrong? Why did that person have to call? Why aren’t they spending that time talking to their family instead of talking to us? How do we fix it? Zappos takes a completely different approach. You call them and ask them for a pizza, and they’ll get out the Yellow Pages for you.
[Via James Gross]