Welcome to the bloggy home of Noah Brier. I'm the co-founder of Percolate and a general internet tinkerer. This site is about media, culture, technology, and randomness. It's been around since 2004 (I'm pretty sure). Feel free to get in touch.

You can subscribe to this site via RSS (the humanity!).

On Context, Imagination, and STEM vs. ART

Go read this whole interview with John Seely Brown. It’s awesome. Here are a few of my favorite bits.

On context versus content:

Remember that image of the statue of Saddam Hussein being pulled down? Well, the photo was actually cropped. Those were Americans pulling the statue down, not Iraqis. But the cropped photo reinforced this notion that the Iraqis loved us. It reshaped context. Millennials are much better at understanding that context shapes content. They play with this all the time when they remix something. It’s actually an ideal property for a 21st century citizen to have.

That’s as good an explanation of what McLuhan meant by “the medium is the message” as I’ve read.

On creativity versus imagination:

The real key is being able to imagine a new world. Once I imagine something new, then answering how to get from here to there involves steps of creativity. So I can be creative in solving today’s problems, but if I can’t imagine something new, then I’m stuck in the current situation.

I really like that distinction. Imagination is what drives vision, creativity is what drives execution. Both have huge amounts of value, but they’re different things.

On the dangers of a STEM-only world:

Right. That’s what we should be talking about. That’s one of the reasons I think what’s happening in STEM education is a tragedy. Art enables us to see the world in different ways. I’m riveted by how Picasso saw the world. How does being able to imagine and see things differently work hand-in-hand? Art education, and probably music too, are more important than most things we teach. Being great at math is not that critical for science, but being great at imagination and curiosity is critical. Yet how are we training tomorrow’s scientists? By boring the hell out of them in formulaic mathematics—and don’t forget I am trained as a theoretical mathematician.

Not to talk about McLuhan too much, but he also deeply believed in the value of art and artists as the visionaries for society. I think there’s obviously a lot of room here, and the reality of the focus on STEM is that we have so far to go that it’s not as if we’re going to wake up in a world where people only learn math and science. But I think the point is that the really interesting ideas that come along are the ones that combine, not shockingly, the arts and sciences.

Again, just go read the whole interview. It’s great.

January 14, 2014

Reporting Technology

Consider this part of an early New Year’s resolution to blog more (I really am going to make a run at it in 2014). Anyway, over the holiday break I, along with many others I’m sure, was having a conversation about Healthcare.gov. I mostly mentioned all the stuff I wrote a few months ago (basically that the things that ruined the project seem to be all the regular stuff — scope creep, too many players — that ruins projects), but I also talked a bit about my disappointment with the media’s reporting of the story. Specifically, the inability to do any serious technical reporting.

The New York Times had the deepest reporting I read, and even that didn’t come close to actually explaining what went wrong. The story included laughable (to technologists) lines like this: “By mid-November, more than six weeks after the rollout, the MarkLogic database — essentially the website’s virtual filing cabinet and index — continued to perform below expectations, according to one person who works in the command center.” While I understand not everyone is familiar with a database, calling it a “virtual filing cabinet and index” only says to me that the author has absolutely no idea what a database is.
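(For what it’s worth, here is a toy sketch of why “virtual filing cabinet and index” undersells the job. It uses Python’s built-in sqlite3 module rather than MarkLogic, the document database named in the Times story, so treat it as an illustration of what databases do in general, not of that system: the work is answering structured questions quickly and correctly, and under load that is exactly where things fall over.)

    import sqlite3

    # Toy illustration only: Healthcare.gov used MarkLogic, a document/XML
    # database, not SQLite. The point is just that a database answers
    # structured questions; it doesn't merely file and index documents.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE applicants (id INTEGER PRIMARY KEY, state TEXT, income INTEGER)"
    )
    conn.executemany(
        "INSERT INTO applicants (state, income) VALUES (?, ?)",
        [("NY", 32000), ("NY", 61000), ("OH", 45000)],
    )
    conn.execute("CREATE INDEX idx_state ON applicants (state)")

    # The kind of question a filing cabinet can't answer on its own, and the
    # kind that gets slow when millions of people ask it at the same time:
    query = """
        SELECT state, COUNT(*)
        FROM applicants
        WHERE income < 50000
        GROUP BY state
        ORDER BY state
    """
    for state, count in conn.execute(query):
        print(state, count)  # NY 1, then OH 1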

The point isn’t to pick on the Times, though. Rather it’s just to point out that as technical stories continue to pile up (NSA and Healthcare.gov were amongst the biggest media focus areas of the last three months), we’re going to have to get better at technical reporting. That I still haven’t read a decent explanation of what went wrong technically seems, to me at least, a major disservice and a dangerous signal for society’s ability to keep up with technical change.

December 28, 2013

Global Time

In response to my little post about describing the past and present, Jim, who reads the blog, emailed me to say it could be referred to as an “atemporal present,” which I thought was a good turn of phrase. I googled it and ran across this fascinating Guardian piece explaining their decision to get rid of references to today and yesterday in their articles. Here’s a pretty large snippet:

It used to be quite simple. If you worked for an evening newspaper, you put “today” near the beginning of every story in an attempt to give the impression of being up-to-the-minute – even though many of the stories had been written the day before (as those lovely people who own local newspapers strove to increase their profits by cutting editions and moving deadlines ever earlier in the day). If you worked for a morning newspaper, you put “last night” at the beginning: the assumption was that reading your paper was the first thing that everyone did, the moment they awoke, and you wanted them to think that you had been slaving all night on their behalf to bring them the absolute latest news. A report that might have been written at, say, 3pm the previous day would still start something like this: “The government last night announced …”

All this has changed. As I wrote last year, we now have many millions of readers around the world, for whom the use of yesterday, today and tomorrow must be at best confusing and at times downright misleading. I don’t know how many readers the Guardian has in Hawaii – though I am willing to make a goodwill visit if the managing editor is seeking volunteers – but if I write a story saying something happened “last night”, it will not necessarily be clear which “night” I am referring to. Even in the UK, online readers may visit the website at any time, using a variety of devices, as the old, predictable pattern of newspaper readership has changed for ever. A guardian.co.uk story may be read within seconds of publication, or months later – long after the newspaper has been composted.

So our new policy, adopted last week (wherever you are in the world), is to omit time references such as last night, yesterday, today, tonight and tomorrow from guardian.co.uk stories. If a day is relevant (for example, to say when a meeting is going to happen or happened) we will state the actual day – as in “the government will announce its proposals in a white paper on Wednesday [rather than 'tomorrow']” or “the government’s proposals, announced on Wednesday [rather than 'yesterday'], have been greeted with a storm of protest”.

What’s extra interesting about this to me is that it’s not just about the time you’re reading that story, but also the space the web inhabits. We’ve been talking a lot at Percolate lately about how social is shifting the way we think about audiences, since for the first time there are constant global media opportunities (it used to happen once every four years with the Olympics or World Cup). But, as this articulates so well, being global also has a major impact on time, since you can no longer assume you know where your audience is in their day when they’re consuming your content.
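(To make the Guardian’s point concrete, here is a quick sketch of my own, not anything they published, showing how the same publication moment lands on different calendar days depending on where the reader is. The timestamp is invented for illustration; it uses Python’s standard zoneinfo module.)

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # standard library in Python 3.9+

    # One publication moment, stored in UTC (an invented example timestamp).
    published = datetime(2013, 8, 6, 1, 30, tzinfo=timezone.utc)

    # The same instant is early Tuesday in London, still Monday afternoon in
    # Hawaii, and late Tuesday morning in Sydney, which is why "last night"
    # means nothing to a global readership while an explicit day does.
    for tz in ("Europe/London", "Pacific/Honolulu", "Australia/Sydney"):
        local = published.astimezone(ZoneInfo(tz))
        print(tz, local.strftime("%A %d %B, %H:%M"))

    # Europe/London Tuesday 06 August, 02:30
    # Pacific/Honolulu Monday 05 August, 15:30
    # Australia/Sydney Tuesday 06 August, 11:30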

August 5, 2013

Borges and Sharknado

I really like this little post on “Borges and the Sharknado Problem.” The gist:

We can apply the Borgesian insight [why write a book when a short story is equally good for getting your point across] to the problem of Sharknado. Why make a two-hour movie called Sharknado when all you need is the idea of a movie called Sharknado? And perhaps, a two-minute trailer? And given that such a movie is not needed to convey the full brilliance of Sharknado – and it is, indeed, brilliant – why spend two hours watching it when it is, wastefully, made?

On Twitter my friend Ryan Catbird responded by pointing out that that’s what makes the Modern Seinfeld Twitter account so magical: They give you the plot in 140 characters and you can easily imagine the episode (and that’s really all you need).

July 30, 2013

On Sponsored and Scalable Brand Content

This morning I woke up to this Tweet from my friend Nick:

It’s great to have friends who discover interesting stuff and send it my way, so I quickly clicked over to read Jeff’s piece on sponsored content and media as a service. I’m going to leave the latter unturned, as I find myself spending much less time thinking about the broader state of the media since starting Percolate two-and-a-half years ago. But the former, sponsored content, is clearly a place I play, and I was curious to see what Jarvis thought.

Quickly I realized he thought something very different than I do (which, of course, is why I’m writing a blog post). Mostly I started getting agitated right around here: “Confusing the audience is clearly the goal of native-sponsored-brand-content-voice-advertising. And the result has to be a dilution of the value of news brands.” While that may be true in the advertorial/sponsored content/native advertising space, it misses the vast majority of content being produced by brands on a day-to-day basis. That content is being created for social platforms like Facebook, Twitter, and Instagram by brands who have acquired massive audiences, frequently much larger than those of the media companies Jarvis is referring to. Again, I think this exists outside native advertising, but if Jarvis is going to conflate content marketing and native advertising, then it seems important to point that out. To give this a sense of scale, the average brand had 178 corporate social media accounts as of January 2012. Social is where they’re producing content. Period.

The second issue came in a paragraph about the scalability of content for brands:

Now here’s the funny part: Brands are chasing the wrong goal. Marketers shouldn’t want to make content. Don’t they know that content is a lousy business? As adman Rishad Tobaccowala said to me in an email, content is not scalable for advertisers, either. He says the future of marketing isn’t advertising but utilities and services. I say the same for news: It is a service.

Two things here: First, I agree that the current ways brands create content aren’t scalable. That’s because they’re using methods designed for creating television commercials to create 140-character Tweets. However, to conclude that content is a lousy business is missing the point a bit. Content is a lousy business when you’re selling ads around that content. The reason for this is reasonably simple: You’re not in the business of creating content, you’re in the business of getting people back to your website (or to buy your magazine or newspaper). Letting your content float around the web is great, but at the end of the day no eyeballs means no ad dollars. But brands don’t sell ads, they sell soap, or cars, or soda. Their business is somewhere completely different and, at the end of the day, they don’t care where you see their content as long as you see it. What this allows them to do is outsource their entire backend and audience acquisition to the big social platforms and just focus on the day-to-day content creation.

Finally, while it’s nice to think that more brands will deliver utilities and services on top of the utilities and services they already sell, delivering those services will require the very audience they’re building on Facebook, Twitter, and the like to begin with.

July 29, 2013

Technology Still Isn’t Ruining Anything

This is three years old, but I just ran across it and it’s just as relevant today as it was then. Apparently in response to Nicholas Carr’s book The Shallows, Steven Pinker wrote a great op-ed about how technology isn’t really ruining all the stuff it’s constantly claimed to be ruining. A snippet:

The effects of consuming electronic media are also likely to be far more limited than the panic implies. Media critics write as if the brain takes on the qualities of whatever it consumes, the informational equivalent of “you are what you eat.” As with primitive peoples who believe that eating fierce animals will make them fierce, they assume that watching quick cuts in rock videos turns your mental life into quick cuts or that reading bullet points and Twitter postings turns your thoughts into bullet points and Twitter postings.

I try to post stuff like this whenever I see it because these sorts of arguments (the one from Carr that media is ruining our brains) drive me totally insane. Pinker, it’s clear, is someone Adam Gopnik would call an Ever-Waser: “The Ever-Wasers insist that at any moment in modernity something like this is going on, and that a new way of organizing data and connecting users is always thrilling to some and chilling to others–that something like this is going on is exactly what makes it a modern moment.”

Also, while we’re on the topic, XKCD had a pretty awesome comic taking down those stricken with nostalgia for a techless world.

July 15, 2013

Being Part of the Story

Yesterday morning I lay in bed and watched Twitter fly by. It was somewhere around 7am and lots of crazy things had happened overnight in Boston between the police and the marathon bombers. I don’t remember exactly where things were in the series of events when I woke up, but while I was watching, the still-on-the-loose suspect’s name was released for the first time. As reports started to come in and then, later, get confirmed, people on Twitter did the same thing as me: They started Googling.

As I watched the tiny facts we all uncovered start to turn up in the stream (he was a wrestler, he won a scholarship from the city of Cambridge, he had a link to a YouTube video) I was brought back to an idea I first came across in Bill Wasik’s excellent And Then There’s This. In the book he posits that as a culture we’ve become more obsessed with how a thing spreads than with the thing itself. He uses the success of Malcolm Gladwell’s Tipping Point to help make the point:

Underlying the success of The Tipping Point and its literary progeny [Freakonomics] is, I would argue, the advent of a new and enthusiastically social-scientific way of engaging with culture. Call it the age of the model: our meta-analyses of culture (tipping points, long tails, crossing chasms, ideaviruses) have come to seem more relevant and vital than the content of culture itself.

Everyone wanted to be involved in “the hunt,” whether it was on Twitter and Google for information about the suspected bomber, on the TV where reporters were literally chasing these guys around, or the police who were battling these two young men on a suburban street. Watching the new tweets pop up I got a sense that the content didn’t matter as much as the feeling of being involved, the thrill of the hunt if you will. As Wasik notes, we’ve entered an age where how things spread through culture is more interesting than the content itself.

To be clear, I’m not saying this is a good or a bad thing (I do my best to stay away from that sort of stuff), but it’s definitely a real thing and an integral part of how we all experience culture today. When I opened the newspaper this morning it was as much to see how much I knew and how closely I’d followed as it was to learn something new about the chase. After reading the cover story that recounted the previous day’s events I turned to Brian Stelter’s appropriately titled “News Media and Social Media Become Part of a Real-Time Manhunt Drama.”

April 20, 2013

Explorers vs. Explainers

I’ve written in the past about how a big part of what separated McLuhan from the rest of the pack was his ability to separate his morals from his observations. Well, I particularly liked this explanation of McLuhan’s approach from the introduction to the newest edition of The Gutenberg Galaxy: “We have to remember that Marshall McLuhan portrayed himself as an explorer and not as an explainer of media environments.”

February 16, 2013

Seeping Media

This essay about McLuhan’s Gutenberg Galaxy includes a pretty good summation of his approach to media theorizing:

While book-lovers sometimes deride the blog/tweet/Facebook post/text message/YouTube video/surfing/gaming/Skyping world we’ve created, I don’t think proclaiming it right or wrong, or better or worse, is useful. I prefer McLuhan’s approach which is simply to ask: how far has new media seeped into popular consciousness?

January 30, 2013

Experimenting on Input & Output

There’s something magical about the first few moments of a new medium, as people experiment and try to figure out what it’s all about. It’s a period of uncertainty as a small group of people fumble with new technology and it’s fun to watch. Go back and read early Tweets or look at early Instagram photos and you get the equivalent of tapping the mic to see if it’s on.

I say this because I stumbled onto Vinepeek this morning, which shows a continuous stream of new Vines from Twitter. (For the uninitiated, Vine is a new product Twitter announced that lets people make 6-second looping videos.) Watching Vinepeek, I got to thinking that there was something really fascinating about combining a new technology people are still getting acquainted with and an API that lets people build experimental outputs on top of it. It’s like letting people play with the input and the output at the same time, and in the case of Vinepeek you get a very odd thing that feels like a little TV network that peeks into people’s lives.

I’m sure it won’t be interesting in a few days, but there’s a real magic to combining experimentation on creation and distribution at the exact same time.

January 27, 2013

The Science of Trolls

Mother Jones has a short piece about the “negative consequences of vituperative online comments for the public understanding of science” (aka comment trolling):

The researchers were trying to find out what effect exposure to such rudeness had on public perceptions of nanotech risks. They found that it wasn’t a good one. Rather, it polarized the audience: Those who already thought nanorisks were low tended to become more sure of themselves when exposed to name-calling, while those who thought nanorisks are high were more likely to move in their own favored direction. In other words, it appeared that pushing people’s emotional buttons, through derogatory comments, made them double down on their preexisting beliefs.

Because I can’t really let anything get away without making some sort of McLuhan reference, the conclusion pretty clearly lays out the fact that the medium is shaping the message we receive:

The upshot of this research? This is not your father’s media environment any longer. In the golden oldie days of media, newspaper articles were consumed in the context of…other newspaper articles. But now, adds Scheufele, it’s like “reading the news article in the middle of the town square, with people screaming in my ear what I should believe about it.”

January 12, 2013

Idiots & Taking the Long View

I’ve been listening to a lot of podcasts lately, and one of them is the New Yorker’s Out Loud. The last episode featured a great interview with Daniel Mendelsohn, a literary critic. In the podcast he mostly talks about the books that inspired him to become a writer, but then, towards the end, he talks a bit about the job of a cultural critic, and I thought what he had to say was interesting enough to transcribe and share:

We now have these technologies that simulate reality or create different realities in very sophisticated and interesting ways. Having these technologies available to us allows us to walk, say, through midtown Manhattan but actually to be inhabiting our private reality as we do so: We’re on the phone or we’re looking at our smartphone, gazing lovingly into our iPhones. And this is the way the world is going, there’s no point complaining about it. But where my classics come in is I am amused by the fact our word idiot comes from the Greek word idiotes, which means a private person. It’s from the word idios, which means private as opposed to public. So the Athenians, or the Greeks in general, who had such a highly developed sense of the radical distinction between what went on in public and what went on in private, thought that a person who brought his private life into public spaces, who confused public and private, was an idiotes, was an idiot. Of course, now everybody does this. We are in a culture of idiots in the Greek sense. To go back to your original question, what does this look like in the long run? Is it terrible or is it bad? It’s just the way things are. And one of the advantages of being a person who looks at long stretches of the past is you try not to get hysterical, to just see these evolving new ways of being from an imaginary vantage point in the future. Is it the end of the world? No, it’s just the end of a world. It’s the end of the world I grew up in when I was thinking of how you behaved in public. I think your job as a cultural critic is to take a long view.

I obviously thought the idiot stuff was fascinating, but I was also interested in his last line about the job of a cultural critic, which, to me, really reflected something that struck me about McLuhan in the most recent biography of him, by Douglas Coupland:

Marshall was also encountering a response that would tail him the rest of his life: the incorrect belief that he liked the new world he was describing. In fact, he didn’t ascribe any moral or value dimensions to it at all–he simply kept on pointing out the effects of new media on the individual. And what makes him fresh and relevant now is the fact that (unlike so much other new thinking of the time) he always did focus on the individual in society, rather than on the mass of society as an entity unto itself.

January 7, 2013

The Curation Debate

All week I’ve been meaning to weigh in on the curation debate (David Carr, Matt Langer, Marco Arment, Matthew Ingram), but I’ve been busy and Percolate released its own take on the subject in the form of a video with some of our favorite web curators.

Okay, let me start at the top: Semantics. Matt Langer rightly points out the word curation is not being used correctly:

First, let’s just get clear on the terminology here: “Curation” is an act performed by people with PhDs in art history; the business in which we’re all engaged when we’re tossing links around on the internet is simple “sharing.” And some of us are very good at that! (At least if we accept “very good” to mean “has a large audience.”)

Early last year I agreed. But then I realized how boring and unproductive most semantic arguments were. Or as Maria Popova said last June:

Like any appropriated buzzword, the term “curation” has become nearly vacant of meaning. But, until we come up with a better one, it remains the semantic placeholder that best captures the central paradigm of Twitter as a conduit of discovery and direction for what is meaningful, interesting and relevant in the world.

I loved the idea of a semantic placeholder then, and I still do. If you’re going to wade into the semantic debate you need a better answer, and “editor” isn’t it. For better or worse we are using curator to mean something different than it used to mean and, at least for now, that seems fine. As long as we all know what we’re talking about (the selection of internet things) the word seems okay; let’s not hide behind the definition.

And before I continue, one more thing: For what it’s worth I define curation as people choosing things and aggregation as computers choosing things.

Great. Now back to the more important stuff. A lot of this conversation was kicked off by the Curator’s Code, which aims to encourage people to share the source of their information with some special symbols. Lots of folks, including Marco from Instapaper, jumped on the idea as stupid and unsustainable, and maybe it is. I think everyone involved would agree it’s not the perfect solution to the problem, but I do think it opened up an important conversation (I wasn’t involved, but I know the folks who are). How we credit one another on the web is an issue we’ve been working on forever and, as a few of the blog posts on the topic point out, the good news is that the hyperlink is the most efficient tool we have:

And we already have a tool for providing credit to the original source: It’s called the hyperlink. Plenty of people don’t use the hyperlink as much as they should (including mainstream media sources such as the New York Times, although Executive Editor Jill Abramson said at SXSW that this is going to change) while others misuse and abuse them. But used properly, they serve the purpose of providing credit quite well. How to use them properly, of course — especially for journalistic purposes — is another whole can of worms, as Felix Salmon of Reuters and others have noted. And when it comes to curation and aggregation, it seems as though curation is what people call it when they like it, and aggregation is what they call it when they don’t.

But it’s not quite good enough, and this is where I start to take issue with a few different things a few different people said. What I just did there is use a hyperlink to credit something I didn’t write. Except you probably didn’t mouse over the hyperlink, and because it was in there I didn’t need to write that Matthew Ingram from GigaOm was responsible for those sentences. While I think it’s important to credit sources of information, I think the bigger thing to think about is how we’re crediting the original sources of content.

Which is why I took the most issue with Marco’s stance. Not because I disagreed with him (“The proper place for ethics and codes is in ensuring that a reasonable number of people go to the source instead of just reading your rehash.”), but because Instapaper represents one of the current dangers in lack of credit. While it doesn’t relate exactly to the question the Curator’s Code is addressing, it is part of the broader conversation we should be having: Who is getting credit when you consume a great piece of content?

After a long argument with Thierry Blancpain on Twitter I finally came to the question which seems to sit at the heart of the matter to me: Who gets credit when you read something awesome in Instapaper? Does it go to the publisher of the content or does it go to Instapaper? I know for myself (and the informal poll of friends I asked the question of), the answer is the latter. I don’t know the source of most of the content I consume in Instapaper. Sure, I put it there when I hit the button, but when I consume it the source is entirely stripped away. I was talking to the publisher of a major magazine this week about the issue and the question I asked is, “if you’re losing the advertising and the branding, is there any purpose to letting your content live there?”

This isn’t to point the finger solely at Instapaper; I think this is true of almost all the platforms on the web. If all the incentive is towards sharing and all the credit goes to sharers, what will happen to creation? (I don’t really think it will go away, but I do think it creates a dangerous precedent.) One of the things I think is great about the Longform iPad app is that it connects me with the publishers of content. One day when they offer subscriptions (which I assume they will) I’d happily pay to keep getting my 3,000-word Grantland stories, as I now know the true value (and I never forget it, because the publisher is always right next to the content). (Admittedly, the curators on the app pose a more complicated issue.)

I think part of it is that publishers are going to have to start carrying more branding in the stories. I’m not sure what this means, but if you’re reading something from The Atlantic, say, maybe they remind you throughout that this is from The Atlantic. It’s not ideal, but again, I think if publishers aren’t getting advertising revenue or branding credit with their stories there is no reason for them to support their travels around the web. I also think metadata comes into play, and while I don’t know what the best answer is quite yet, I think it’s important to start encouraging the display of more information about original sources on stories (again, not sure what that looks like, but I’ve been turning it over in my head).
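(Since I just admitted I don’t know what that metadata looks like, here is one purely hypothetical sketch, nothing more, of the minimum attribution a reader app or a sharer could carry alongside a saved story so the original publisher’s branding survives the trip around the web. Every field name and value below is invented for illustration; it isn’t an existing standard.)

    import json

    # Hypothetical attribution record; all field names and values are made up.
    attribution = {
        "original_publisher": "The Atlantic",
        "original_url": "https://www.theatlantic.com/an-example-story",
        "author": "Jane Writer",
        "saved_via": "Instapaper",
        "shared_by": "noahbrier.com",
        "retrieved": "2012-03-18",
    }

    # A reader app could render this next to the story body, so the source
    # is never entirely stripped away.
    print(json.dumps(attribution, indent=2))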

This whole issue is obviously something I’m thinking a lot about at Percolate. I believe brands should be the best behaved of the bunch. I also believe brands have a responsibility to be both curators and creators: To increase the pool of original quality content on the web. No one is to blame for all this stuff, but we are all responsible for making sure it’s solved before it’s too late.

March 18, 2012

Accepting the Uncomfortable

Was just reading through some old Instapaper stuff and ran across this post celebrating what would have been Marshall McLuhan’s 100th birthday. It includes an excerpt of an interview McLuhan did with Playboy and this excellent explanation of McLuhan’s approach:

I’m not advocating anything; I’m merely probing and predicting trends. Even if I opposed them or thought them disastrous, I couldn’t stop them, so why waste my time lamenting? As Carlyle said of author Margaret Fuller after she remarked, “I accept the Universe”: “She’d better.” I see no possibility of a worldwide Luddite rebellion that will smash all machinery to bits, so we might as well sit back and see what is happening and what will happen to us in a cybernetic world. Resenting a new technology will not halt its progress.

I’ve written about this in the past, but I think one of the things that amazes me most about McLuhan was his ability to separate himself so well. The things he said that people struggle with most (like the idea that content doesn’t matter as much as the medium) they struggle with because those things make them uncomfortable as content creators (I’m speaking from personal experience here). Even reading what he said above makes me feel a bit uncomfortable, as it feels like one shouldn’t just accept things … But who am I to say?

February 29, 2012

Location-Based Media

Gizmodo has a glowing review of Pinwheel, Caterina Fake’s new company (Fake was co-founder of Flickr and Hunch). From the sounds of it (and the big photo at the top of the Gizmodo post) it’s a way to annotate the world around you. This is an interesting idea to me for a bunch of reasons. First, way back in the day I was fascinated by Yellow Arrow, which was a version of this idea before people had smartphones with internet and GPS. You stuck an arrow sticker up and it had a unique SMS code that you could use to get whatever data was associated with it. Looking back in the archives, I guess I first spotted Yellow Arrow in 2004 in Wired and wrote more about them and Flickr and annotation a year later in 2005 (oh blogs, aren’t they grand).

Anyway, back to Pinwheel. About a year ago I was having a conversation with a friend of mine about how I thought someone needed to start a location-based media company. As more and more services became about understanding your location and giving you valuable information based on it, it seemed logical that someone would start to focus on the creation of media with coordinates attached to it. I was thinking of this not as something that was competitive with Foursquare and the like, but rather as the natural extension: Location is a platform, Foursquare is an API for it, and this is one of the businesses that will be built upon it.

Having never played with Pinwheel, I find it hard to say where it fits in that location stack (does it compete with Foursquare or complement it?), but it’s interesting to see stuff like this starting to pop up, and I hope it’s awesome and interesting and pulls in data beyond just what people add directly to it.

February 18, 2012

Highly Variable Product

Felix Salmon has a good rundown on how Elizabeth Spiers has succeeded at the New York Observer. I thought his summation of online content was especially interesting (and somewhat sad):

And so, in the proud tradition of good blogs everywhere, readers are left with a highly variable product. The great is rare; the dull quite common. But — and this is the genius of the online format — that doesn’t matter, not any more, and certainly not half as much as it used to. When you’re working online, more is more. If you have the cojones to throw up everything, more or less regardless of quality, you’ll be rewarded for it — even the bad posts get some traffic, and it’s impossible ex ante to know which posts are going to end up getting massive pageviews. The less you worry about quality control at the low end, the more opportunities you get to print stories which will be shared or searched for or just hit some kind of nerve.

February 6, 2012

Setups

It’s always interesting to see how smart people get their job done, which is why I like The Setup and Atlantic Wire’s Media Diet feature. The former asks interesting people – mostly engineers – about the hardware/software they use on a daily basis, while the latter digs into the media habits of some of the most successful journalists around (last week was Andrew Ross Sorkin). Beyond getting interesting tips for software and new Twitter feeds to follow, what’s so great about these things is that they recognize the role of outside tools and influences in the lives of successful people. It’s a good thing to remember.

November 28, 2011

Being a Market Leader

Felix Salmon expands on some of the stuff I wrote the other day about brands as publishers. Specifically, he points to an interesting example I didn’t know about in the Gates Foundation. The non-profit gave the Guardian a $2.5 million grant to support the Guardian’s global development microsite for three years. Felix explains:

The Gates Foundation actually launched the site in 2010, spending an undisclosed sum to do so; the new grant keeps the site going for another three years. As part of the deal, every page in the site — be it blog post or news story — gets prominently branded with the Gates Foundation logo, right at the top of the column where all the editorial content goes. (In fact, the logo is significantly larger than the Guardian’s own logo at the top of the page, although the site looks and feels like the rest of the Guardian site, and lives at guardian.co.uk.)

At the end of the post Felix asks a few questions, including what the Gates Foundation gets out of an arrangement like this. I’ve got a guess, which is that they get more awareness around the issues. That sounds like a bit of a throwaway answer, but the Gates Foundation is in an interesting position as a brand: They are a market leader. When you’re a market leader your goal becomes less about building your own position and more about building the category.

Take BabyCenter as an example. Johnson & Johnson dominates the baby category. Last time I heard, their market share was up above 50 percent. Their objective with marketing is less about displacing the competition and more about building the market: They want parents to take more “care” of their babies by buying more products. If they take just their regular percentage of the new market it’s a big deal.

I’ve written about it in the past, but Google is one of my favorite examples of a market-leader marketer. Their dominance in search makes it inefficient to try to steal share from competitors (how will they even find the small percentage of people who use Bing?). Instead, they spend money growing the category with products like Android and Chrome. Here’s what I wrote about the strategy in 2009:

What that means is everything Google does is about getting more people to use the internet more. Use Android as an example: It is absolutely in Google’s best interest to release a mobile OS that makes it easy to browse the web because that means more people using the internet more which means more searches on Google (because of that market dominance) which means more clicks on the paid ads. Voila, you’re rich.

I suspect Gates thinks about the approach in a very similar way. The more people are thinking about these issues, the more effective they can be in enacting the change they are pushing for. I’m actually surprised they bothered with the branding on the pages, though it likely makes the Guardian much more comfortable.

The broader question, which Felix seems to be getting at, is what can we learn from programs like this and is there a model here for media companies? I suspect the answer is yes, though the first thing we need to figure out is how to apply the model to non-market leaders. When you’re promoting a lifestyle, idea or category you lead, it’s easy to see how getting people to think about it more makes sense. If you’re a brand who isn’t in that situation (most), how do you build value in a similar way?

November 19, 2011

A Look Back at Fukushima

IEEE Spectrum has a really good step-by-step look back at everything that happened at Fukushima in the hours and days after the earthquake/tsunami. This sort of reporting is really interesting if for no other reason than it’s generally really hard to find. For all the coverage you watched and read in the days and weeks that followed the disaster, the dropoff on any story like that happens fast. For all the talk about the public’s declining attention span, the media is just as bad. I mentioned this a few years ago, but I still think often about this quote from the 2008 Pew State of the Media report:

Rush Limbaugh’s reference to the mainstream press as the “drive-by” media may be an ideologically driven critique, but in the case of several major stories in 2007, including the Virginia Tech massacre, the media did reveal a tendency to flood the zone with instant coverage and then quickly drop the subject. The media in 2007 had a markedly short attention span.

November 8, 2011

Apps on TVs

For the last five years there’s been a general expectation that Apple would get into the TV market, and the flames were only fanned by a quote from the new Steve Jobs biography about how he had “cracked” the problem. John Gruber and Jason Kottke think the Jobsian solution looks like apps, not channels:

Letting each TV network do their own app allows them the flexibility that writing software provides. News networks can combine their written and video news into an integrated layout. Networks with contractual obligations to cable operators, like HBO and ESPN, can write code that requires users to log in to verify their status as an eligible subscriber.

Over the last few weeks I’ve been singing the praises of the Watch ESPN app to anyone who will listen. With your cable credentials (well, mine at least), you’re able to sign in and watch ESPN, ESPN2 and a whole bunch of other content that didn’t make it to a numbered channel. It’s a great and somewhat peculiar experience. After just a few minutes of watching SportsCenter you notice two big things. First, there are no commercials; they just say “commercial break” and show nothing. Second, there is no MLB content. When they went to baseball highlights (a big SportsCenter topic over the last few weeks), the screen went blank again just like it did during a commercial (sometimes it just showed the score or got blurry). I’m assuming that, because of MLB.com, Major League Baseball controls the exclusive internet streaming rights. It’s not a dealbreaker for me, as I’m a football/NASCAR man, but it does speak to the complications of the television industry, which Dan Frommer wraps up nicely in a response to Gruber’s post:

For the networks, not pissing off the cable guys means staying away from putting too much digital video on TV sets, especially for free. iPhone and iPad apps aren’t as bad. And yes, the geeks among us have been plugging their laptops into their TVs for years. But putting stuff on a TV set in a way that’s easy for normal people to access — and in a way that competes with traditional TV — is still a no-no for most networks. Especially the ones that are more dependent on affiliate fees, or hope to make the argument for higher affiliate fees in the future. This is one reason that TV networks have blocked Google TV from accessing their content. And why many iPad video apps don’t let you beam the video to your Apple TV via AirPlay.

I really like it when people lay out the realities of a business for the world. Often we hear about how broken the television industry is, but if you’re a cable company things are pretty peachy. Sure, you are fighting against putting too much content on the web and pissing off the digerati by blocking your content from Google TV, but you don’t care much because you get paid truckloads of money for absolutely nothing. How many other businesses are there on the planet where you get paid regardless of whether someone has any interest in ever interacting with your product? Sure this will change, and no company has done a better job over the past 15 years than Apple at pushing industries with seemingly unbreakable business models into a new way of thinking (music and mobile), but television will be especially tough because of both the economics and Apple’s past success. Or, as Frommer puts it:

The people running TV networks are not dummies. They may be slow to adopt new technology, but they’re not stupid. They saw what “working with Apple” did to the music industry. And they are set on making sure that if Internet distribution and new technologies eventually redraw the entire TV distribution chain, it happens on their terms and on their schedule.

Okay, enough writing about Apple. Back to regularly scheduled internettery.

November 2, 2011