This whole article about the issues with saving data over long periods of time is good, but I especially liked the nugget at the end of this paragraph from Bruce Sterling:
LAST spring, the Harry Ransom Center at the University of Texas acquired the papers of Bruce Sterling, a renowned science fiction writer and futurist. But not a single floppy disk or CD-ROM was included among his notes and manuscripts. When pressed to explain why, the prophet of high-tech said digital preservation was doomed to fail. “There are forms of media which are just inherently unstable,” he said, “and the attempt to stabilize them is like the attempt to go out and stabilize the corkboard at the laundromat.”
[Editor’s note: After writing my post about imagining the future the other day my friend Martin emailed me the following response. I thought it was super interesting and asked if he minded me posting it here. He agreed and here it is.]
Per your “floating future” post – been thinking a lot about this idea lately: where has all the big thinking gone? Been thinking about it for a couple of reasons. One has to do with World War II.
On the one hand, I’ve been helping my son study post-World War II America in his sophomore social studies class. And on the other, I’ve been reading a fair amount of late-’40s-through-’50s fiction (Cheever, Roth, Salinger, Updike, etc.). And I’ve been amazed (probably due to my ignorance) at how long a shadow that war cast over the subsequent decade, on a very personal level.
I’ve been wondering if exposure to such massive, global thinking during the war made “big” thinking possible in the 1950s in a way it hadn’t been before. Think about it: guys from little towns across America – guys who had probably never been out of their state, let alone out of the country – were suddenly involved in supply chains that literally ran around the world. Guys who had never seen more than a hundred or two hundred people together at a time were suddenly involved in battles involving thousands and thousands of people, from dozens of countries, with machines that had been invented expressly for these purposes.
I have to believe that got them thinking that anything – or a whole lot more – was possible. And I don’t think the mass of folks today are exposed to that.
The other thing that this makes me think about is the negative effects of the rush to monetization. If you have to make something pay right now (or in the next 3, 4, 5 years, etc.), you have to think differently about it than if you just open your head and think “what if?”. For all the complaints about companies like Facebook and Twitter and Groupon that didn’t turn a profit, you have to admit that they are bigger ideas than ones that could cash in (that is, generate revenue) more quickly. I wonder if that’s because monetization, to be believable, has to be based on the here and now – economic realities that exist currently – and that really big thinking, real game-changing stuff, relies on economics and realities that haven’t happened yet.
Or, to use a sports metaphor: in soccer, the great goals are often scored when someone passes into the space a teammate is running toward, not to where he is.
Yesterday I posted a link to the Michael Lewis profile/review of Daniel Kahneman’s new book. Since yesterday I’ve repeated this little nugget on how Kahneman discovered behavioral economics three times and thought it was worth sharing:
I can still recite its [a 1970s paper on the psychological assumptions of economic theory] first sentence: “The agent of economic theory is rational, selfish, and his tastes do not change.”
I was astonished. My economic colleagues worked in the building next door, but I had not appreciated the profound difference between our intellectual worlds. To a psychologist, it is self-evident that people are neither fully rational nor completely selfish, and that their tastes are anything but stable.
It’s a nice way to think about the difference between psychology and economics.
Most people won’t ever touch Amazon’s cloud computing service. They will, however, touch an application that touches the service (Foursquare, Reddit and Percolate, to name a few). What Amazon offers developers is the ability to bring a server up and down in an instant, paying only for the time it was live (for the uninitiated, this is all thanks to virtualization, which is pretty amazing). The other really neat thing about what Amazon offers is that there are a ton of server images to choose from when you launch your new box. That means in addition to size and speed, you can choose from different operating systems and even very specific configurations with additional software pre-installed (for instance, there’s a WordPress image that comes with all the software one would need to run a blog on Amazon’s cloud).
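To make the pay-for-the-time-it-was-live idea concrete, here’s a tiny sketch of the billing math. The hourly rate and the round-up-to-the-hour rule are illustrative assumptions for the example, not Amazon’s actual pricing:

```python
import math

def instance_cost(hours_live: float, hourly_rate: float) -> float:
    """Cost of a server that only exists while you need it.

    Assumes the classic bill-by-the-full-hour model, so partial
    hours round up. Rates here are made up for illustration.
    """
    billed_hours = math.ceil(hours_live)
    return round(billed_hours * hourly_rate, 2)

# A box spun up for a 90-minute batch job at a hypothetical $0.10/hour:
print(instance_cost(1.5, 0.10))  # 2 billed hours -> 0.2
```

The point is simply that the server costs nothing when it isn’t running, which is what makes spinning machines up and down on demand so attractive.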
Anyway, some researchers looked into the security of these images and things didn’t turn out so peachy:
The results, which the team plans to present in a paper at the Symposium on Applied Computing next March, aren’t pretty: 22% of the machines were still set up to allow a login by whoever set up the virtual machine’s software – either Amazon or one of the many third-party companies like Turnkey and Jumpbox that sell preset machine images running on Amazon’s cloud. Almost all of the machines ran outdated software with critical security vulnerabilities, and 98% contained data that the company or individual who set up the machine for users had intended to delete but that could still be extracted from the machine.
IEEE Spectrum has a really good step-by-step look back at everything that happened at Fukushima in the hours and days after the earthquake and tsunami. This sort of reporting is really interesting, if for no other reason than it’s generally really hard to find. For all the coverage you watched and read in the days and weeks that followed the disaster, the dropoff on any story like that happens fast. For all the talk about the public’s declining attention span, the media is just as bad. I mentioned this a few years ago, but I still think often about this quote from the 2008 Pew State of the Media report:
Rush Limbaugh’s reference to the mainstream press as the “drive-by” media may be an ideologically driven critique, but in the case of several major stories in 2007, including the Virginia Tech massacre, the media did reveal a tendency to flood the zone with instant coverage and then quickly drop the subject. The media in 2007 had a markedly short attention span.
Football manager is an interesting position. In Europe the job wraps up what are two positions in the United States: coach and GM. The big difference between the manager of a European football club and the coach of a US football team is final say over personnel decisions. In the US a coach has a say, sure, but it’s the GM who is really making the decision. Obviously that makes the European job much different, more strategic and, probably, harder.
Which makes it all the more impressive that Sir Alex Ferguson, the Manchester United manager, has been at the helm of one of the world’s most successful sports franchises for 25 years. In the US, the average tenure of a coach in one of the four major sports is right around three seasons, and although I’m having trouble tracking down good numbers for European football at the moment, I have no reason to believe it’s any longer (especially with the existence of relegation, which is one of the more brilliant things in sports).
Anyway, here’s how the article explains Ferguson’s success:
Shuffling his backroom pack has given Ferguson a fresh pair of eyes to see United through and also prevented players, in particular the longer-serving ones, from going stale on the training ground. New ideas, combined with players willing to adapt to them, are essential for the top clubs. Manchester United have not played in the same style for these 25 years; they have bought new players to adapt to new systems, sometimes to pull further away from their counterparts and sometimes to narrow a gap. This season’s style is different again and, in terms of their pressing game, has parallels with the way Barcelona try to win the ball back.
One of the things that always strikes me about NFL coaches (I know the NFL better than any of the other sports leagues) is that they always bring a system with them. In the case of the Chicago Bears and Lovie Smith it’s the cover-2 defense. There are coaches who bring offensive systems as well, but seldom do you hear about a coach adapting their system to the talent on the roster. It sounds like that is exactly what Ferguson has done and, as a result, it has helped him keep his gig (I’m sure lots of football fans would argue extraordinary amounts of money to spend on players had something to do with it as well … but Joe Torre still got fired).
I like this thought from Seth Godin on the problem with minimum viable product:
There’s a burst of energy and attention and effort that accompanies a launch, even a minimally viable one. If there’s a delay in pick up from the community, though (see #1) it’s easy to move on to the next thing, the next launch, the next hoopla, as opposed to doing the insanely hard work of sticking with that thing you already launched.
I have a bunch of issues with the conversation around lean and minimum viable product (probably the biggest of which is any ideology that people get religious about seems a bit scary). The biggest issue I have, though, is that it seems inherently about building products, not companies. There’s nothing wrong with that if you’re building a little something on the side, but if you’re building a company you have a whole bunch of other things you need to also be thinking about, not the least of which is whether you’re building a product you’re excited about and a company you actually want to work at. I’ve never heard anyone mention either of these as part of the product development conversation and it makes me sad.
Via Ian Sohn
I like coffee quite a bit (I have three coffee preparation devices on my kitchen counter) and I’ve always been under the impression that freezing coffee was a bad idea. Turns out, according to this incredibly detailed experiment, it’s pretty hard to tell the difference if the coffee is frozen for less than two months after roasting:
When the results were examined according to the three scored parameters, the overall preference, the crema, and the intensity of the taste and aroma, no statistically significant differences were noted among the coffees studied or the other variables of the study. What this means is that none of the tasters could consistently differentiate among the shots made with previously frozen or never frozen coffee. Similarly, none of the tasters could consistently tell the difference based upon whether the shots came out of the newer rotary pump driven or the older vibratory pump driven espresso machine, nor between the two grinders, one of which had brand new burrs and the other with more heavily used burrs.
In case you were worried this wasn’t taken seriously enough, here are the storage instructions:
If you are concerned about what sort of container you should use for freezing coffee, it obviously needs to be something that is relatively airtight and that can tolerate the conditions present in a freezer, and the temperature stress in going from room temperature to very cold and back again to room temperature. I generally use Mason type canning jars or recycled jars from grocery products that will close with a tight seal; I fill them up as full as possible to minimize the remaining air that is present. I have also used certain types of commercial plastic coffee bags that can be sealed and if valves are present I tape over them. If you purchase coffee that is already packaged in a sturdy valve bag you could simply tape over the valve and toss it directly into the freezer. I would however suggest that whatever container you choose, it be sized to allow you to consume all of the contents within a reasonable period, say 1 week, without having to open the bag and return some of the contents to the freezer; doing so risks condensation on the beans which could theoretically cause damage.
This all came via an excellent Lifehacker post busting food myths. Also, if you’re still reading, I’ll assume you like coffee and suggest you check out the cold-brewed iced coffee recipe I posted last year.
Fast Company posted an interesting infographic from the folks at Help Remedies documenting the insanity that is the pharmacy, specifically the headache medication aisle. The article explains:
Each of the myriad offerings laid out, whether it’s gel-caps or something else, was intended to produce a slight edge on a tightly packed, insanely competitive store shelf where virtually identical products can be found just an inch away. As drug makers compete for more and more differentiation, what you get is simply overwhelming. An innovation process that started with the original intention of offering better products leads to an overall product experience that’s horrible.
Which immediately reminded me of a quote I found when I was working on that innovation presentation. It’s from a very good Harvard Business Review article from 1980 titled “Managing Our Way to Economic Decline”:
Inventors, scientists, engineers, and academics, in the normal pursuit of scientific knowledge, gave the world in recent times the laser, xerography, instant photography, and the transistor. In contrast, worshippers of the marketing concept have bestowed upon mankind such products as new-fangled potato chips, feminine hygiene deodorant, and the pet rock….
I don’t think it’s quite this simple, but the ebb and flow of markets like this is really interesting. Help’s take is that you need less choice, not more, and they seek to simplify the conversation. But clearly at some point the conversation was simple (it had to start somewhere). I wonder where the turning point is in a category: When does the variety of products for different use-cases start to hurt overall sales? Or maybe it doesn’t; maybe all the specialized products only serve to strengthen the leading brand when confused consumers turn to what they know. (I’m sure someone with experience in this sort of CPG category knows the answer.)
After posting the other William Gibson quote about the difficulty we have in imagining the past I wasn’t sure whether I should post a second quote from his very long interview with the Paris Review. However, now that Kevin Kelly has riffed on it, I feel I have no choice (it’s Kevin Kelly talking about William Gibson, what else is a geek to do?). First, Gibson’s quote:
There’s an idea in the science-fiction community called steam-engine time, which is what people call it when suddenly twenty or thirty different writers produce stories about the same idea. It’s called steam-engine time because nobody knows why the steam engine happened when it did. Ptolemy demonstrated the mechanics of the steam engine, and there was nothing technically stopping the Romans from building big steam engines. They had little toy steam engines, and they had enough metalworking skill to build big steam tractors. It just never occurred to them to do it. When I came up with my cyberspace idea, I thought, I bet it’s steam-engine time for this one, because I can’t be the only person noticing these various things. And I wasn’t. I was just the first person who put it together in that particular way, and I had a logo for it, I had my neologism.
In Kelly’s words:
When it is steam-engine-time, steam engines will occur everywhere. But not before. Because all the precursor and supporting ideas and inventions need to be present. The Romans had the idea of steam engines, but not of strong iron to contain the pressure, nor valves to regulate it, nor the cheap fuel to power it. No idea – even steam engines — are solitary. A new idea rests on a web of related previous ideas. When all the precursor ideas to cyberspace are knitted together, cyberspace erupts everywhere. When it is robot-car-time, robot cars will come. When it is steam-engine-time, you can’t stop steam engines.
This makes me think of two things: First, it kind of changes the whole thought of the inventor. They’re no longer this solitary player who has an “aha moment,” but rather part of the network of ideas that is the current time. The inventor makes a few connections within the network and they’ve got this new thing that never could have happened without all these other circumstances to assist their creation.
With that said, my second thought is that maybe my first thought is all wrong and this has much more to do with the distinction between invention and innovation. Economist Joseph Schumpeter wrote this in his book The Theory of Economic Development:
Economic leadership in particular must hence be distinguished from “invention.” As long as they are not carried into practice, inventions are economically irrelevant. And to carry any improvement into effect is a task entirely different from the inventing of it, and a task, moreover, requiring entirely different kinds of aptitudes. Although entrepreneurs of course may be inventors just as they may be capitalists, they are inventors not by nature of their function but by coincidence and vice versa. Besides, the innovations which it is the function of entrepreneurs to carry out need not necessarily be any inventions at all. It is, therefore, not advisable, and it may be downright misleading, to stress the element of invention as much as many writers do.
It seems more likely that steam engine time is not so much about invention, but rather innovation: The idea that ideas come to life when the network is in place to support them and generally the people that win are the ones that align the pieces correctly, not necessarily the ones who create the new widget. Maybe a small distinction, but it seems like an important one.
There’s been an expectation for the last five years that Apple would get into the TV market, and the fires were only fanned by a quote from the new Steve Jobs biography about how he had “cracked” the problem. John Gruber and Jason Kottke think the Jobsian solution looks like apps, not channels:
Letting each TV network do their own app allows them the flexibility that writing software provides. News networks can combine their written and video news into an integrated layout. Networks with contractual obligations to cable operators, like HBO and ESPN, can write code that requires users to log in to verify their status as an eligible subscriber.
Over the last few weeks I’ve been singing the praises of the Watch ESPN app to anyone who will listen. With your cable credentials (well, mine at least), you’re able to sign in and watch ESPN, ESPN2 and a whole bunch of other content that didn’t make it to a numbered channel. It’s a great and somewhat peculiar experience. After just a few minutes of watching SportsCenter you notice two big things. First, there are no commercials; they just say “commercial break” and show nothing. Second, there is no MLB content. When they went to baseball highlights (a big SportsCenter topic over the last few weeks), the screen went blank again, just like it did during a commercial (sometimes it just showed the score or got blurry). I’m assuming that, because of MLB.com, Major League Baseball controls the exclusive internet streaming rights. It’s not a dealbreaker for me, as I’m a football/NASCAR man, but it does speak to the complications of the television industry, which Dan Frommer wraps up nicely in a response to Gruber’s post:
For the networks, not pissing off the cable guys means staying away from putting too much digital video on TV sets, especially for free. iPhone and iPad apps aren’t as bad. And yes, the geeks among us have been plugging their laptops into their TVs for years. But putting stuff on a TV set in a way that’s easy for normal people to access — and in a way that competes with traditional TV — is still a no-no for most networks. Especially the ones that are more dependent on affiliate fees, or hope to make the argument for higher affiliate fees in the future. This is one reason that TV networks have blocked Google TV from accessing their content. And why many iPad video apps don’t let you beam the video to your Apple TV via AirPlay.
I really like it when people lay out the realities of a business for the world. Often we hear about how broken the television industry is, but if you’re a cable company things are pretty peachy. Sure, you are fighting against putting too much content on the web and pissing off the digirati by blocking your content from Google TV, but you don’t care much because you get paid truckloads of money for absolutely nothing. How many other businesses are there on the planet where you get paid regardless of whether someone has any interest in ever interacting with your product? Sure, this will change, and no company has done a better job over the past 15 years than Apple at pushing industries with seemingly unbreakable business models into a new way of thinking (music and mobile), but television will be especially tough because of both the economics and Apple’s past success. Or, as Frommer puts it:
The people running TV networks are not dummies. They may be slow to adopt new technology, but they’re not stupid. They saw what “working with Apple” did to the music industry. And they are set on making sure that if Internet distribution and new technologies eventually redraw the entire TV distribution chain, it happens on their terms and on their schedule.
Okay, enough writing about Apple. Back to regularly scheduled internettery.
All around awesome interview with William Gibson, who seems like one of the smartest folks around. I love his answer to why he seems to romanticize articles of the past:
It’s harder to imagine the past that went away than it is to imagine the future. What we were prior to our latest batch of technology is, in a way, unknowable. It would be harder to accurately imagine what New York City was like the day before the advent of broadcast television than to imagine what it will be like after life-size broadcast holography comes online. But actually the New York without the television is more mysterious, because we’ve already been there and nobody paid any attention. That world is gone.
My great-grandfather was born into a world where there was no recorded music. It’s very, very difficult to conceive of a world in which there is no possibility of audio recording at all. Some people were extremely upset by the first Edison recordings. It nauseated them, terrified them. It sounded like the devil, they said, this evil unnatural technology that offered the potential of hearing the dead speak. We don’t think about that when we’re driving somewhere and turn on the radio. We take it for granted.
It’s sort of mind-bending, but incredibly true.
Just wanted to quickly update everyone on a few site changes. After seven years of using Movable Type to run NoahBrier.com I finally had to abandon ship after I couldn’t log in to post anymore. I spent a few days getting things ready and made the move this afternoon. For the nerdy amongst you, I wrote some scripts to make sure my old URLs don’t get killed, which I will put up on GitHub if anyone is interested. Also, I rebuilt all the HTML with the help of HTML5 Boilerplate and Twitter Bootstrap, the latter of which I’ve been blown away by (it lets you easily build out gridded sites without all the pain normally involved).
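For the curious, the gist of keeping old URLs alive is just a mapping from the old permalink scheme to the new one. Here’s a minimal sketch; the URL patterns below are hypothetical stand-ins, not my actual Movable Type or WordPress schemes:

```python
import re

# Hypothetical old Movable Type-style permalink: /archives/YYYY/MM/slug.php
OLD_PATTERN = re.compile(
    r"^/archives/(?P<year>\d{4})/(?P<month>\d{2})/(?P<slug>[\w-]+)\.php$"
)

def redirect_target(old_path):
    """Return the new WordPress-style permalink for an old URL,
    or None if the path doesn't match the old scheme."""
    m = OLD_PATTERN.match(old_path)
    if m is None:
        return None
    # New scheme assumed here: /YYYY/MM/slug/
    return "/{year}/{month}/{slug}/".format(**m.groupdict())

print(redirect_target("/archives/2011/11/steam-engine-time.php"))
# -> /2011/11/steam-engine-time/
```

In practice you’d wire a table like this up to 301 redirects (in .htaccess or a plugin) so inbound links and search rankings survive the move.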
On the less technical end, I’m now starting to import my posts from Percolate. If you’re on the site you’ll see them denoted by grey backgrounds. The idea is to use Percolate to keep the site updated with shorter curation posts: Linking off to the interesting content around the web with a short comment. I’m using a plugin we’ve been developing for some of our work with brands.
As always, thanks for all the support and let me know if you run into any issues. More posts coming soon, I promise.
I’m out of the agency game now, but I still think about it and obviously still have a ton of friends spread across the advertising world. One of the things I’ve been thinking about a lot lately (for the last two years, really) is the rise of the “creative technologist.” In theory, at least as I understand it, creative technologists were meant to bridge the gap in understanding between the advertising world and technology, as well as help elevate engineers within agencies to the pedestal the creative department occupies. This was all nice in theory, but there is a lot wrong with it, not the least of which is that changing titles hardly ever has the deeper effect on understanding and respect that it intends. But that wasn’t all: the other big effect of the new title was that schools started creating programs that taught people to be “creative technologists,” except those people were far more creative than technologist.
And so it became that there were a lot of creative technologists around who couldn’t write a lick of code and that made me sad because there are plenty of technologists, even in agencies, who are very creative. They were creative even before they got the title and then, after they got the title, absolutely nothing changed except they got more competition for their jobs from people who couldn’t actually do their jobs.
All of this is a long introduction to Igor Clark’s long piece about why you shouldn’t hire creative technologists who can’t write code, which made me very happy inside. He talks about a lot of stuff, some micro and some macro, but generally his point is that it’s the ability to make things, really good things, that matters, and hiring someone who can imagine, but not execute, is beside the point. As Igor notes (and I agree), agencies are going to struggle for a while to figure out how to attract engineering talent, especially in the current startup climate, but to thrive they are going to have to figure out how to acquire and retain the sort of people for whom being creative and being a technologist was never something they needed a fancy title for, but instead something they pursued out of passion.
Basically I’m glad someone wrote this.
As anyone who has been reading this site for some time knows, I’m a big fan of McLuhan, especially his thinking around the message of media. I think Jonah Peretti from BuzzFeed nails McLuhan’s point in this Digiday interview:
The biggest difference is in a Facebook world, it’s more your reaction to content and how you interact with your friends around content than the informational value of the content. It still matters, but it’s on equal footing with the social story that unfolds around the content. It might be that a piece of content is about how Barack Obama is ahead of the polls. People on Facebook and Twitter who like Obama have a vested interest in sharing that media. People who hate Obama have an interest in hating it. Those interactions allow the content to become more important. If you’re a publisher who wants your content to spread on Facebook, you have to think of the network and not the individual.
The content matters, sure, but the reaction and other aspects of the medium it’s being shared on are at least equally important. That’s what McLuhan was getting at. Also from the same interview, Peretti has an interesting perspective on digital brand marketing: “Brands will put all their eggs in one basket. They’ll have one epic interactive experience at one URL. What we’ve seen is that if they think of it more like a publisher launching lots of articles, they have a lot more chance of having things take off.”
I don’t often do book reports around here, but I just got through Duncan Watts and Peter Sheridan Dodds’ paper, “Influentials, Networks, and Public Opinion Formation” [PDF] and thought it might be worth sharing some quotes and thoughts (especially since it’s 36 pages of fairly dense material).
As I wrote recently, their basic thesis is that so-called “influentials” are not all they’re cracked up to be (especially by people like Gladwell and Keller). Though as they explain in the conclusion, “Our main point, in fact, is not so much that the influentials hypothesis is either right or wrong, but that its micro-foundations, by which we mean the details of who influences whom and how, require very careful articulation in order for its validity to be meaningfully assessed.” While Watts and Dodds’ own work leaves me with some questions, this seems like a hard assertion to argue with. To come up with a true theory of influence, the details of influence need to be universally defined and understood.
In fact, I don’t know that Watts and Dodds go far enough themselves, mainly because influence is so hard to pin down. Observationally, who influences whom and how can change on a daily basis and greatly depends on things like topic and relationship (as well, I’d argue, as outside factors like how busy the recipient is at the time). Watts and Dodds do acknowledge these factors, however, suggesting that “large scale changes in public opinion are not driven by highly influential people who influence everyone else, but rather by easily influenced people, influencing other easily influenced people.”
This, in and of itself, doesn’t seem particularly controversial. If you go with the idea that 10% of the population is influential, that leaves 90% of the population that’s not. Then if you assume that, especially in the current media/advertising landscape, the influential 10% is the hardest to reach because they are the most overexposed (and thus have their attention stretched the thinnest), it seems your effort may be much better spent thinking about other options. What’s more, according to Watts and Dodds’ research, while “influentials have a greater than average chance of triggering critical mass, when it exists … [their effect is] only modestly greater, and usually not even proportional to the number of people they influence directly.”
As they explain in their conclusion, the simplest way to understand this is to look at natural analogues such as forest fires:
Some forest fires, for example, are many times larger than average; yet no-one would claim that the size of a forest fire can be in any way attributed to the exceptional properties of the spark that ignited it, or the size of the tree that was the first to burn. Major forest fires require a conspiracy of wind, temperature, low humidity, and combustible fuel that extends over large tracts of land. Just as for large cascades in social influence networks, when the right global combination of conditions exists, any spark will do; and when it does not, none will suffice.
Upon reading that I was immediately brought back to something I wrote about last year. My thesis in that entry was that the marketing paradigm of leading with a single message was outdated and your better bet was to create a huge array of messages (sparks) hoping that one would ignite a cascade effect (forest fire). Especially in a digital context, where message production costs are significantly lowered, why not throw lots against a wall and see what sticks (after all, measurement and fast iteration are possible).
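The any-spark-will-do intuition can be sketched with a toy threshold model. This is my own drastic simplification (everyone sits on a ring with two neighbors and a uniform adoption threshold), not Watts and Dodds’ actual simulation:

```python
# Toy threshold cascade: each person adopts an idea once the fraction of
# their neighbors who have adopted meets their threshold. This is a sketch
# of the general idea, not the Watts-Dodds model itself.
def cascade_size(n, threshold, seed=0):
    """Spread from one seed node on a ring where everyone has two neighbors."""
    adopted = {seed}
    changed = True
    while changed:
        changed = False
        for node in range(n):
            if node in adopted:
                continue
            neighbors = [(node - 1) % n, (node + 1) % n]
            frac = sum(nb in adopted for nb in neighbors) / len(neighbors)
            if frac >= threshold:
                adopted.add(node)
                changed = True
    return len(adopted)

# When people are easily influenced, the identical spark burns everything...
print(cascade_size(100, threshold=0.5))   # 100
# ...and when they aren't, it goes nowhere.
print(cascade_size(100, threshold=0.75))  # 1
```

Note that the spark is the same in both runs; only the "conditions" (how easily influenced everyone is) changed, which is exactly the forest-fire point.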
With all that said, there is one major issue I have with Watts and Dodds’ work, which they admit to in the paper: they are examining interpersonal influence, not media influence. While they admit that the distinction is a bit blurry, especially in the case of things like blogs, they continue on with the assumption (which doesn’t seem to be grounded in any research) that “the influence of the blogger seems closer to that of a traditional newspaper columnist or professional critic, than that of a trusted confidant, or even a casual acquaintance.” Now, I don’t want to harp on bloggers, but I don’t know that I agree with this thesis.
Part of what makes blogs such a fascinating communications medium is the combination of weak and strong ties that can constitute a readership. While large-readership blogs (like BoingBoing, for example) most likely reflect a more journalistic relationship, smaller blogs like this one act much differently. Of the thousand-plus readers who frequent this site, I would guess that a significant portion constitute what I would consider weak ties (we have emailed back and forth) and a smaller portion constitute strong ties (family and close friends). This, I would assume, is significantly different from the average “newspaper columnist or professional critic,” who tend to live in another realm. In other words, the availability of bloggers may change how and when their influence functions.
This, of course, is a major critique I have of most communications theory. As my sister, who is getting her undergraduate degree in communications, can attest, I get incredibly upset when interpersonal communications research disregards mediated communications. In our current age, the boundary between interpersonal and mediated communications is hard to pin down. That's because the same technologies (email, blogging, and even text messaging) can be used for both broadcasting and interpersonal communication. Therefore, it's left up to the recipient to decide whether a communication is interpersonal or not. Prior to this, interpersonal communication was done almost entirely via one-to-one media (things like face-to-face conversation and the phone). While I'm not sure how to resolve this, it does create a major issue in all influence research, because it leaves the researcher with an incredible number of variables to contend with.
Finally, I think a discussion of engagement is probably relevant, as I think it's directly correlated to influence (and, when combined with reach, may change things slightly). When we launched Street Mining we got two links from largeish sites, one with a very large but more casual readership and one with a smaller, more dedicated one. While the larger site drove more clicks, the smaller site drove more signups. This, I believe, is the simplest illustration of engagement/influence I have seen: clearly the smaller site's readers were a better audience for the message than the larger site's. (Of course, the lack of control in this experiment means that it's impossible to say whether it was these factors that led to the additional signups.)
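The reach-versus-engagement comparison above boils down to simple conversion arithmetic. Here's a quick sketch; the click and signup numbers are hypothetical stand-ins, not the actual Street Mining figures.

```python
def conversion_rate(signups, clicks):
    """Signups per click: a rough proxy for how engaged a referring audience is."""
    return signups / clicks if clicks else 0.0

# Hypothetical numbers, NOT the real Street Mining data:
large_site = {"clicks": 2000, "signups": 40}  # big, casual readership
small_site = {"clicks": 400, "signups": 60}   # small, dedicated readership

for name, site in (("large site", large_site), ("small site", small_site)):
    rate = conversion_rate(site["signups"], site["clicks"])
    print(f"{name}: {rate:.1%} of clicks converted")
# large site: 2.0% of clicks converted
# small site: 15.0% of clicks converted
```

With numbers like these, the smaller site delivers fewer clicks but several times the conversion rate, which is one way to put a number on the "deeper influence" intuition.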
This is an interesting paradox that I think relates to this whole influentials debate and is often a stumbling block. Influence and reach are two entirely different things. While the two can be related (and historically have been), on the internet they're not necessarily related at all. For example, we've all heard about the "Digg effect," when a site gets to the front page of Digg and is hit with a deluge of traffic. What's interesting about this traffic is that it often doesn't result in much additional long-term interest, as the audience is a large and varied one. Therefore, while the site may be considered influential from a pure mass perspective, its influence seems much more superficial (I don't have data to back this up, but have read many discussions on the subject). I would say that while Digg has a large reach and high influence (causing the influx of visitors), the engagement of that audience is low (meaning that they visit the dugg site once and don't return). (Engagement is a bad word for this, but I'm having trouble thinking of another at the moment. If someone has a better way to describe it, please let me know.)
My argument would be that on many smaller sites the influence is deeper since those relationships tend to be stronger. This creates an interesting dynamic. While I don’t know that it’s statistically relevant, I do think it’s worth exploring some more. Blogs and other associated media do allow people to amplify their voices to more strong and weak ties than ever before, allowing people to have journalistic-sized audiences with relationships that more reflect interpersonal communication.
I think that's about it. I hope I haven't bored you to death (if you've actually made it to the bottom, I can only assume I haven't). Would love to hear your thoughts and feedback (on both Watts and Dodds' paper and my thoughts). Thanks for reading.