I’ve been experimenting with a daily email, written with Colin Nagy, called Why Is This Interesting? This is from today’s edition. If you’re interested in checking it out, drop me a line (I’ll post something here when we launch publicly).
This weekend the Times ran an opinion piece about the dangers of backup cameras. It was about more than that, obviously, but the gist of the genre is that all this new tech is lulling us into a false sense of security, leaving us over-reliant and, eventually, at risk of forgetting entirely how to do things on our own.
Why is this interesting?
Because this is something we’ve been worried about forever (literally). In Phaedrus, Plato worried about roughly the same thing as it related to writing: “If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder.”
The reality is that all technology affects culture in expected and unexpected ways. “We shape our tools and thereafter our tools shape us” is one of my favorite aphorisms (misattributed to McLuhan). The irony, of course, is that the complaints in this article are perfectly expected. We come to rely on automation because it’s mostly better. In fact, the strangest part of the whole piece is the way the evidence of backup camera safety is presented. “Between 2008 and 2011,” the author writes, “the percentage of new cars sold with backup cameras doubled, but the backup fatality rate declined by less than a third while backup injuries dropped only 8 percent.” I think the implication is that those numbers aren’t all that impressive, but a 20 or 30 percent drop in backup fatalities seems pretty excellent to me.
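Just to make that concrete, here’s the arithmetic as a quick sketch (the baseline counts are hypothetical placeholders I’ve made up, since the piece only gives percentage changes):

```python
# Back-of-envelope reading of the article's percentages. The absolute
# counts are hypothetical placeholders; the op-ed only reports relative
# changes between 2008 and 2011.

fatalities_2008 = 300    # hypothetical baseline: annual backup fatalities
injuries_2008 = 18_000   # hypothetical baseline: annual backup injuries

fatality_drop = 0.30     # "declined by less than a third" ~ 30%
injury_drop = 0.08       # "dropped only 8 percent"

fatalities_2011 = fatalities_2008 * (1 - fatality_drop)
injuries_2011 = injuries_2008 * (1 - injury_drop)

print(f"Fatalities: {fatalities_2008} -> {fatalities_2011:.0f} ({fatality_drop:.0%} drop)")
print(f"Injuries:   {injuries_2008:,} -> {injuries_2011:,.0f} ({injury_drop:.0%} drop)")
# Seen in absolute terms, roughly 90 fewer deaths a year (on these
# made-up numbers) reads as a win, not a disappointment.
```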
The Times piece is effectively an exploration of McLuhan’s four effects. The backup camera enhances our senses by giving us eyes in the back of our heads, obsolesces the car’s mirrors, and retrieves a time when cars were smaller; but, as the article points out, when pushed to its extreme it reverses our role as driver, handing control entirely over to the tech. The points are valid, but we should be less surprised that this keeps happening and try to keep things in perspective.
I’ve set a reasonably modest goal for myself of writing 10 blog posts in April. Let’s see if I can get back on this bike (since I really miss it). This is post number 5!
Over the last few weeks I’ve been asked a lot about my take on the Facebook news, and I’ve struggled to add much to the conversation. I’m not shocked (this story has been around since 2015 in almost exactly its current form, a fact I don’t think nearly enough people understand); we shouldn’t be calling it a breach or a leak (that’s not what happened); and I think it has a lot more to do with the new European data regulation, GDPR, than most coverage mentions. Outside of that I’m mostly left pondering questions and thought experiments: What is the minimum amount of targeting Facebook would have to hold on to in order to maintain 80 percent of its ad revenue (call it minimum viable targeting)? And did Facebook end up in this mess in an effort to directly make more money (the “Facebook wants more data to sell to advertisers” argument) or in an effort to drive engagement (which, of course, also helps make more money)? I’m not sure that second one matters, but it’s interesting to me nonetheless.
Anyway, mostly I’m left looking for opinions that go beyond the recitation of facts.
On Sunday morning I was reading the Times opinion section and ran into an idea that felt new. Here it is from Jonathan Zittrain’s op-ed “Mark Zuckerberg Can Still Fix This Mess”:
On the policy front, we should look to how the law treats professionals with specialized skills who get to know clients’ troubles and secrets intimately. For example, doctors and lawyers draw lots of sensitive information from, and wield a lot of power over, their patients and clients. There’s not only an ethical trust relationship there but also a legal one: that of a “fiduciary,” which at its core means that the professionals are obliged to place their clients’ interests ahead of their own.
The legal scholar Jack Balkin has convincingly argued that companies like Facebook and Twitter are in a similar relationship of knowledge about, and power over, their users — and thus should be considered “information fiduciaries.”
Information fiduciary is one of the first ideas I’ve encountered in the whole morass of Facebook think-pieces that feels both new and useful. The basic idea is that Facebook and other similar platforms have a special relationship with users’ data that resembles the fiduciary responsibility doctors and lawyers have to their patients and clients (critically, Balkin draws a distinction between responsibility for data and responsibility for advice; Facebook obviously only has the former).
To see the stakes, consider the worst-case hypothetical Balkin works through:

Now consider a hypothetical, hotly contested future election. Suppose that Mark Zuckerberg personally favors whichever candidate you don’t like. He arranges for a voting prompt to appear within the newsfeeds of tens of millions of active Facebook users—but unlike in the 2010 experiment, the group that will not receive the message is not chosen at random. Rather, Zuckerberg makes use of the fact that Facebook “likes” can predict political views and party affiliation, even beyond the many users who proudly advertise those affiliations directly. With that knowledge, our hypothetical Zuck chooses not to spice the feeds of users unsympathetic to his views. Such machinations then flip the outcome of our hypothetical election. Should the law constrain this kind of behavior?
Balkin argues that we don’t really have any legal way to stop Facebook from doing that: the First Amendment gives the company the right to political speech. We could hope it wouldn’t act for fear of the backlash such a move would likely create (and, true, that would probably be enough to prevent it), but do we feel good relying on the market in this case?
After going through a bunch of options for dealing with the situation, Balkin lands on the fiduciary concept. “Generally speaking, a fiduciary is one who has special obligations of loyalty and trustworthiness toward another person,” he writes. “The fiduciary must take care to act in the interests of the other person, who is sometimes called the principal, the beneficiary, or the client. The client puts their trust or confidence in the fiduciary, and the fiduciary has a duty not to betray that trust or confidence.”
Balkin and Zittrain have gone on to propose a federal law along these lines, a Digital Millennium Privacy Act (DMPA) modeled on the grand bargain of the DMCA:

The DMPA would provide a predictable level of federal immunity for those companies willing to subscribe to the duties of an information fiduciary and accept a corresponding process to disclose and redress privacy and security violations. As with the DMCA, those companies unwilling to take the leap would be left no worse off than they are today—subject to the tender mercies of state and local governments. But those who accept the deal would gain the consistency and calculability of a single set of nationwide rules. Even without the public giving up on any hard-fought privacy rights recognized by a single state, a company could find that becoming an information fiduciary could be far less burdensome than having to respond to multiple and conflicting state and local obligations.
This feels like a real idea: one that offers value to all parties involved and a legitimate framework for implementation. I don’t know that it will ever come to pass, but I’m excited to keep following the conversation around it.
Over at the Percolate blog I wrote up a two-part series based on a talk I gave at our client summit about the history of brand management and the need to create a new system of record for marketing. Part one opens:
Late last week James wrote a post called Moving from Installation to Deployment, where he laid out a framework for thinking about how technology moves throughout history and where our modern age fits into the puzzle. As part of his post he introduced some ideas from an economist named Carlota Perez, who argues that each technological revolution (of which we’re in our fifth) follows a similar pattern of installation, where we essentially lay out the new technology in the form of infrastructure, followed by deployment, where we finally get a chance to build upon that infrastructure and realize its value.
Part two, meanwhile, dives into the implications and lays out a framework for building this new system of record for marketing:
To scale marketing at the rate of technology and address the increasing complexity, we have to take a page out of the P&G brand-management playbook, Rising Tide: Lessons from 165 Years of Brand Building at Procter & Gamble. The book points out how “P&G recognized that building brands is not exclusively or even primarily a marketing activity. Rather it is a systems problem.” This is fundamental. When you’re dealing with a huge amount of change and complexity, as tempting as it is to answer the question with a one-off solution, the systemic path is always more powerful. This is where we have to start in solving the challenge of rethinking marketing for this new age.
Differing opinions (sort of) from the New York Times over whether technology is or isn’t what the science-fiction writers imagined. From a November article titled “In Defense of Technology”:
Physical loneliness can still exist, of course, but you’re never friendless online. Don’t tell me the spiritual life is over. In many ways it’s only just begun. Technology is not doing what the sci-fi writers warned it might — it is not turning us into digits or blank consumers, into people who hate community. Instead, there is evidence that the improvements are making us more democratic, more aware of the planet, more interested in the experience of people who aren’t us, more connected to the mysteries of privacy and surveillance. It’s also pressing us to question what it means to have life so easy, when billions do not. I lived through the age of complacency, before information arrived and the outside world liquified its borders. And now it seems as if the real split in the world will not only be between the fed and the unfed, the healthy and the unhealthy, but between those with smartphones and those without.
And now, in response to the Sony hack, Frank Bruni writes, “The specter that science fiction began to raise decades ago has come true, but with a twist. Computers and technology don’t have minds of their own. They have really, really big mouths.” He continues:
“Nothing you say in any form mediated through digital technology — absolutely nothing at all — is guaranteed to stay private,” wrote Farhad Manjoo, a technology columnist for The Times, in a blog post on Thursday. He issued a “reminder to anyone who uses a digital device to say anything to anyone, ever. Don’t do it. Don’t email, don’t text, don’t update, don’t send photos.” He might as well have added, “Don’t live,” because self-expression and sharing aren’t easily abandoned, and other conduits for them — landlines, snail mail — no longer do the trick.
And a third Times piece, this one on artificial intelligence and the American work force, sits somewhere between the two:

Yet there is deep uncertainty about how the pattern will play out now, as two trends are interacting. Artificial intelligence has become vastly more sophisticated in a short time, with machines now able to learn, not just follow programmed instructions, and to respond to human language and movement. … At the same time, the American work force has gained skills at a slower rate than in the past — and at a slower rate than in many other countries. Americans between the ages of 55 and 64 are among the most skilled in the world, according to a recent report from the Organization for Economic Cooperation and Development. Younger Americans are closer to average among the residents of rich countries, and below average by some measures.
My opinion falls into the protopian camp: things are definitely getting better, but new complexities are bound to emerge along the way. It’s not going to be simple, and there are lots of questions we should be asking ourselves about how technology is changing us and the world, but it’s much healthier to start from a place of positivity and a recognition that much of the change is good change.
Once people started to read, and once books were in circulation, very quickly the population of Europe realized that they were farsighted. This is interestingly a problem that hadn’t occurred to people before because they didn’t have any opportunity to look at tiny letter forms on a page, or anything else that required being able to use your vision at that micro scale. All of a sudden there is a surge in demand for spectacles. Europe is awash in people who were tinkering with lenses, and because of their experimentation, they start to say, “Hey, wait. If we took these two lenses and put them together, we could make a telescope. And if we take these two lenses and put them together, we could make a microscope.” Almost immediately there is this extraordinary scientific revolution in terms of understanding and identifying the cell, and identifying the moons of Jupiter and all these different things that Galileo does. So the Gutenberg press ended up having this very strange effect on science that wasn’t about the content of the books being published.
As I’ve established here, I’m a big McLuhan fan, and this is pretty good evidence that the effect of the medium is often much more important than the specific message.