I’ve set (what I originally thought was) a reasonably modest goal for myself of writing 10 blog posts in April. Two more to go with one week left. Thanks for following along and please let me know what you think. Also, you can now subscribe to the blog by email. Sign up here.
Alright alright alright. Quick status check: I spent the week out in SF for Percolate’s Transition Conference, where I gave a talk about how to use supply chain thinking and the Theory of Constraints to deal with the content marketing bottleneck (I’ll share the video once it’s online). We’ll be in London in early June, so if you’re around and interested in coming, please reach out. I just finished Soldiers of Reason, a history of the RAND Corporation (I’ve got a half-written post about it I’ll try to get out). I’m taking a break from game theory and nuclear warfare with Andy Weir’s new book Artemis (I haven’t heard great things about it, but I liked The Martian a lot and my library hold came through the day I finished the other book). Now onto the links.
On a more serious tip, The New Yorker had my favorite profile of Sorrell. For what it’s worth, I met him once or twice and emailed with him a few times, and my takeaways were a) he knows his company and the ad industry inside and out, b) he emailed me back immediately, and c) he was a good performer (it was a lot of fun to watch him interview folks on stage and make them squirm a bit, especially media owners).
Two really excellent long-form pieces from this week:
That isn’t to say the hearings went over perfectly, even at home. One mystifying thing to employees was that Zuckerberg frequently seemed to come up short when asked for details about the advertising business. When pressed by Roy Blunt (R-Missouri)—who, Zuckerberg restrained himself from pointing out, was a client of Cambridge Analytica—Facebook’s CEO couldn’t specify whether Facebook tracks users across their computing devices or tracks offline activity. He seemed similarly mystified about some of the details about the data Facebook collects about people. In total, Zuckerberg promised to follow up on 43 issues; many of the most straight-ahead ones were details on how the ad business works. It’s possible, of course, that Zuckerberg dodged the questions because he didn’t want to talk about Facebook’s tracking on national TV. It seemed more likely to some people on the inside, however, that he genuinely didn’t know.
I’ve set a reasonably modest goal for myself of writing 10 blog posts in April. Let’s see if I can get back on this bike (since I really miss it). This is post number 5!
Over the last few weeks I’ve been asked a lot about my take on the Facebook news, and I’ve struggled to add much to the conversation. I’m not shocked (this story has been around since 2015 in almost exactly its current form, a fact I don’t think nearly enough people appreciate), we shouldn’t be calling it a breach or a leak (that’s not what happened), and I think it has a lot more to do with the new European data regulation, GDPR, than most coverage mentions. Outside of that, I’m mostly left pondering thought experiments: what is the minimum amount of targeting Facebook would have to hold on to in order to maintain 80% of its ad revenue (call it minimum viable targeting)? And did they end up in this mess in an effort to directly make more money (the “Facebook wants more data to sell to advertisers” argument) or in an effort to drive engagement (which, of course, also makes more money)? I’m not sure that second one matters, but it’s interesting to me nonetheless.
Anyway, mostly I’m left looking for opinions that go beyond the recitation of facts.
On Sunday morning I was reading the Times opinion section and ran into an idea that felt new. Here it is from Jonathan Zittrain’s op-ed “Mark Zuckerberg Can Still Fix This Mess”:
On the policy front, we should look to how the law treats professionals with specialized skills who get to know clients’ troubles and secrets intimately. For example, doctors and lawyers draw lots of sensitive information from, and wield a lot of power over, their patients and clients. There’s not only an ethical trust relationship there but also a legal one: that of a “fiduciary,” which at its core means that the professionals are obliged to place their clients’ interests ahead of their own.
The legal scholar Jack Balkin has convincingly argued that companies like Facebook and Twitter are in a similar relationship of knowledge about, and power over, their users — and thus should be considered “information fiduciaries.”
“Information fiduciary” is one of the first things I’ve read in the whole morass of Facebook think-pieces that felt both new and useful. The basic idea is that Facebook (and other similar platforms) has a special relationship with users’ data that resembles the fiduciary responsibility doctors and lawyers have for their patients’ and clients’ information (critically, Balkin distinguishes between responsibility for data and responsibility for advice; Facebook obviously only has the former).
Now consider a hypothetical, hotly contested future election. Suppose that Mark Zuckerberg personally favors whichever candidate you don’t like. He arranges for a voting prompt to appear within the newsfeeds of tens of millions of active Facebook users—but unlike in the 2010 experiment, the group that will not receive the message is not chosen at random. Rather, Zuckerberg makes use of the fact that Facebook “likes” can predict political views and party affiliation, even beyond the many users who proudly advertise those affiliations directly. With that knowledge, our hypothetical Zuck chooses not to spice the feeds of users unsympathetic to his views. Such machinations then flip the outcome of our hypothetical election. Should the law constrain this kind of behavior?
Balkin argues that, legally, we don’t really have any way to stop Facebook from doing that: the First Amendment gives the company the right to political speech. We could hope they wouldn’t do it because of the backlash it would likely create (and that would probably be enough to stop them), but do we feel good relying on the market in this case?
After going through a bunch of options for dealing with the situation, Balkin lands on the fiduciary concept. “Generally speaking, a fiduciary is one who has special obligations of loyalty and trustworthiness toward another person,” he writes. “The fiduciary must take care to act in the interests of the other person, who is sometimes called the principal, the beneficiary, or the client. The client puts their trust or confidence in the fiduciary, and the fiduciary has a duty not to betray that trust or confidence.”
The DMPA would provide a predictable level of federal immunity for those companies willing to subscribe to the duties of an information fiduciary and accept a corresponding process to disclose and redress privacy and security violations. As with the DMCA, those companies unwilling to take the leap would be left no worse off than they are today—subject to the tender mercies of state and local governments. But those who accept the deal would gain the consistency and calculability of a single set of nationwide rules. Even without the public giving up on any hard-fought privacy rights recognized by a single state, a company could find that becoming an information fiduciary could be far less burdensome than having to respond to multiple and conflicting state and local obligations.
This feels like a real idea that has value for all parties involved and a legitimate framework for implementation. I don’t know that it will ever come to pass, but I’m excited to continue paying attention to the conversations around it.
Google also has a drone program—in April it bought one of Ascenta’s competitors, Titan Aerospace—but what’s notable about its approach so far is that it has been almost purely technological and unilateral: we want people to have the Internet, so we’re going to beam it at them from a balloon. Whereas Facebook’s solution is a blended one. It has technological pieces but also a business piece (making money for the cell-phone companies) and a sociocultural one (luring people online with carefully curated content). The app is just one part of a human ecosystem where everybody is incentivized to keep it going and spread it around. “Certainly, one big difference is that we tend to look at the culture around things,” Zuckerberg says. “That’s just a core part of building any social project.” The subtext being, all projects are social.
This is a pretty interesting point of difference between how the two companies view the world. Google sees every problem as a purely technical issue, whereas Facebook sees it as part cultural and part technical. I’m not totally sure I buy it (it seems unfair to call Android a purely technical solution), but it’s an interesting lens to look through when examining two of the world’s most important tech companies.
As we all know, Google has very publicly announced its intention to build G+ into a massive social platform at any cost. For a while I think many simply nodded and metaphorically patted Google on the head, as if to say, “sure Google, whatever you say.” However, as Android has continued to grow, I’ve noticed something very interesting: It seems that Google’s plan to turn G+ into a platform is to hitch its wagon to Android. With over a billion users it’s hard to argue with that strategy.
We can expect to see Facebook deemphasizing traditional advertising units in favor of promoted news stories in your stream. The reason is that the very best advertising is content. Blurring the lines between advertising and content is one of the most ambitious goals a marketer could have. Bringing earnings expectations into this, the key to Facebook “fixing” their mobile advertising problem is not to create a new ad-unit that performs better on mobile. Rather, it is for them to sell the placement of stories in the omnipresent single column newsfeed. If they are able to nail an end-to-end promoted stories system, then their current monetization issues on mobile disappear.
The only thing I’d add to this (which I tweeted yesterday) is to ask why brands would be treated differently than people on Facebook. If any of us posts something to FB it will only reach a portion of our friends, so why should a brand be able to reach 100% of its fans? It’s a filtered platform, and that’s what makes it different from Twitter and Tumblr.