Welcome to the bloggy home of Noah Brier. I'm the co-founder of Percolate and a general internet tinkerer. This site is about media, culture, technology, and randomness. It's been around since 2004 (I'm pretty sure). Feel free to get in touch.

You can subscribe to this site via RSS (the humanity!).

Information Fiduciaries

I’ve set a reasonably modest goal for myself of writing 10 blog posts in April. Let’s see if I can get back on this bike (since I really miss it). This is post number 5!

Over the last few weeks I’ve been asked a lot about my take on the Facebook news and I’ve struggled to add much to the conversation. I’m not shocked (this story has been around since 2015 in almost exactly its current form, a fact I don’t think nearly enough people understand), we shouldn’t be calling it a breach or a leak (that’s not what happened), and I think it has a lot more to do with the new European data regulation, the GDPR, than most are mentioning. Outside of that I’m mostly left pondering questions/thought experiments: What is the minimum amount of targeting Facebook would have to hold on to in order to maintain 80% of its ad revenue (aka minimum viable targeting)? And did they actually end up in this mess in an effort to directly make more money (the “Facebook wants more data to sell to advertisers” argument) or in an effort to drive engagement (which, of course, helps make more money)? Not sure that second one matters, but it’s interesting to me nonetheless.

Anyway, mostly I’m left looking for opinions that go beyond the recitation of facts.

On Sunday morning I was reading the Times opinion section and ran into an idea that felt new. Here it is from Jonathan Zittrain’s op-ed “Mark Zuckerberg Can Still Fix This Mess”:

On the policy front, we should look to how the law treats professionals with specialized skills who get to know clients’ troubles and secrets intimately. For example, doctors and lawyers draw lots of sensitive information from, and wield a lot of power over, their patients and clients. There’s not only an ethical trust relationship there but also a legal one: that of a “fiduciary,” which at its core means that the professionals are obliged to place their clients’ interests ahead of their own.

The legal scholar Jack Balkin has convincingly argued that companies like Facebook and Twitter are in a similar relationship of knowledge about, and power over, their users — and thus should be considered “information fiduciaries.”

Information fiduciary is one of the first things I’ve read in the whole morass of Facebook think-pieces that felt both new and useful. The basic idea is that Facebook (and other similar platforms) have a special relationship with users and their data, one that resembles the fiduciary responsibilities doctors and lawyers carry toward their patients and clients (critically, Balkin draws a distinction between responsibility for data and responsibility for advice, the latter of which Facebook obviously doesn’t take on).

In his much longer and surprisingly readable paper on the idea, he lays out an argument for why we should take the concept seriously. The paper starts by replaying a question Zittrain posed in a 2014 New Statesman article after Facebook ran a get-out-the-vote experiment that drove impressive numbers:

Now consider a hypothetical, hotly contested future election. Suppose that Mark Zuckerberg personally favors whichever candidate you don’t like. He arranges for a voting prompt to appear within the newsfeeds of tens of millions of active Facebook users—but unlike in the 2010 experiment, the group that will not receive the message is not chosen at random. Rather, Zuckerberg makes use of the fact that Facebook “likes” can predict political views and party affiliation, even beyond the many users who proudly advertise those affiliations directly. With that knowledge, our hypothetical Zuck chooses not to spice the feeds of users unsympathetic to his views. Such machinations then flip the outcome of our hypothetical election. Should the law constrain this kind of behavior?

Balkin argues that we don’t really have any legal way to stop Facebook from doing that. The First Amendment gives them the right to political speech. We could hope that they wouldn’t do it because of the backlash it would likely create (and that threat would probably be enough to prevent them), but do we feel good relying on the market in this case?

After going through a bunch of options for dealing with the situation, Balkin lands on the fiduciary concept. “Generally speaking, a fiduciary is one who has special obligations of loyalty and trustworthiness toward another person,” he writes. “The fiduciary must take care to act in the interests of the other person, who is sometimes called the principal, the beneficiary, or the client. The client puts their trust or confidence in the fiduciary, and the fiduciary has a duty not to betray that trust or confidence.”

In a more recent blog post Balkin argues that Facebook has effectively confirmed the idea with its response to Cambridge Analytica, when Zuckerberg said, “We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you. I’ve been working to understand exactly what happened and how to make sure this doesn’t happen again.”

But how would it all work? Well, Zittrain and Balkin tackled that too. In a 2016 Atlantic article, they present a framework for putting the idea into practice, modeled on the Digital Millennium Copyright Act (DMCA), which, while it has its flaws, is a solution that seems to generally work for the various parties involved. Here’s their proposal for a Digital Millennium Privacy Act (DMPA):

The DMPA would provide a predictable level of federal immunity for those companies willing to subscribe to the duties of an information fiduciary and accept a corresponding process to disclose and redress privacy and security violations. As with the DMCA, those companies unwilling to take the leap would be left no worse off than they are today—subject to the tender mercies of state and local governments. But those who accept the deal would gain the consistency and calculability of a single set of nationwide rules. Even without the public giving up on any hard-fought privacy rights recognized by a single state, a company could find that becoming an information fiduciary could be far less burdensome than having to respond to multiple and conflicting state and local obligations.

This feels like a real idea that has value for all parties involved and a legitimate framework for implementation. I don’t know that it will ever come to pass, but I’m excited to continue paying attention to the conversations around it.

April 9, 2018

Why Privacy Matters

I like this explanation of the importance of privacy from Glenn Greenwald, who has been the main outlet for all things Snowden:

And let me just say one other thing: sometimes it is hard to convey why privacy is so important, because it’s kind of ethereal. But I think people instinctively understand the reason it’s so important, because they do things like put passwords on their email accounts and locks on their bedroom and bathroom doors, which reflect a desire to keep others out of certain spaces where they can go to be alone. That’s a way of making clear that they value privacy. And the reason privacy is so critical is because it’s only when we know we’re not being watched that we can engage in creativity, or dissent, or pushing the boundaries of what’s deemed acceptable. A society in which people feel like they’re always being watched is one that breeds conformity, because people will avoid doing anything that can prompt judgment or condemnation. This is a crucial part of why a surveillance state is so damaging — it’s why all tyrannies know that watching people is the key to keeping them in line. Because only when you’re not being watched can you really be a free individual.

July 24, 2013

Santa’s Privacy Policy

Too good not to share: McSweeney’s has Santa’s Privacy Policy (originally published in 2010). A snippet:

The letters also provide another important piece of information—fingerprints. We run these through databases maintained by the FBI, CIA, NSA, Interpol, MI6, and the Mossad. If we find a match, it goes straight on the Naughty List. We also harvest a saliva sample from the flap of the envelope in which the letter arrives in order to establish a baseline genetic identity for each correspondent. This is used to determine if there might be an inherent predisposition for naughtiness. A detailed handwriting analysis is performed as part of a comprehensive personality workup, and tells us which children are advancing nicely with their cursive and which are still stubbornly forming block letters with crayons long past the age when this is appropriate.

December 24, 2012

Stalked Redux

A few years ago I had a story written about me. The premise was that a journalist did a bunch of research on me and then approached me with everything she had collected to get my reaction. Unfortunately, the publication it ran in is now defunct, so she reposted it over at Forbes today with the following intro:

I wrote this magazine piece back in 2009 when I was first delving into privacy issues in the digital age. It was published in 2010 in the Assembly Journal. However, a Twitter user recently pointed out to me that the piece is no longer online… which is rather sad for a piece about online privacy. “Confessions of an Online Stalker” was the headline my editors chose. I would have named it “Confessions of a Digital Lurker.” Here it is in all of its dated glory.

At the time I actually wrote a response to her piece, which was also published in the magazine and is thus also now missing from the web. Since Kashmir, the author, has reposted her piece, I thought it might also be a good idea to repost my response:

The last issue of the magazine featured a piece titled “Confessions of an Online Stalker.” Its author, Kashmir Hill, “stalked” me, collecting all the information publicly available on the web about my life and presenting me with my dossier over a cup of coffee in Soho. Included were some basic facts (age and address), interests (most-listened-to songs and books on my Amazon wish-list) and the occasional tidbit that was unknown to me (the value of my parents’ house, for instance).

When I was asked to write a response, I wasn’t sure one was warranted. The article actually captures my reaction fairly well. I wasn’t all that surprised about any of the information the author dug up, as I could identify the source of almost all her data points. And while it certainly is a bit uncomfortable to see them (or hear them) together, given the motive of the exercise, it was not all that frightening. But there is a bit of context I’d like to add: it’s the sort of story that raw data doesn’t always tell.

I work and live on the web. I play with just about every new site I can get my hands on and post a fair amount of information that I don’t consider to be particularly personal about myself. I started a blog six years ago because I was writing for a magazine and found I had more to say than could fit in my 2,500-word monthly limit. I explored the medium and posted things that I now look back on and smack myself in the head over because of their asininity. But back then, as well as now, my job was to understand, or at least to have an opinion on, the state of digital media, on how and why people use the web.

But all of that sounds much more clinical than the reality of the situation. It’s been my opinion for some time that by putting things out into the world for public view, I’ve made my life more interesting (mostly by the friends that content has connected me to). In fact, I met my wife because of my blog. Let me explain.

On July 12, 2006 I wrote an entry asking if anyone from my blog world wanted to meet up in New York and have coffee. I got one response from a guy named Piers who ran (and still runs) a trend blog called PSFK. From there we developed an idea for a coffee meetup we decided to call likemind. About a month later, after holding two likeminds, a blogger in London named Russell Davies wrote a post praising the idea. In the comments to that post, a woman named Johanna mentioned that she was moving to New York City and was excited to go to likemind. Attached to her comment was her URL, which I followed to an email address that I used to welcome her to the city and invite her to likemind. Three months later, when I was on the hunt for a new job, I mentioned it to Johanna, who had since moved north, attended a few likeminds and become a friend. She suggested that I come speak to the folks at the company she worked for: Naked Communications, a marketing strategy firm that was started in London. I went for it and two months later (it’s February 2007 at this point) I announced I was joining the company as a strategist. I became friends with, and later started dating, Leila Fernandes, another strategist at the company. Two months ago we were married in Queens. Johanna helped us celebrate.

All of that is a long way of saying I see a lot of value in the sharing of information online. I am not in the camp that believes technology is pulling us apart; rather, I think it offers us never-before-possible opportunities to come together and meet people we’d otherwise never have a chance to meet. I also don’t reside on the side that argues privacy is dead. While the author was able to collect a lot of information on me, there wasn’t much in there I hadn’t chosen to post myself with an understanding of the implications (not to mention the vast majority of it could have been collected in the pre-web days, albeit in a much more time-consuming manner).

One of my favorite digital thinkers, Danah Boyd, recently had this to say on the subject:

Privacy isn’t a technological binary that you turn off and on. Privacy is about having control of a situation. It’s about controlling what information flows where and adjusting measures of trust when things flow in unexpected ways. It’s about creating certainty so that we can act appropriately. People still care about privacy because they care about control. Sure, many teens repeatedly tell me “public by default, private when necessary” but this doesn’t suggest that privacy is declining; it suggests that publicity has value and, more importantly, that folks are very conscious about when something is private and want it to remain so. When the default is private, you have to think about making something public. When the default is public, you become very aware of privacy. And thus, I would suspect, people are more conscious of privacy now than ever. Because not everyone wants to share everything to everyone else all the time.

The control Boyd was referring to is probably slightly easier for me than for most. When something happens like Facebook’s latest changes to their privacy settings, about thirty of the hundreds of blogs and other news sources I subscribe to write in-depth stories on the implications. Within hours of the changes I had been to the new settings page and tweaked everything to my liking, including deciding to keep certain information out of the public eye. I recognize this is not the norm, but it’s this kind of awareness that shapes my views on the sharing of information.

At the end of the day a breach of privacy requires some reasonable expectation that something would be kept private. Not only did I not have that expectation, but for much of the information I put on the web I hope for exactly the opposite.

October 8, 2012

Photos of Zuck

This is sort of interesting. Gizmodo is paying $20 per photo for new pictures of Mark Zuckerberg (and asking some real questions about privacy):

For someone who doesn’t believe in privacy, Mark Zuckerberg is awfully guarded. He has made Facebook public by default, and yet his own public posts are few, far-between, and tend towards the anodyne. Facebook’s share-everything CEO even went so far as to keep his recent wedding a secret from his own friends, presumably to avoid public scrutiny. For all his bluster about public sharing, Zuckerberg reveals very little of himself. That needs to change.

June 7, 2012