Eli Pariser, who started MoveOn.org, has a new book out that I keep running into called The Filter Bubble. As I understand it (and this is most likely a very surface reading, since I haven't actually read it), Pariser tries to educate the world about the dangers (or at least the impacts) of all the algorithms that filter our digital lives for us.
In an interview over at Brain Pickings he digs into some of what the book discusses. His points seem to center on two things: first, the false notion that the internet is killing serendipity, and second, the true notion that there are privacy implications to all the data being collected to power these filters (which we all knew).
What’s more interesting to me than questions about whether the web is killing serendipity (it’s not) is why algorithms are written the way they are (a question very few seem to ask). Pariser brings up an interesting example with Netflix:
Netflix uses an algorithm called Root Mean Squared Error (RMSE, to geeks), which basically calculates the “distance” between different movies. The problem with RMSE is that while it’s very good at predicting what movies you’ll like — generally it’s under one star off — it’s conservative. It would rather be right and show you a movie that you’ll rate a four, than show you a movie that has a 50% chance of being a five and a 50% chance of being a one. Human curators are often more likely to take these kinds of risks.
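The conservatism Pariser describes isn’t mysterious once you look at the math: squared error punishes variance, so a risky movie gets penalized even when you predict it as well as possible. Here’s a toy sketch of that effect — my own illustration, not Netflix’s code — comparing a sure four-star movie against Pariser’s 50/50 movie:

```python
import math

def best_rmse(outcomes):
    """RMSE of the best constant prediction (the mean) for a movie
    whose true rating follows the given (rating, probability) pairs."""
    mean = sum(r * p for r, p in outcomes)
    mse = sum(p * (r - mean) ** 2 for r, p in outcomes)
    return math.sqrt(mse)

safe = [(4, 1.0)]                # the movie you'll definitely rate a four
risky = [(5, 0.5), (1, 0.5)]     # Pariser's 50% five, 50% one movie

print(best_rmse(safe))   # 0.0 -- perfectly predictable, the metric loves it
print(best_rmse(risky))  # 2.0 -- an unavoidable two-star error, no matter what you predict
```

The risky movie contributes an RMSE of 2 even under the optimal prediction (its mean, 3), so any system tuned to minimize RMSE learns to steer you toward predictable choices. The conservatism isn’t a personality trait of the engineers; it falls straight out of the metric they picked — which, of course, was itself a choice.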
My question is, “why?” Why has Netflix (and many others) decided that conservative is the only direction for an algorithm? Why do recommendation systems tend to recommend closest neighbors when often you already know about those things and have actively chosen not to pay attention to them? Is it that the people writing these things are worried that people won’t accept, from a machine, the sort of risk taking that Pariser sees in human curators? I mean, I think I agree with that, but has anyone tested it? (It could turn out the answers to all these questions are in the book, which I haven’t read, in which case I apologize.)
Algorithms are nothing more than a whole bunch of people’s opinions transferred into a mathematical equation. Understanding that is incredibly important, and I suspect it’s a major reason Pariser is bringing up the topic. There is a hidden politics in the algorithms that drive our world. Those politics are not necessarily right or left, but that doesn’t matter. Google for years has been driving home this (false) idea that the black box is truth, and only recently did they publicly come clean, admitting that it was just their opinions on what’s good and bad driving their algorithmic decisions. And maybe that’s not a bad thing; maybe they do represent the best interests of all of us internet citizens. But then again, maybe they don’t, and at the very least it seems like people should know what’s driving their results.
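To make “opinions transferred into an equation” concrete, here’s a deliberately fake ranking function. Every weight is somebody’s editorial judgment frozen into arithmetic — this is a caricature I made up, not any real search engine’s formula:

```python
# Each weight below is an opinion about what "good" means.
WEIGHTS = {
    "freshness": 0.2,    # opinion: newer is somewhat better
    "popularity": 0.5,   # opinion: what the crowd likes matters most
    "relevance": 0.3,    # opinion: query match matters less than popularity
}

def score(doc):
    """Rank a document by a weighted sum of its features."""
    return sum(WEIGHTS[k] * doc[k] for k in WEIGHTS)

# Same two documents, judged by those opinions:
a = {"freshness": 0.9, "popularity": 0.1, "relevance": 0.8}  # fresh and relevant
b = {"freshness": 0.1, "popularity": 0.9, "relevance": 0.5}  # merely popular

print(score(a), score(b))  # 0.47 vs 0.62 -- popularity-heavy weights crown b
```

Shift the weights toward relevance and document a wins instead: same data, different winner, and the difference is nothing but the politics baked into three numbers nobody ever sees.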