It’s been a while since I did a Remainders post, so I figured I’d throw one together. In theory it’s all the other stuff I didn’t get a chance to blog about. In reality, it’s pretty much everything I’ve been reading that isn’t about mental models/frameworks (and even some of that). You can find previous versions filed under Remainders and, as always, if you enjoy the writing, please subscribe by email and pass it around.
Let’s start with some books. Here’s what I’ve read in the last three months (in the order I read them):
Countdown to Zero Day (Kim Zetter): As far as I know this is the definitive book on Stuxnet, the digital weapon that targeted the Iranian nuclear facility at Natanz.
Complexity: A Guided Tour (Melanie Mitchell): Easily one of my favorite books of the year. I’ve read lots about complexity theory, but nothing that pulled all the various strings together so well. (This also helped send me down a deep physics rabbit hole that I’ve yet to emerge from.)
A Brief History of Time (Stephen Hawking): If you find yourself in a physics rabbit hole, this seems like something worth reading …
Dreamtigers (Jorge Luis Borges): I read about this in the Borges interview book. He basically explained that his publisher asked for a book, so he collected a bunch of unpublished poems and stories that were sitting around his house and stitched them together.
Okay, onto some other reading, etc. …
This Wired piece about the possibility of a coming “AI cold war” has two particularly interesting strings in it: One is a fundamental question about the nature of technology and its relationship with democracy (put simply: is the internet better structured to support or defeat democratic ideals?) and the other is about how China (and the US) will use 5G as a power play (“If you are a poor country that lacks the capacity to build your own data network, you’re going to feel loyalty to whoever helps lay the pipes at low cost. It will all seem uncomfortably close to the arms and security pacts that defined the Cold War.”)
Benoît Mandelbrot (of fractal fame) is apparently responsible (at least in part) for the introduction of passwords at IBM. From When Einstein Walked with Gödel (which I’m reading now), “When his son’s high school teacher sought help for a computer class, Mandelbrot obliged, only to find that soon students all over Westchester County were tapping into IBM’s computers by using his name. ‘At that point, the computing center staff had to assign passwords,’ he says. ‘So I can boast-if that’s the right term-of having been at the origin of the police intrusion that this change represented.'”
Also from the same book: the shapes of the low numerals were originally meant to depict the quantities they represent. Since that explanation makes no sense on its own, here’s the quote from the book: “Even Arabic numerals follow this logic: 1 is a single vertical bar; 2 and 3 began as two and three horizontal bars tied together for ease of writing.”
A Rochester garbage plate “is your choice of cheeseburger, hamburger, Italian sausages, steak, chicken, white or red hots*, served on top of any combination of home fries, french fries, baked beans, and/or macaroni salad.”
Rahimi believes contemporary machine learning models’ successes — which are mostly based on empirical methods — are plagued by the same issues as alchemy. The inner mechanisms of machine learning models are so complex and opaque that researchers often don’t understand why a model produces a particular output from a given set of inputs, aka the black box problem. Rahimi believes the lack of theoretical understanding or technical interpretability of machine learning models is cause for concern, especially if AI takes responsibility for critical decision-making.
Uber’s business plan, like that of so many other digital unicorns, is based on extracting all the value from the markets it enters. This ultimately means squeezing employees, customers, and suppliers alike in the name of continued growth. When people eventually become too poor to continue working as drivers or paying for rides, UBI supplies the required cash infusion for the business to keep operating.
West calls his struggle the right to be a “free thinker,” and he is, indeed, championing a kind of freedom—a white freedom, freedom without consequence, freedom without criticism, freedom to be proud and ignorant; freedom to profit off a people in one moment and abandon them in the next; a Stand Your Ground freedom, freedom without responsibility, without hard memory; a Monticello without slavery, a Confederate freedom, the freedom of John C. Calhoun, not the freedom of Harriet Tubman, which calls you to risk your own; not the freedom of Nat Turner, which calls you to give even more, but a conqueror’s freedom, freedom of the strong built on antipathy or indifference to the weak, the freedom of rape buttons, pussy grabbers, and fuck you anyway, bitch; freedom of oil and invisible wars, the freedom of suburbs drawn with red lines, the white freedom of Calabasas.
This hits close to home: Your coffee addiction, by decade. “‘No sugar,’ you declare. ‘I take it black.’ Shoot a side-eyed glance at that kid over there with his blended-ice drink—amateur hour. Sorry they don’t serve Shirley Temples, geez.”
On the podcast front, I’ve been enjoying Real Famous, which features interviews with ad people (many of whom are my friends). Paul Feldwick, author of the awesome book The Anatomy of Humbug, is an excellent listen.
Multitasking, in short, is not only not thinking, it impairs your ability to think. Thinking means concentrating on one thing long enough to develop an idea about it. Not learning other people’s ideas, or memorizing a body of information, however much those may sometimes be useful. Developing your own ideas. In short, thinking for yourself. You simply cannot do that in bursts of 20 seconds at a time, constantly interrupted by Facebook messages or Twitter tweets, or fiddling with your iPod, or watching something on YouTube.
You could say the trouble for Rodger started when, around puberty, he began to know—and, in writing, recite—the first and last names of every boy he considered a sexual competitor, while at the same time referring to girls almost always collectively. Girls. Pretty girls. Pretty blond girls. Only three girls (or perhaps, by this time, women) are listed by name in My Twisted World, vis-a-vis dozens of boys (I’m not including family members). By the end of his writing and life, he’s failed to distinguish between any groups of humans at all, to the point where he considers his 6-year-old brother yet another budding Romeo who, because “he will grow up enjoying the life [Rodger has] craved for,” must die. “Girls will love him,” Rodger says. “He will become one of my enemies.” Rodger begs our most individuating question—“why don’t you love me?”—by proving himself repeatedly unable to individuate another. In erotic coupling, the ego finds relief in its equal. But had Elliot Rodger ever found his equal and opposite in another human being, he would, by all indications, have been repulsed. Reading him, I kept remembering Rooney Mara’s kiss-off in The Social Network: “You are going to go through life thinking that girls don’t like you because you’re a nerd.1 [Or short. Or half-Asian. Or bad at football, or not a real ladies’ man, or somehow else disappointing to the ur-dads of America.] And I want you to know, from the bottom of my heart, that isn’t true. It’ll be because you’re an asshole.”
Chunking was originally conceptualized in the groundbreaking work of Herbert Simon in his analysis of chess—chunks were envisioned as the varying neural counterparts of different chess patterns. Gradually, neuroscientists came to realize that experts such as chess grand masters are experts because they have stored thousands of chunks of knowledge about their area of expertise in their long-term memory. Chess masters, for example, can recall tens of thousands of different chess patterns. Whatever the discipline, experts can call up to consciousness one or several of these well-knit-together, chunked neural subroutines to analyze and react to a new learning situation. This level of true understanding, and ability to use that understanding in new situations, comes only with the kind of rigor and familiarity that repetition, memorization, and practice can foster.
The computer takes a reading from a Geiger counter that measures radiation in the surrounding air, specifically the radioactive isotope Americium-241. The reading is expressed as a long number of code; that number gives the generator its true randomness. The random number is called the seed, and the seed is plugged into the algorithm, a pseudorandom number generator called the Mersenne Twister. At the end, the computer spits out the winning lottery numbers.
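The pipeline described in that quote — a physical entropy source producing a seed, which is then fed into a Mersenne Twister to draw the numbers — can be sketched in a few lines. CPython’s `random.Random` happens to be a Mersenne Twister, so it stands in naturally; the pool size, pick count, and the stand-in seed below are illustrative, not the actual lottery’s parameters:

```python
import random

def draw_lottery_numbers(seed: int, pool_size: int = 59, picks: int = 6) -> list[int]:
    """Seed a Mersenne Twister PRNG and draw unique lottery numbers."""
    rng = random.Random(seed)  # CPython's Random is a Mersenne Twister
    # Sample without replacement from 1..pool_size, like a ball draw
    return sorted(rng.sample(range(1, pool_size + 1), picks))

# In the real system the seed comes from a Geiger counter reading of
# ambient radiation; here a fixed integer stands in, which also makes
# the draw reproducible (the whole point of "pseudo" randomness).
geiger_seed = 20180514
print(draw_lottery_numbers(geiger_seed))
```

Note the design consequence the article hinges on: given the same seed, the Twister produces the same “winning” numbers every time, which is exactly why the true randomness of the physical seed matters so much.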
If you haven’t heard the Google Duplex calls, go have a listen. Some interesting comments from Twitter:
Jessi Hempel: “Reading about Google’s Duplex: Design is a series of choices, and creating voice tech designed to let humans trick other humans is a choice humans are making, not an inevitable consequence of technology’s evolution.”
Stewart Brand: “This sounds right. The synthetic voice of synthetic intelligence should sound synthetic. Successful spoofing of any kind destroys trust. When trust is gone, what remains becomes vicious fast.”
The New York Times’s Weinstein report was a believability project years in the making: it systematized abuse, turned it into a pattern your eye could follow. There were interviews, emails, audio recordings, legal documents; facts were double- and triple-checked. But its paradoxical consequence was to set the bar far too high for every subsequent story whose breaking it had made possible. What’s a little masturbation between friends when the king of Hollywood kingmakers had employed former agents of the Israel Defense Forces to silence his accusers? In one final act of gaslighting, Weinstein made all other abuse look not so bad and all other evidence look not so good.