It’s been a while since I did a Remainders post so I figured I’d throw one together. In theory it’s all the other stuff I didn’t get a chance to blog about. In reality, it’s pretty much everything I’ve been reading that isn’t about mental models/frameworks (and even some of that). You can find previous versions filed under Remainders and, as always, if you enjoy the writing, please subscribe by email and pass it around.
Let’s start with some books. Here’s what I’ve read in the last three months (in order of when they were read):
Countdown to Zero Day (Kim Zetter): As far as I know this is the definitive book on Stuxnet, the digital weapon that targeted the Iranian nuclear facility at Natanz.
Complexity: A Guided Tour (Melanie Mitchell): Easily one of my favorite books of the year. I’ve read lots about complexity theory, but nothing that pulled all the various strings together so well. (This also helped send me down a deep physics rabbit hole that I’ve yet to emerge from.)
A Brief History of Time (Stephen Hawking): If you find yourself in a physics rabbit hole, this seems like something worth reading …
Dreamtigers (Jorge Luis Borges): I read about this in the Borges interview book. He basically explained that his publisher asked for a book, so he collected a bunch of unpublished poems and stories that were sitting around his house and stuck them together.
Okay, onto some other reading, etc. …
This Wired piece about the possibility of a coming “AI cold war” has two particularly interesting strings in it: One is a fundamental question about the nature of technology and its relationship with democracy (put simply: is the internet better structured to support or defeat democratic ideals) and the other is about how China (and the US) will use 5G as a power play (“If you are a poor country that lacks the capacity to build your own data network, you’re going to feel loyalty to whoever helps lay the pipes at low cost. It will all seem uncomfortably close to the arms and security pacts that defined the Cold War.”)
Benoît Mandelbrot (of fractal fame) is apparently responsible (at least in part) for the introduction of passwords at IBM. From When Einstein Walked with Gödel (which I’m reading now): “When his son’s high school teacher sought help for a computer class, Mandelbrot obliged, only to find that soon students all over Westchester County were tapping into IBM’s computers by using his name. ‘At that point, the computing center staff had to assign passwords,’ he says. ‘So I can boast - if that’s the right term - of having been at the origin of the police intrusion that this change represented.’”
Also from the same book: the low numerals were originally drawn to depict the quantity they represent. Since my summary makes no sense, here’s the quote from the book: “Even Arabic numerals follow this logic: 1 is a single vertical bar; 2 and 3 began as two and three horizontal bars tied together for ease of writing.”
A Rochester garbage plate “is your choice of cheeseburger, hamburger, Italian sausages, steak, chicken, white or red hots*, served on top of any combination of home fries, french fries, baked beans, and/or macaroni salad.”
Rahimi believes contemporary machine learning models’ successes — which are mostly based on empirical methods — are plagued with the same issues as alchemy. The inner mechanisms of machine learning models are so complex and opaque that researchers often don’t understand why a machine learning model can output a particular response from a set of data inputs, aka the black box problem. Rahimi believes the lack of theoretical understanding or technical interpretability of machine learning models is cause for concern, especially if AI takes responsibility for critical decision-making.
Uber’s business plan, like that of so many other digital unicorns, is based on extracting all the value from the markets it enters. This ultimately means squeezing employees, customers, and suppliers alike in the name of continued growth. When people eventually become too poor to continue working as drivers or paying for rides, UBI supplies the required cash infusion for the business to keep operating.
Annnnnd here’s my 10th blog post of the month. Hit my goal. (Might even make it to 11 if I have a burst of inspiration.) Thanks again for reading and encouragement. I’m going for 10 again in May. As usual, feedback welcome and you can subscribe by email here (for those of you reading this via email, thanks and sorry about the wasted words, it just emails exactly what I put on the web).
Black infants in America are now more than twice as likely to die as white infants — 11.3 per 1,000 black babies, compared with 4.9 per 1,000 white babies, according to the most recent government data — a racial disparity that is actually wider than in 1850, 15 years before the end of slavery, when most black women were considered chattel. In one year, that racial gap adds up to more than 4,000 lost black babies. Education and income offer little protection. In fact, a black woman with an advanced degree is more likely to lose her baby than a white woman with less than an eighth-grade education.
By rendering a not-too-distant future, Kubrick set himself up for a test: thirty-three years later, his audiences would still be around to grade his predictions. Part of his genius was that he understood how to rig the results. Many elements from his set designs were contributions from major brands—Whirlpool, Macy’s, DuPont, Parker Pens, Nikon—which quickly cashed in on their big-screen exposure. If 2001 the year looked like “2001” the movie, it was partly because the film’s imaginary design trends were made real.
The show offers a clever finger trap for critics. Call a hit dangerous and you imply that it’s really quite sexy. And, in fact, the seventh episode, which I won’t spoil, pulls a daring switcheroo, one that may offer a new lens through which to interpret Roseanne’s behavior. It’s not enough. The reboot nods at complexity without delivering—there are good people on many sides, on many sides. If you squint, you might see the show’s true hero as Darlene (Sara Gilbert), a broke single mom forced to move in with that charismatic bully Roseanne. But, if that were so, we might understand Darlene’s politics, too. We’d more fully feel her pain and also that of her two kids, transplanted to a place they find foreign and unwelcoming.
This is where the promise of artificial intelligence breaks down. At its heart is an assumption that historical patterns can reliably predict future norms. But the past—even the very recent past—is full of words and ideas that many of us now find repugnant. No system is deft enough to respond to the rapidly changing varieties of cultural expression in a single language, let alone a hundred. Slang is fleeting yet powerful; irony is hard enough for some people to read. If we rely on A.I. to write our rules of conduct, we risk favoring those rules over our own creativity. What’s more, we hand the policing of our discourse over to the people who set the system in motion in the first place, with all their biases and blind spots embedded in the code. Questions about what sorts of expressions are harmful to ourselves or others are difficult. We should not pretend that they will get easier.
On the other end of the sporting spectrum, the Times got hold of tapes from a meeting between players and owners, and I can’t imagine anything making the NFL look worse. Here’s a small example from Buffalo Bills owner Terry Pegula: “For years we’ve watched the National Rifle Association use Charlton Heston as a figurehead … We need a spokesman.” These guys are such bad news.
I’ve set (what I originally thought was) a reasonably modest goal for myself of writing 10 blog posts in April. Two more to go with one week left. Thanks for following along and please let me know what you think. Also, you can now subscribe to the blog by email. Sign up here.
Alright alright alright. Quick status check for me: Spent the week out in SF for Percolate’s Transition Conference where I gave a talk about how to use supply chain thinking and the Theory of Constraints to deal with the content marketing bottleneck (I’ll share the video when it gets online). We’ll be in London in early June, so if you’re around and interested in coming please reach out. I just finished the book Soldiers of Reason which is about the history of the RAND Corporation (I’ve got a half-written post I’ll try to get out about it). I’m taking a break from game theory and nuclear warfare with Andy Weir’s new book Artemis (which I haven’t heard was great, but I liked The Martian a lot and my library hold came through the day I finished the other book). Now onto the links.
On a more serious tip, The New Yorker had my favorite profile of Sorrell. For what it’s worth, I met him once or twice and emailed with him a few times, and my takeaways were a) he knows his company and the ad industry inside out, b) he emailed me back immediately, and c) he was a good performer (it was a lot of fun to watch him interview folks on stage and make them wiggle a bit, especially media owners).
Two really excellent long-form pieces from this week:
That isn’t to say the hearings went over perfectly, even at home. One mystifying thing to employees was that Zuckerberg frequently seemed to come up short when asked for details about the advertising business. When pressed by Roy Blunt (R-Missouri)—who, Zuckerberg restrained himself from pointing out, was a client of Cambridge Analytica—Facebook’s CEO couldn’t specify whether Facebook tracks users across their computing devices or tracks offline activity. He seemed similarly mystified about some of the details about the data Facebook collects about people. In total, Zuckerberg promised to follow up on 43 issues; many of the most straight-ahead ones were details on how the ad business works. It’s possible, of course, that Zuckerberg dodged the questions because he didn’t want to talk about Facebook’s tracking on national TV. It seemed more likely to some people on the inside, however, that he genuinely didn’t know.
I’m a really big fan of security analyst/guru/cryptographer Bruce Schneier. I’ve been reading his blog for years and actually got a chance to meet him in November at a talk he did for a very small room of us on the NSA and just about anything else anyone wanted to talk about. Schneier is one of the people Edward Snowden allowed access to his documents, which obviously gives him a particularly interesting point of view on the subject. His basic take was best summarized in three statements: (1) this isn’t overly surprising and won’t be going away anytime soon, (2) the very best thing to come out of all this is that the private companies involved have been exposed and some, like Cisco, have seen their business fundamentally hurt, and (3) everything else aside, the one thing to know about everything the NSA was/is doing is that it doesn’t work. The last is obviously the most damning (and Schneier is definitely not the only one saying this). This method of collecting everything in the hope of finding something just doesn’t work as well as good, old-fashioned detective work.
Interestingly, I was talking about the Snowden/NSA stuff with a friend from DC who mentioned that the story hadn’t gotten a ton of coverage there (as compared to the government shutdown or Healthcare.gov) because it’s perceived as an issue people don’t really have a problem with. Basically, we have seen over and over again that we’re willing to throw away liberties for our “freedom” and to fight “terrorism.” Not much to say on this one, just an interesting take.
Probably the biggest problem with the public’s perception of security is that things are secure as a default. We see this a lot in the voting industry. The voting machine companies will come up with an internet voting machine or electronic voting machine and the onus will be on the security company to prove that it’s broken. It’ll be assumed secure, and that’s just nonsense. When you see a new system, you have to assume it’s insecure, unless you can prove it’s secure. The public perception is reversed. “I have a door lock, it’s secure unless you show me you can break it.” That’s not right—it’s insecure unless you can show me that it is secure.
The second is on the security threats Schneier finds most worrying:
I’m most worried about potential security vulnerabilities in the powerful institutions we’re trusting with our data, with our security. I’m worried about companies like Google and Microsoft and Facebook. I’m worried about governments, the US and other governments. I’m worried about how they are using our data, how they’re storing our data, and what happens to it. I’m less worried about the criminals. I think we’ve kinda got cyber-crime under control, it’s not zero but it never will be. I’m much more worried about the powerful abusing us than the un-powerful abusing us.