I’ve set a reasonably modest goal for myself of writing 10 blog posts in April. Let’s see if I can get back on this bike (since I really miss it). This is post number 4.
There’s a lot of stuff I read that I either haven’t gotten a chance to write up yet or that doesn’t warrant its own post. This is meant to be my space for all that.
On the one hand, the most popular use of the word mining isn’t really mining at all (“Bitcoin mining, involving pure information rather than raw materials, is just a sexier term (is mining sexy?) for a process that is more like Sudoku puzzles for computers than digging holes in the ground.”); on the other hand, it’s 13x cheaper to extract metals like copper and gold from discarded electronics than to actually mine them. (As an aside, go subscribe to Kneeling Bus — it’s great.)
I’ve got to write something bigger about this, but I don’t think people understand just how little we can glean from what’s inside a deep learning neural network. This isn’t about Uber or Tesla specifically, but articulates the problem well:
Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
With that said, there are lots of people trying to get a peek inside.
Speaking of self-driving cars, can we stop talking about the trolley problem … please?!?!?
And one last thing on self-driving cars and machine learning:
At the end of last year I ran across this fun excerpt of life on the road as a long-haul mover. I just ripped through the book in a few days. It’s a nice break from biographies of Claude Shannon and parables about bottlenecks in IT.
Here’s a simple sounding question that has a lot more to it: Why did we evolve the ability to reason? “Objectively, a reasoning mechanism that aims at sounder knowledge and better decisions should focus on reasons why we might be wrong and reasons why other options than our initial hunch might be correct. Such a mechanism should also critically evaluate whether the reasons supporting our initial hunch are strong. But reasoning does the opposite. It mostly looks for reasons that support our initial hunches and deems even weak, superficial reasons to be sufficient.”
A few weeks ago I had a long conversation about the relationship between technology and democracy, the core question being whether or not the two were at odds. I haven’t been able to get it out of my brain since. This piece on weaponized narrative is an interesting addition to the conversation as was this Long Now talk asking “Can Democracy Survive the Internet?” (You can find this on their podcast as well.)
Speaking of unintended consequences, Waze keeps sending people down the steepest hill in LA and they keep crashing into things.
Speaking of podcasts, I’ve been really enjoying Felix Salmon’s Slate Money podcast, and the latest episodes on the new economics of Hollywood are a great primer on how things have changed.
A voter thinking of popping to the polls and then trying out a new pizzeria would be perfectly rational in checking out TripAdvisor, rather than the party manifestos. This is because her vote will almost certainly not make any difference to her life, but her choice of restaurant almost certainly will. We vote because we see it as a civic duty, or a way of being part of something bigger than ourselves. Few people go to the polls under the illusion that they will be casting the deciding vote.
The full explanation for why MIT broke ties with Nectome, a company that promises to store your brain for you (by killing you), is pretty amazing. Here’s a bit to whet your appetite: “Regarding the second point: currently, we cannot directly measure or create consciousness. Given that limitation, how can one say if, for example, a computer or a simulation is conscious?”
While Nectome can legally kill you to store your brain, Starbucks has to warn you about the cancer risks of coffee in California. Statistician David Spiegelhalter presents a pretty good argument for why this is ridiculous.
Just today I ran into this interesting essay on the end of authenticity. I particularly enjoyed this bit about Brooklyn (where I happen to live):
At the same time “Brooklyn” has become America’s most significant cultural export. It’s not only 3rd- and 4th-tier American cities that adopted the aesthetic. Among many other major international cities, the Shoreditch area of London developed its own version of Contemporary Conformism, as did Daikanyama in Tokyo (where the “Brooklyn” brand possesses cultural cachet as an update to the Americana aesthetic Japanese subcultures have fetishized for 70 years). Of course, America’s other main export during this time was Silicon Valley startup culture, and the two found a perfect union and perfect distribution channel in Airbnb and WeWork.
And, last but not least, I wrote three blog posts:
- Why Coke Cost a Nickel for 70 Years Video Style
- The Fermi Paradox
- Why Videogames Tend Towards Post-Apocalyptic
Thanks for reading. Enjoy your weekend.