I started and stopped this post four times as I tried to find the right way to open. Eventually I got tired of searching and figured it was easiest to just jump off the note I wrote to myself in Google Keep after the idea popped into my head:
That might not make much sense (yet), but like any good note it captured enough of the concept that I remembered what I was thinking when I wrote it. I jotted it down while prepping for a webinar I did last week offering up some predictions for marketing in 2019. I was getting worked up (as I’m wont to do) about how much it bugs me when everyone in marketing talks about AI as if they have any idea what it really means or what its implications are.1 Someone asked why it bothered me so much, and my answer, which kind of just poured out, was that once everyone starts agreeing about something (and saying it endlessly), it becomes less and less meaningful. That’s not just some soft sense of the word “meaningful,” though: it literally carries less information.
A few months ago I wrote about Claude Shannon and information theory. Shannon wrote a seminal paper in 1948 called “A Mathematical Theory of Communication”. In it he defined the measure of information as, effectively, its unexpectedness (he called it entropy). The more random, the more information. This is precisely what bits measure (you can think of it as the number of yes/no questions it would take to get to the answer). What happens when you compress a photo? You take away the randomness. That’s why otherwise complex surfaces like sky or skin might come to look a bit pixelated: The compression algorithm is constraining the number of hues available in order to bring down the entropy (and therefore the file size) of the whole photo.
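If you want to see this concretely, Shannon’s formula is simple enough to sketch in a few lines of Python (a toy illustration I’m adding here, not something from Shannon’s paper — it treats each character as a symbol and measures average bits per symbol):

```python
import math
from collections import Counter

def shannon_entropy(text):
    """Average bits per symbol: -sum(p * log2(p)) over symbol frequencies."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A string that just repeats one symbol is perfectly predictable: 0 bits per symbol.
print(shannon_entropy("aaaaaaaa"))  # 0.0
# Eight distinct, equally likely symbols: 3 bits each, since 2^3 = 8
# (three yes/no questions pin down which symbol it is).
print(shannon_entropy("abcdefgh"))  # 3.0
```

The buzzword analogy falls out directly: the more everyone says the same thing, the more the distribution skews toward one “symbol,” and the closer the entropy gets to zero.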
What does that mean for marketing buzzwords?
Well, as everyone starts to say the same thing and continues to offer little behind it, it becomes more and more expected and, therefore, starts to carry less and less information. When people layer real examples or alternative ideas on top of those buzzwords, they return some randomness (and therefore information) to the concept. At their best, marketing contrarians are attempting to breathe some life into words and ideas that have otherwise lost their information content.
I don’t really like to think of myself as a contrarian because I think that often carries with it some notion of being different for the sake of being different (or trolling). Rather, I think if everyone is following one strategy or idea, the value of being the next person to jump on board is incrementally less (especially when that idea is poorly defined/understood). In a way it’s like an anti-network effect.
Back to Sam Hinkie’s letter. When it leaked it provided an amazing view into the psyche of someone who was willing to be a pariah. In it he paints an interesting picture of the connection between contrarianism and traditionalism.
Here he is on contrarianism:
To develop truly contrarian views will require a never-ending thirst for better, more diverse inputs. What player do you think is most undervalued? Get him for your team. What basketball axiom is most likely to be untrue? Take it on and do the opposite. What is the biggest, least valuable time sink for the organization? Stop doing it. Otherwise, it’s a big game of pitty pat, and you’re stuck just hoping for good things to happen, rather than developing a strategy for how to make them happen.
And on traditionalism:
While contrarian views are absolutely necessary to truly deliver, conventional wisdom is still wise. It is generally accepted as the conventional view because it is considered the best we have. Get back on defense. Share the ball. Box out. Run the lanes. Contest a shot. These things are real and have been measured, precisely or not, by thousands of men over decades of trial and error. Hank Iba. Dean Smith. Red Auerbach. Gregg Popovich. The single best place to start is often wherever they left off.
Let’s bring it back to buzzwords.
So basically Hinkie’s argument is that the most appropriate way to be a contrarian is to also be a traditionalist: To be a respectful student of the underlying principles while also constantly probing and questioning whether they still make sense. One of the things that surprises me about the marketing industry is how often people miss this tradeoff. In an attempt to play the contrarian they shun traditional wisdom, but at the same time they repeat empty phrases and approaches at every conference that will let them on stage.
I actually think one of the reasons Byron Sharp’s book How Brands Grow has picked up as much steam as it has is that it strikes a good balance between these things. It’s a contrarian take (loyalty shouldn’t be a goal because it’s an outcome) but at the same time it’s deeply rooted in some traditional marketing ideas (market share, reach, and creativity, to name three). This is a tough balance to strike, but when someone hits the sweet spot it has the opportunity to really resonate.
Unfortunately, most of the time the industry misses the mark by a lot. What we end up with is a bunch of anti-historical/anti-intellectual slogans that get repeated ad infinitum. It’s lots of words and little information.
Here are the notes I had for the question: “Let me start by saying that I predict in 2019 marketers will continue to talk about AI and ML interchangeably with no idea what the words mean. (I’m particularly salty about this.) I would broadly say we will continue to see ML become more available as different kinds of wrappers are made available that enable folks to use it in more of their everyday work. This seems to be some of what Microsoft and Google are doing with smart integrations into their work suites. In general, my take on AI/ML is it’s a classic case of Amara’s law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” In the short term, these things aren’t going to be writing copy and, anyway, that’s not that big a deal. In the long term, the promise of ML is data modeling and coding written by computers, not people. That’s definitely not a 2019 prediction, but it’s the road we’re going down.”↑
I recognize that the word/idea transformation belongs in the buzzword bucket, but if you read about Hinkie and what he did I think it’s a fair use of the word with real meaning. He was a heretic who questioned the most fundamental law of professional sports (“you play every game to win”) and rewrote the path to building a championship contender.↑
It’s been a while since I did a Remainders post so I figured I’d throw one together. In theory it’s all the other stuff I didn’t get a chance to blog about. In reality, it’s pretty much everything I’ve been reading that isn’t about mental models/frameworks (and even some of that). You can find previous versions filed under Remainders and, as always, if you enjoy the writing, please subscribe by email and pass it around.
Let’s start with some books. Here’s what I’ve read in the last three months (in order of when they were read):
Countdown to Zero Day (Kim Zetter): As far as I know this is the definitive book on Stuxnet, the digital weapon that targeted the Iranian nuclear facility at Natanz.
Complexity: A Guided Tour (Melanie Mitchell): Easily one of my favorite books of the year. I’ve read lots about complexity theory, but nothing that pulled all the various strings together so well. (This also helped send me down a deep physics rabbit hole that I’ve yet to emerge from.)
A Brief History of Time (Stephen Hawking): If you find yourself in a physics rabbit hole, this seems like something worth reading …
Dreamtigers (Jorge Luis Borges): I read about this in the Borges interview book. He basically explained that his publisher asked for a book, so he collected a bunch of unpublished poems and stories that were sitting around his house and stuck them together.
Okay, onto some other reading, etc. …
This Wired piece about the possibility of a coming “AI cold war” has two particularly interesting strings in it: One is a fundamental question about the nature of technology and its relationship with democracy (put simply: is the internet better structured to support or defeat democratic ideals?) and the other is about how China (and the US) will use 5G as a power play (“If you are a poor country that lacks the capacity to build your own data network, you’re going to feel loyalty to whoever helps lay the pipes at low cost. It will all seem uncomfortably close to the arms and security pacts that defined the Cold War.”)
Benoît Mandelbrot (of fractal fame) is apparently responsible (at least in part) for the introduction of passwords at IBM. From When Einstein Walked with Gödel (which I’m reading now): “When his son’s high school teacher sought help for a computer class, Mandelbrot obliged, only to find that soon students all over Westchester County were tapping into IBM’s computers by using his name. ‘At that point, the computing center staff had to assign passwords,’ he says. ‘So I can boast, if that’s the right term, of having been at the origin of the police intrusion that this change represented.’”
Also from the same book, the low numerals are meant to be representative of the number of things they are. Since that makes no sense, here’s the quote from the book: “Even Arabic numerals follow this logic: 1 is a single vertical bar; 2 and 3 began as two and three horizontal bars tied together for ease of writing.”
A Rochester garbage plate “is your choice of cheeseburger, hamburger, Italian sausages, steak, chicken, white or red hots*, served on top of any combination of home fries, french fries, baked beans, and/or macaroni salad.”
Rahimi believes contemporary machine learning models’ successes — which are mostly based on empirical methods — are plagued with the same issues as alchemy. The inner mechanisms of machine learning models are so complex and opaque that researchers often don’t understand why a machine learning model can output a particular response from a set of data inputs, aka the black box problem. Rahimi believes the lack of theoretical understanding or technical interpretability of machine learning models is cause for concern, especially if AI takes responsibility for critical decision-making.
Uber’s business plan, like that of so many other digital unicorns, is based on extracting all the value from the markets it enters. This ultimately means squeezing employees, customers, and suppliers alike in the name of continued growth. When people eventually become too poor to continue working as drivers or paying for rides, UBI supplies the required cash infusion for the business to keep operating.
Annnnnd here’s my 10th blog post of the month. Hit my goal. (Might even make it to 11 if I have a burst of inspiration.) Thanks again for reading and encouragement. I’m going for 10 again in May. As usual, feedback welcome and you can subscribe by email here (for those of you reading this via email, thanks and sorry about the wasted words, it just emails exactly what I put on the web).
Black infants in America are now more than twice as likely to die as white infants — 11.3 per 1,000 black babies, compared with 4.9 per 1,000 white babies, according to the most recent government data — a racial disparity that is actually wider than in 1850, 15 years before the end of slavery, when most black women were considered chattel. In one year, that racial gap adds up to more than 4,000 lost black babies. Education and income offer little protection. In fact, a black woman with an advanced degree is more likely to lose her baby than a white woman with less than an eighth-grade education.
By rendering a not-too-distant future, Kubrick set himself up for a test: thirty-three years later, his audiences would still be around to grade his predictions. Part of his genius was that he understood how to rig the results. Many elements from his set designs were contributions from major brands—Whirlpool, Macy’s, DuPont, Parker Pens, Nikon—which quickly cashed in on their big-screen exposure. If 2001 the year looked like “2001” the movie, it was partly because the film’s imaginary design trends were made real.
The show offers a clever finger trap for critics. Call a hit dangerous and you imply that it’s really quite sexy. And, in fact, the seventh episode, which I won’t spoil, pulls a daring switcheroo, one that may offer a new lens through which to interpret Roseanne’s behavior. It’s not enough. The reboot nods at complexity without delivering—there are good people on many sides, on many sides. If you squint, you might see the show’s true hero as Darlene (Sara Gilbert), a broke single mom forced to move in with that charismatic bully Roseanne. But, if that were so, we might understand Darlene’s politics, too. We’d more fully feel her pain and also that of her two kids, transplanted to a place they find foreign and unwelcoming.
This is where the promise of artificial intelligence breaks down. At its heart is an assumption that historical patterns can reliably predict future norms. But the past—even the very recent past—is full of words and ideas that many of us now find repugnant. No system is deft enough to respond to the rapidly changing varieties of cultural expression in a single language, let alone a hundred. Slang is fleeting yet powerful; irony is hard enough for some people to read. If we rely on A.I. to write our rules of conduct, we risk favoring those rules over our own creativity. What’s more, we hand the policing of our discourse over to the people who set the system in motion in the first place, with all their biases and blind spots embedded in the code. Questions about what sorts of expressions are harmful to ourselves or others are difficult. We should not pretend that they will get easier.
On the other end of the sporting spectrum, the Times got a hold of tapes from a meeting between players and owners and I can’t imagine it making the NFL look worse. Here’s a small example from Buffalo Bills owner Terry Pegula: “For years we’ve watched the National Rifle Association use Charlton Heston as a figurehead … We need a spokesman.” These guys are such bad news.
Differing opinions (sort of) from the New York Times over whether technology is or isn’t what the science-fiction writers imagined. From a November article titled “In Defense of Technology”:
Physical loneliness can still exist, of course, but you’re never friendless online. Don’t tell me the spiritual life is over. In many ways it’s only just begun. Technology is not doing what the sci-fi writers warned it might — it is not turning us into digits or blank consumers, into people who hate community. Instead, there is evidence that the improvements are making us more democratic, more aware of the planet, more interested in the experience of people who aren’t us, more connected to the mysteries of privacy and surveillance. It’s also pressing us to question what it means to have life so easy, when billions do not. I lived through the age of complacency, before information arrived and the outside world liquified its borders. And now it seems as if the real split in the world will not only be between the fed and the unfed, the healthy and the unhealthy, but between those with smartphones and those without.
And now, in response to the Sony hack, Frank Bruni writes, “The specter that science fiction began to raise decades ago has come true, but with a twist. Computers and technology don’t have minds of their own. They have really, really big mouths.” He continues:
“Nothing you say in any form mediated through digital technology — absolutely nothing at all — is guaranteed to stay private,” wrote Farhad Manjoo, a technology columnist for The Times, in a blog post on Thursday. He issued a “reminder to anyone who uses a digital device to say anything to anyone, ever. Don’t do it. Don’t email, don’t text, don’t update, don’t send photos.” He might as well have added, “Don’t live,” because self-expression and sharing aren’t easily abandoned, and other conduits for them — landlines, snail mail — no longer do the trick.
Yet there is deep uncertainty about how the pattern will play out now, as two trends are interacting. Artificial intelligence has become vastly more sophisticated in a short time, with machines now able to learn, not just follow programmed instructions, and to respond to human language and movement. … At the same time, the American work force has gained skills at a slower rate than in the past — and at a slower rate than in many other countries. Americans between the ages of 55 and 64 are among the most skilled in the world, according to a recent report from the Organization for Economic Cooperation and Development. Younger Americans are closer to average among the residents of rich countries, and below average by some measures.
My opinion falls into the protopian camp: Things are definitely getting better, but new complexities are bound to emerge as things change. It’s not going to be simple and there are lots of questions we should be asking ourselves about how technology is changing us and the world, but it’s much healthier to start from a place of positivity and recognition that much of the change is good change.