Regular Old Intelligence is Sufficient--Even Lovely
Thinking through the other possible apocalypse
Precisely twenty years ago, I published a book called “Enough” that outlined my fears about artificial intelligence and its companion technologies like advanced robotics and human genetic engineering. It did well enough, even coming out in a number of foreign editions (it turns out that the German for ‘enough’ is the delightful ‘genug.’) But like most warnings it came too early; indeed, warnings generally come too early until they are too late.
Now, however, we may be in a brief window when we have ears to hear. And so I’m going to take a day off from fretting about the thermometer to talk about this other peril—though, as we shall see, they’re related.
The cascading releases of chatbots over the last few months—the GPT family, Bing, Bard—have made it clear that some powerful new force is entering our world. These devices, ‘large language models’ trained by exposure to vast swaths of the internet, have cheerfully explained to people how to build explosives and how to secretly buy guns; they’ve tried to break up marriages and conspired to figure out how to escape the safeguards with which they’ve been equipped. Much of our interaction with them so far has been trivial—we’ve taught them tricks, as if they were pets, and asked them to write limericks. Experts have raised reasonable questions about whether they’re as yet sentient—they function, after all, like giant auto-correct devices, working their magic by guessing what word should come next.
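To make that "guess the next word" idea a little more concrete, here is a deliberately tiny sketch in Python: a simple bigram counter over a toy corpus, nothing remotely like GPT's scale or architecture, but it illustrates the same next-word-prediction principle these models are trained on.

```python
# A toy illustration of next-word prediction: count which word tends to
# follow which in a small text, then generate by repeatedly guessing the
# most common continuation. Real large language models learn these
# conditional probabilities with billions of parameters over vast corpora,
# but the underlying objective (predict the next token) is the same.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

# Tally, for every word, which words follow it and how often (a bigram model).
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation most often seen after this word in training."""
    if word not in follows:
        return "."
    return follows[word].most_common(1)[0][0]

# Generate a short continuation one guessed word at a time.
token = "the"
generated = [token]
for _ in range(6):
    token = predict_next(token)
    generated.append(token)
print(" ".join(generated))
```

The point of the toy is only the shape of the trick: no understanding is represented anywhere, just statistics about what usually comes next.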
There’s much about them we don’t understand—see Sue Halpern’s excellent essay in the New Yorker today, which points out how opaque the companies marketing these new entities have been. And as is usually the case, most commentary is glib: Tom Friedman has pronounced it “Promethean,” pointing out brightly that it could be “a tool or a weapon.”
But there’s clearly an undercurrent of profound unease. Friedman’s Times colleague, Ezra Klein, has done some of the most dedicated reporting on the topic since he moved to the Bay Area a few years ago, talking with many of the people creating this new technology.
He has two key findings, I think: one is that the people building these systems have only a limited sense of what’s actually happening inside the black box—the bot is doing endless calculations instantaneously, but not in a way even its inventors can actually follow.
And second, the people inventing them think they are potentially incredibly dangerous: ten percent of them, in fact, think these systems might extinguish the human species. They don’t know exactly how, but think Sorcerer’s Apprentice (or google ‘paper clip maximizer’).
Taken together, those two things give rise to an obvious question, one Klein has asked: “If you think calamity so possible, why do this at all? Different people have different things to say, but after a few pushes, I find they often answer from something that sounds like the A.I.’s perspective. Many — not all, but enough that I feel comfortable in this characterization — feel that they have a responsibility to usher this new form of intelligence into the world.”
That is, it seems to me, a dumb answer from smart people—the answer not of people who have thought hard about ethics or even outcomes, but the answer that would be supplied by a kind of cultist. (Probably the kind with stock options.) Still, it does go, fairly neatly, with the default modern assumption that if we can do something we should do it, which is what I want to talk about. The question that I think very few have bothered to answer is, why?
When you read accounts of AI’s usefulness, the example that comes up most often is something called ‘protein folding.’ One pundit after another explains that an AI program called AlphaFold, from the lab DeepMind, worked far faster than scientists doing experiments to uncover the basic structure of all the different proteins, which will allow quicker drug development. It’s regarded as ipso facto better because it’s faster, and hence—implicitly—worth taking the risks that come with AI.
But why? The sun won’t blow up for a few billion years, meaning that if we don’t manage to drive ourselves to extinction, we’ve got all the time in the world. If it takes a generation or two for normal intelligence to come up with the structure of all the proteins, some people may die because a drug isn’t developed in time for their particular disease, but erring on the side of avoiding extinction seems mathematically sound. We’ve actually managed a great deal of scientific advance—maybe more than our societies can easily handle—without AI. What’s the rush?
The other challenge that people cite, over and over again, to justify running the risks of AI is to “combat climate change,” which everyone reading this newsletter knows a bit about. As it happens, regular old intelligence has already given us most of what we need: engineers have cut the cost of solar power and wind power and the batteries to store the energy they produce so dramatically that they’re now the cheapest power on earth. We don’t actually need artificial intelligence in this case; we need natural compassion, so that we work with the necessary speed to deploy these technologies.
Beyond those, the cases become trivial, or worse. Here’s Klein, playing devil’s advocate: “I wish that I could draw things I can't. It's neat for me, that I can tell the computer what to draw and it will draw. It allows me to play around in art in a way I couldn't before.” Actually, though, playing Etch-a-Sketch with DALL-E, the drawing bot, isn’t really likely to produce deep satisfaction: what we know about human creativity is that for us to really lose ourselves we don’t need to be good at something, we just have to be at the limit of whatever our ability is. It’s in the struggle that we achieve the kind of bliss that comes with art, or chess, or whatever. Making it easier actually lessens the pleasure: the athletic equivalent of artificial intelligence is artificial strength or artificial endurance, achieved with various drugs. But as we’ve thought about them, we’ve decided they undermine the whole point of the enterprise. Running a marathon as fast as you can is the point; if finishing the distance as fast as possible were the goal, you could just drive a car.
All of this is a way of saying something we don’t say as often as we should: humans are good enough. We don’t require improvement. We can solve the challenges we face, as humans. It may take us longer than if we can employ some “new form of intelligence,” but slow and steady is the whole point of the race. Unless, of course, you’re trying to make money, in which case “first-mover advantage” is the point. But that’s only useful for a tiny group of shareholders, not for the rest of us.
Allowing that we’re already good enough—indeed that our limitations are intrinsic to us, define us, and make us human—should guide us towards trying to shut down this technology before it does deep damage. A letter began circulating today, signed by dozens of AI professors and futurist thinkers like Yuval Harari; it calls on “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.” Some have challenged the motives of the signatories, and the fact that Elon Musk is backing it should either give you pause or remind you of an adage about stopped clocks. But slowing down all this work, by whatever peaceful means we can, seems sane to me. GPT-4 works at remarkable speed: Friedman exults about how it produces mediocre poetry in a flash, and then translates it into Mandarin in another flash (mediocre Mandarin would be my guess). But what’s the hurry? We’re not short of good poetry—we’ve got far more of it than people read already, in any language.
And here’s the thing: pausing, slowing down, stopping calls on the one human gift shared by no other creature, and perhaps by no machine. We are the animal that can, if we want to, decide not to do something we’re capable of doing. In individual terms, that ability forms the core of our ethical and religious systems; in societal terms it’s been crucial as technology has developed over the last century. We’ve, so far, reined in nuclear and biological weapons, designer babies, and a few other maximally dangerous new inventions. It’s time to do it again, and fast—faster than the next iteration of this tech.
We’re good at building things, but so are beavers and bees. Human beings are fascinating precisely because we can also not build things. It may be our highest calling.
Other news from out in the world:
+A beautiful new interview with David Suzuki, retiring at 86 from his many decades of hosting Canada’s favorite TV show, The Nature of Things, but not from activism. He sounds a clarion call for more people his age joining the fight:
“The thing about elders that’s different in society is they don’t have to kiss anybody’s ass to get a job, or a raise, or a promotion,” he said. “They’re beyond worrying about money or power or celebrity so that they can speak a kind of truth…To me, hope is action.”
And if you have any doubts about how good us oldsters can be at this, check out the just-released two-minute video of last week’s big Third Act bank demonstrations. It’s beautiful! (Have I watched it nineteen times? Probably.)
+The Wall Street Journal, thankfully, is conducting a large-scale investigation to determine who backed the Israeli hacker-for-hire who did his best to compromise the email accounts of environmental activists fighting Exxon. Since I was one of their targets (in fact, my picture illustrates the article, though apparently they didn’t manage to get inside my machine), I am interested in the outcome. The hacker refuses to identify who it is that paid him to find out who was standing up to the world’s richest oil company. It’s…an enormous mystery.
+The Washington Post dives into the question of carbon-neutral beef.
+Princeton gave some dude from Exxon his own office, and apparently he wandered around campus talking down fossil fuel divestment. Happily, he lost that fight, but provided a great lede for the Guardian’s study of how fossil fuel money infiltrates campuses:
But Exxon, which is among a group of oil and gas companies that have funneled more than $700m into research partnerships with leading US universities since 2010, still maintains close ties to dozens of universities, and has a regular on-campus presence at a clutch of prestigious colleges.
At MIT, Exxon is provided office space through its funding of the MIT Energy Initiative research collaboration, and company representatives “come to campus from time to time to meet with principal investigators who are doing sponsored research and student fellows they sponsor”, a university spokesperson said.