37 Comments

I spent decades butting heads with my dad over one thing or another. He was a dyed-in-the-wool conservative, apoplectic when my mom would vote for Democrats, appalled at, well, just about everything about modern life. He passed not long ago at the ripe old age of 93, and before he left us we were able to have some honest conversations about some of the things we used to fight over. He acknowledged that I was right about Nixon and Vietnam, and was a little embarrassed at his early attitudes about minorities and gay people. He even conceded that there might be something to this whole climate change business, but he wasn't ready to let go entirely. "Your problem," he insisted, "is that you're always right too soon."

I've been thinking about that ever since. It's infuriating how many people refuse to accept the evidence right before their eyes. But maybe we need to accept that they can't hear us until they're ready.

to everything there is a season...

May as well have been my Dad...

Comment deleted (Mar 30, 2023)

Right? Remember those pre-Trump debates when we argued and hashed over politics and everything, rolled our eyes and made dismissive gestures, then played a round of golf with the other person or watched each other's kids play Team water archery or something? I think our division is by design, sadly...

I don't know why I'm surprised when I read your posts -- at how well you write, how you carry the reader along almost effortlessly (on the reader's part, anyway), and how much emotion is conveyed without shouting. Just really beautiful writing. Subject not bad either :) Thanks.

that...makes my day. i love to write, and i think my goal is, 'i hope i'm answering questions right at the moment they arise in a reader's mind.' i probably miss more often than not, but that makes the hits all the sweeter. thanks

Amazing stuff. Loved the video, especially the chopping up of credit cards. Two thoughts: you mentioned Prometheus and ethics. Prometheus stole fire from the god of blacksmiths to give to humans, who promptly...weaponized themselves. It was no coincidence that Sargon raised the first professional armies and founded the first empire in the region where metalworking was invented. We aren't capable of doing the right thing all the time the way gods are, which is why Prometheus was justly punished. We have also always struggled with morals and ethics. That's why it was so refreshing for me to read this piece, which boiled the AI question down to what is right and what clearly is not. Wish more people considered this aspect. Keep up the good work!

In the same way many of us have adopted "slow food" and "slow travel", it seems time to popularize "slow thinking". This is yet another really provocative piece, Bill.

But a note of caution: over the years, the many walls we've tried to erect between ourselves and the animal world have fallen like dominoes. We once thought only we used tools, but we aren't alone. We once thought complex language set us apart, but it doesn't. We once thought our ability to envision alternate futures made us different, but it doesn't. Even Michael Pollan fell into this trap by proclaiming that only humans cook (alligators, among others, commonly braise their meals before feasting). So we shouldn't be surprised when we inevitably discover examples of animal discretion ;)

That is a good caution!

I love bees, and I love their incredible endless doggedness at making honey. But it always does occur to me that it wouldn't help much to ask them if we needed more honey--we already know their answer!

Again I am reminded of an apt Ian Malcolm quip: “Scientists are actually preoccupied with accomplishment. So focused on whether they can do something. They never stop to ask if they should do something.” I would substitute Silicon Valley startups for scientists in this case.

one thing i find is that scientists are sometimes grateful when people don't make them make the decisions about deployment. there's really no reason why someone who's good at nuclear physics would also be good at figuring out whether or where to drop a bomb...

Exactly right. We need people to look at the ethics and consequences of new discoveries and inventions.

Bravo! One of your best pieces.

Though our little 3.21.23 cadre might have benefited from a chatbot to instruct us exactly how to upload our jpgs and movies to Third Act's Flickr account, there was no need: the collective spirit of the video and of the day itself was completely satisfying.

Thank you so much for making 32123 work!

To start with, I'd like to refer the audience to the enlightened perspective of Douglas Hofstadter https://m.youtube.com/watch?v=2xnr-ST6ITo&list=PLfdMKJMGPPtyiRloEtPHjzQ4F9upOyVkR&index=47. To cut to the chase, even now so-called Artificial Intelligence shows rather astounding incapacity, as anyone with even a modest degree of multilingualism can verify using Google Translate. The current situation resulted from the commercialization of human-machine interactions. Fundamental questions, such as what actually constitutes thought or intelligence, remain unanswered because they are considered irrelevant to commercial aims. Moreover, it is in the interest of commercial entities to obfuscate, for example by referring to multilayer neural networks (itself a slightly misleading expression) as "deep learning". "AI" itself is a misnomer, and personally I distrust any mathematically conversant person who uses the term. Lay people are excluded, as most don't know better.

Perhaps the most pernicious failing of the "AI" community is its failure to disclose limitations that are inherent to the software. All algorithms are trained using data, and all data sets, being finite, are biased. As in statistics - which incidentally is very closely related - there are two kinds of errors, and nonzero probabilities of both.

All this having been said, the computer science field that encompasses the various "AI" approaches has produced many useful scientific results, enabling us to sift through otherwise incomprehensible volumes of multidimensional data. Stopping all work in the field, were that possible, would not be productive.

The problem with AI technology, like others, is not so much in the technology itself, but rather in our unwise use of it. In nature, and also in effective human-devised systems, negative feedback control is an absolutely necessary and effective element. The internet, and particularly social media, stands out as a prime example of what happens when such regulation is ineffective or absent.

I like Andrej Karpathy's (OpenAI -> Tesla -> OpenAI) branding of AI/machine learning as "Software 2.0". On one hand, that captures the fact that this is a step-change to a technology (software and digital communications) that has already had a transformational and accelerating impact on the world. On the other hand, it also captures that AI is just software - so often, recent popular-press articles (both those hyping its benefits and those highlighting its dangers) speak of it as if computer programmers are summoning a supernatural or alien being. A trendline on a Microsoft Excel scatter plot could be thought of as a very simple form of artificial intelligence that has "learned" the pattern in the data (terminology which anthropomorphizes things a bit), or as a basic statistical technique that's been around since the early 1800s.
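The trendline analogy can be made concrete. This minimal sketch, with made-up numbers purely for illustration, "trains" a linear model the same way a spreadsheet trendline does, via ordinary least squares:

```python
import numpy as np

# A scatter of points with a roughly linear pattern (illustrative data).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# "Training": an ordinary least-squares fit of degree 1 -- the same
# technique behind an Excel trendline, and a statistical method dating
# to the early 1800s. polyfit returns [slope, intercept].
slope, intercept = np.polyfit(x, y, 1)

print(round(slope, 2), round(intercept, 2))  # -> 1.99 1.04
```

Whether you call the fitted line a "learned model" or just a regression is exactly the terminological point above: the math is the same either way.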

It feels like safeguards and regulations would be best focused not on the models themselves (e.g. banning models with over a trillion parameters), but instead on inputs and outputs: what input data can be used, and whether the creators of that data can opt in or out of having it used for training; and similarly, if the AI/software outputs data or makes decision recommendations, what level of oversight and scrutiny is applied before it is implemented or shared. The "paperclip maximizer" thought experiment implicitly assumes that we've fully "handed over the keys" to the AI rather than using it as an advisory tool.

There's a nice summary report by an academic group called Climate Change AI that categorizes some of the many ways that machine learning is being used to enable climate change mitigation or adaptation. Both hardware and software technologies are valuable in combatting climate change, as are "natural compassion", thoughtful policy, and many other things. https://dl.acm.org/doi/pdf/10.1145/3485128

thanks for this thoughtful summary!

Oh gosh, I've been thinking so much about A.I. the past few days, trying to form my thoughts. I needed more information, which you have so beautifully given me here, Bill. Thank you for this. I will definitely google "paper clip maximizer." And huzzah to David Suzuki's big awesome quote.

David Suzuki is a great hero of mine, and you are too

Hear hear!!! This message is so important.

you expressed exactly how I feel about every angle of this! Especially as one of those people who derives satisfaction from the process of creating (writing, art, etc.). Thank you, as always, for your thoughts.

Bill, you don't know me but I was in the audience on Tuesday at the Techonomy event in Mountain View, California. You spoke via video conference. The energy in the room when you and Dan Costa chatted was warm, vibrant, committed. Perhaps because I've been a part of this movement since the late 1990s, I could feel that deep but frustrated wisdom in your voice. Indeed, how long does it have to take before we finally break the spell?

I've worked in the sustainability field since 1998... in the year 2000 I sat in the office of Dr. Bob Watson in DC, who'd just agreed to be a scientific advisor for a documentary project I was producing on climate change. That was right before he was booted from his position by a major oil company. That was more than 20 years ago! I know a human lifetime is short compared to geological time, but boy does it seem like forever when you're trying to shift entrenched forces.

You say in this piece, "The question that I think very few have bothered to answer is, why?" I would go a touch further and say it's a question few have bothered—or were perhaps afraid—to ask. And yet, more and more of us are asking. I hear it, I feel it. We need to keep speaking truth to power, keep raising our voices, and above all, keep connecting with each other. For the climate, for our children, and for a slower, stabler, saner world.

I want you to know there are more of us like you, like your readers here, like the young people of Sunrise, than even you imagine. We're here and we're with you.

thank you very much for those kind words--and thank you even more for keeping up with the fight!

The problem with AI has absolutely nothing to do with AI and everything to do with the ideology of the programmers who write the iterative algorithms within it. AI is actually a misnomer, because AI never has been, and never will be, a form of "thinking". AI has always been a way to boost the speed of machine decisions about what to do when a given cascade of electronic inputs from sensors flags a need for some sort of action, or when there is a need to plow through some gigantic database to gather stats on some kind of data.

Given the 'greed is good' assumption that AI software decisions are justified as long as they produce pecuniary gain, regardless of whether they result in biosphere degradation, the "Intelligence" attributed to AI is also a misnomer. In a reality-based world, ANY decision to put profit over planet cannot, by any stretch of the Social Darwinist wishful-thinking imagination, be considered "intelligent".

AI will ALWAYS reflect the ideology of the programmers that wrote the iterative algorithms. IF said ideology is the morally bankrupt, profit over people and planet, Social Darwinist Ideology, then AI will put this evil on steroids because AI just does, whatever it does, faster. IF, on the other hand, the ideology of the programmers is "Depart From Evil (i.e. DO NO direct or indirect HARM!), Do Good, Seek Peace, AND Pursue It", AI will actually help humanity. And YES, you CAN have an ETHICS BASED BUSINESS that is PROFITABLE. In a reality based, irrefutable biosphere math world, that would be the ONLY type of business model that is not subject to criminal prosecution. Unfortunately for us, too many members in good standing of TPTB are Social Darwinists. Like Neo-Darwinists, NONE OF THEM are reality based.

New Peer-Reviewed Paper Challenges Neo-Darwinism https://soberthinking.createaforum.com/sound-christian-doctrine/darwin/msg956/#msg956

Finally, let me warn all those people out there that are bombarded with all this hype about AI doing "all these great things for us". AI is a TOOL. TPTB are NOT interested in using that TOOL to make the world a "better, safer and healthier place", no matter what they claim. The proof of that is that you have NEVER HEARD ONE WORD about using AI to run the Courts! Now why do you suppose they don't want AI in there? Isn't it "faster"? Isn't it "good"? Isn't it "smart"? Isn't it "ethical"? If AI is such a "big benefit to society", why isn't it used to make our Criminal "Justice" System REALLY JUST?

I'm glad you asked. AI is not allowed in the Courts because AI is CONSISTENT in whatever, good or evil, it does.

THINK, people!

That last bit is really important.

To those of us who say "we're already good enough," I disagree. It raises the question: good enough for what? If raising the consciousness of humanity is worthwhile, and in theory I agree, I'm reminded that we may have run out of time. I'm pretty sure that more than half of the 8 billion+ souls alive today don't think of much beyond how to survive today, or find enough to eat today, or how to avoid being killed, or how to get somewhere with a better chance of meeting their basic needs of survival. Before the pandemic, most of the world was reportedly at least moderately vulnerable, falling into either the High Vulnerability group (14%) or the Moderate Vulnerability group (39%). That means roughly a billion people were at high risk of not surviving, and 4 billion humans could have died in the last year. Global warming, as measured by CO2 emissions, seems to be progressing exponentially, and that will not help those who don't live in a reasonably wealthy country. For those of us who do, there seems to be a near-term limit for survival. The NRDC has said we may not survive beyond 2035. According to Henry Gee, at Scientific American, "I suspect that the human population is set not just for shrinkage but collapse—and soon." This raises the question: is humanity committing ecocide? God bless the children... good for what?

Good questions these. It seems to me we know the answers and they mostly involve things like 'sharing,' and redistributing power. AI isn't going to help us there, but we are clearly capable of it if we choose

Thanks for this excellent commentary. I bet that one of the reasons many people want to build AI and other new things is plain curiosity. AI potentially sounds so amazing and cool that it's hard not to want to see what it could do. On the other hand, you're right, what's the rush. Especially with something so potentially dangerous.

As it is, we're building AI from another direction. With all the micro-/nanoplastics everywhere, including human tissue, all of us intelligent humans are on our way to becoming artificial....
