A Kind of Dialectic

I can’t help myself. I mean, look at that photogenic hydrangea in our front yard, and the iconic countenance of the Airedale. I spotted her at a beachside watering hole where I sometimes stop for a craft beer while out cycling.

Hydrangea (aka “water vessel”)
Airedale (aka King of the Terriers)
Day drinkers

Us with our friends from Virginia. They’re here with us for a week and then, poof, back on a plane to return home. Before long the photos and videos on their phones capturing their experiences here will descend deeper and deeper into the gallery; new photos will be saved, new experiences will bury the old, until some future day a friend, perhaps, will inquire of one of them, “Who are these ex-pat Alaskans you visited this past summer?” Out the phone will come, furious scrolling will ensue, until…tap, “Ah ha, here we all are!” “Oh,” she’ll say, thrusting the phone forward to share our pixelated pusses, “we had such a great time with them!”

At least we will hope they did.

Do you fear an artificial intelligence (AI) may soon go rogue and pose an existential threat to humanity?

I do not.

Hmm, curious. On your view, are there biological intelligences (BIs) that exist today (e.g., certain groups of other humans) antagonistic to your interests and values?

Yes, of course.

Are these BIs spreading disinformation?

Some of them, yes.

Is this getting worse?

Are you kidding – ever heard of Twitter?!

OK, so would it be fair to say that if one of these BIs, which you believe is “misaligned” with your interests and values and is accelerating the spread of disinformation and hate, were to become augmented with (come to possess) weapons of mass destruction, say, it could pose an existential risk to you and others of like mind?

Absolutely – people all over the world, not just in this country, who share my values and interests are murdered with alarming frequency by BIs with misaligned values using traditional weaponry.

I see. So if a biological intelligence may evolve to become antagonistic to your interests and values, and come to pose an existential risk to you, why not an artificial intelligence?

For the same reason apples are not oranges. Don’t get me wrong, I have no doubt an augmented AI could be trained to kill me and others. In fact, you don’t need an AI for that; we have stupid drones that do our killing for us now. But no AI, now or ever, will come to possess human values or interests*. In-silico computer programs (regardless of sophistication) are not human brains. The two are different things entirely. So it’s a categorical error of sorts to say the interests or values of an AI may become misaligned with the values and interests of a BI. If you must fear a current or future threat to your existence, you would be wise to focus on the clear and present danger posed by misaligned BIs, not AIs.

Hmm, curious. So are you in the camp of people who believe that AI, far from posing a future threat to humanity, instead presents opportunity?
Short answer, yes!

* Or, for that matter, the values and interests of other BIs. The notion that an AI may evolve to disvalue, or become misaligned toward, the simple pleasure of a raven’s flop and roll down a snowbank – play for play’s sake – is, I think, but one example of what the linked essay above alludes to as superstitious hand-waving.