Tuesday, December 1, 2015

What I think about What to Think About Machines That Think

Our understanding of the future potential and possible risks of artificial intelligence (AI) is, to put it gently, woefully incomplete. Opinions are drastically divided, even among experts and leading thinkers, and it may be a good idea to hear from several of them before forming an opinion of one's own - an opinion to which it is furthermore wise to attach a good deal of epistemic humility. The 2015 anthology What to Think About Machines That Think, edited by John Brockman, offers 186 short pieces by a tremendously broad range of AI experts and other high-profile thinkers. The format is the same as in earlier installments of the same editor's series of annual collections, with titles like What We Believe but Cannot Prove (2005), This Will Change Everything (2009) and This Idea Must Die (2014). I do like these books, but I find them a little difficult to read, in much the same way that I often have difficulty reading poetry collections: I tend to rush on to the next poem before I have taken the time needed to digest the previous one, and as a result I digest nothing. Reading Brockman's collections, I need to take conscious care not to do the same thing. To readers able to handle this aspect, they have a lot to offer.

I have so far read only a minority of the short pieces in What to Think About Machines That Think, mostly those by writers whose standpoints I am already familiar with. Many of them do an excellent job of expressing important points within the extremely tight page limit, Eliezer Yudkowsky being one example; I consider him one of the most important thinkers in AI futurology today. I'll take the liberty of quoting at some length from his contribution to the book:
    The prolific bank robber Willie Sutton, when asked why he robbed banks, reportedly replied, "Because that's where the money is." When it comes to AI, I would say that the most important issues are about extremely powerful smarter-than-human Artificial Intelligence (aka superintelligence) because that's where the utilons are - the value at stake. More powerful minds have bigger real-world impacts.

    [...]

    Within the issues of superintelligence, the most important (again following Sutton's Law) is, I would say, what Nick Bostrom termed the "value loading problem": how to construct superintelligences that want outcomes that are high-value, normative, beneficial for intelligent life over the long run - that are, in short, "good" - since if there is a cognitively powerful agent around, what it wants is probably what will happen.

    Here are some brief arguments for why building AIs that prefer "good" outcomes is (a) important and (b) likely to be technically difficult.

    First, why is it important that we try to create a superintelligence with particular goals? Can't it figure out its own goals?

    As far back as 1739, David Hume observed a gap between "is" questions and "ought" questions, calling attention in particular to the sudden leap between when a philosopher has previously spoken of how the world is and then begins using words like should, ought, or ought not. From a modern perspective, we'd say that an agent's utility function (goals, preferences, ends) contains extra information not given in the agent's probability distribution (beliefs, world-model, map of reality).

    If in 100 million years we see (a) an intergalactic civilization full of diverse, marvelously strange intelligences interacting with one another, with most of them happy most of the time, then is that better or worse than (b) most available matter having been transformed into paperclips? What Hume's insight tells us is that if you specify a mind with a preference (a) > (b), we can follow back the trace of where the > (the preference ordering) first entered the system and imagine a mind with a different algorithm that computes (a) < (b) instead. Show me a mind that is aghast at the seeming folly of pursuing paperclips, and I can follow back Hume's regress and exhibit a slightly different mind that computes < instead of > on that score too.

    I don't particularly think that silicon-based intelligence should forever be the slave of carbon-based intelligence. But if we want to end up with a diverse cosmopolitan civilization instead of, for example, paperclips, we may need to ensure that the first sufficiently advanced AI is built with a utility function whose maximum pinpoints that outcome. If we want an AI to do its own moral reasoning, Hume's Law says we need to define the framework for that reasoning. This takes an extra fact beyond the AI having an accurate model of reality and being an excellent planner.

    But if Hume's Law makes it possible in principle to have cognitively powerful agents with any goals, why is value loading likely to be difficult? Don't we just get whatever we programmed?

    The answer is that we get what we programmed, but not necessarily what we wanted. The worrisome scenario isn't AIs spontaneously developing emotional resentment for humans. It's that we create an inductive value learning algorithm and show the AI examples of happy smiling humans labeled as high-value events - and in the early days the AI goes around making existing humans smile and it looks like everything is OK and the methodology is being experimentally validated; and then, when the AI is smart enough, it invents molecular nanotechnology and tiles the universe with tiny molecular smiley-faces. Hume's Law, unfortunately, implies that raw cognitive power does not intrinsically prevent this outcome, even though it's not the result we wanted.

    [...]

    For now, the value loading problem is unsolved. There are no proposed full solutions, even in principle. And if that goes on being true over the next decades, I can't promise you that the development of sufficiently advanced AI will be at all a good thing.
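
To make vivid, in programmer's terms, Yudkowsky's point that the preference ordering enters through the utility function rather than through the world model, here is a minimal sketch of my own (the outcome names and numbers are invented for illustration; nothing like this code appears in the book): two agents with identical beliefs and identical optimization machinery, differing only in the sign of their utility function, steer towards opposite outcomes.

```python
# Toy illustration (my own, not from the book): the "world model" is shared,
# so any disagreement between the agents can only come from their utility
# functions, i.e., from where the preference ordering ">" enters the system.

# Shared world model: what each candidate outcome would look like.
OUTCOMES = {
    "cosmopolitan_civilization": {"diverse_minds": 1.0, "paperclips": 0.0},
    "paperclip_maximum":         {"diverse_minds": 0.0, "paperclips": 1.0},
}

def utility_a(features):
    # This is where (a) > (b) enters the system.
    return features["diverse_minds"] - features["paperclips"]

def utility_b(features):
    # Identical beliefs, flipped sign: here (a) < (b) enters instead.
    return features["paperclips"] - features["diverse_minds"]

def best_outcome(utility):
    # Both agents are equally competent optimizers over the same model.
    return max(OUTCOMES, key=lambda name: utility(OUTCOMES[name]))

print(best_outcome(utility_a))  # cosmopolitan_civilization
print(best_outcome(utility_b))  # paperclip_maximum
```

The example is of course trivially simple, but it captures why raw cognitive power alone does not settle what an agent will end up wanting.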

Max Tegmark holds views fairly close to Yudkowsky's,1 but warns, in his contribution, that "Unfortunately, the sober research agenda that's sorely needed is being nearly drowned out by a cacophony of ill-informed views", among which he lists the following eight as "the loudest":
    1. Scaremongering: Fear boosts ad revenues and Nielsen ratings, and many journalists appear incapable of writing an AI article without a picture of a gun-toting robot.

    2. "It's impossible": As a physicist, I know that my brain consists of quarks and electrons arranged to act as a powerful computer, and that there's no law of physics preventing us from building even more intelligent quark blobs.

    3. "It won't happen in our lifetime": We don't know what the probability is of machines reaching human-level ability on all cognitive tasks during our lifetime, but most of the AI researchers at a recent conference put the odds above 50 percent, so we'd be foolish to dismiss the possibility as mere science fiction.

    4. "Machines can't control humans": Humans control tigers not because we are stronger, but because we are smarter, so if we cede our position as smartest on our planet, we might also cede control.

    5. "Machines don't have goals": Many AI systems are programmed to have goals and to attain them as effectively as possible.

    6. "AI isn't intrinsically malevolent": Correct - but its goals may one day clash with yours. Humans don't generally hate ants - but if we want to build a hydroelectric dam and there's an anthill there, too bad for the ants.

    7. "Humans deserve to be replaced": Ask any parent how they would feel about you replacing their child by a machine, and whether they'd like a say in the decision.

    8. "AI worriers don't understand how computers work": This claim was mentioned at the above-mentioned conference, and the assembled AI researchers laughed hard.

These passages from Yudkowsky and Tegmark offer just the tip of the iceberg of interesting insights in What to Think About Machines That Think. But, not surprisingly in a volume with so many contributions, there are also disappointments. I'm a huge fan of philosopher Daniel Dennett, and the title of his contribution (The Singularity - an urban legend?) raised my expectations further: one might hope that he could help rectify the curious situation where many writers consider the drastic AI development scenario known as an intelligence explosion or the Singularity to be extremely unlikely or even impossible, yet hardly anyone (Robin Hanson being the one notable exception) offers arguments for this position rising above the level of slogans and one-liners. Dennett, by opening his contribution with the paragraph...
    The Singularity - the fateful moment when AI surpasses its creators in intelligence and takes over the world - is a meme worth pondering. It has the earmarks of an urban legend: a certain scientific plausibility ("Well, in principle I guess it's possible!") coupled with a deliciously shudder-inducing punch line ("We'd be ruled by robots!"). Did you know that if you sneeze, belch, and fart all at the same time, you die? Wow. Following in the wake of decades of AI hype, you might think the Singularity would be regarded as a parody, a joke, but it has proven to be a remarkably persuasive escalation. Add a few illustrious converts - Elon Musk, Stephen Hawking, and David Chalmers, among others - and how can we not take it seriously? Whether this stupendous event takes place ten or a hundred or a thousand years in the future, isn't it prudent to start planning now, setting up the necessary barricades and keeping our eyes peeled for harbingers of catastrophe?

    I think, on the contrary, that these alarm calls distract us from a more pressing problem...

...and then going on to talk about something quite different, turns his piece into an almost caricaturish illustration of the situation I just complained about.

I want to end by quoting, in full, the extremely short (even by this book's standards) contribution by physicist Freeman Dyson - not because it offers much (or anything) of substance (it doesn't), but because of its wit:
    I do not believe that machines that think exist, or that they are likely to exist in the foreseeable future. If I am wrong, as I often am, any thoughts I might have about the question are irrelevant.

    If I am right, then the whole question is irrelevant.

I like the humility - "as I often am" - here. Dyson pretty much confirms what I was convinced of all along, namely that he agrees with the message of my 2011 blog post Den oundgängliga trovärdighetsbedömningen: fallet Dyson - that he ought to be read critically.2

Footnotes

1) And my own views are mostly in good alignment with Yudkowsky's and with Tegmark's. Both of them are cited approvingly in the chapter on AI in my upcoming book Here Be Dragons: Science, Technology and the Future of Humanity (Oxford University Press, January 2016).

2) This is in sharp contrast to the general reaction to my piece among Swedish climate denialists; they seemed at the time to think that Dyson ought to be read uncritically, and that any suggestion to the contrary was deeply insulting (here is a typical example).
