1. The general public already has the nuclear threat and the climate threat to worry about, and bringing up yet another global risk may overwhelm people and cause them to simply give up on the future. There may be something to this speculation, but to evaluate the argument's merit we need to consider separately the two possibilities of
(a) apocalyptic AI risk being real, and
(b) apocalyptic AI risk being spurious.
2. Pinker held forth a number of concerns that seemed more or less copy-and-pasted from standard climate denialism discourse. These included the observation that the Millennium bug did not cause global catastrophe, whence (or so the argument goes) a global catastrophe cannot be expected from a superintelligent AGI - analogous to the oft-repeated claim that since the old Greeks' fear that the sky would fall turned out to be unfounded, greenhouse gas emissions cannot accelerate global warming in any dangerous way. They also included speculations about the hidden motives of those who discuss AI risk - they are probably just competing for status and research grants. This is not impressive. See also yesterday's blog post by my friend Björn Bengtsson for more on this; it is to him that I owe the (in retrospect obvious) parallel to climate denialism.
3. All the apocalyptic AI scenarios involve the AI having bad goals, which led Pinker to ask why in the world anyone would program the machine with bad goals - let's just not do that! This is essentially the idea of the so-called Friendly AI project (see Yudkowsky, 2008, or Bostrom, 2014), but what Pinker does not seem to appreciate is that the project is extremely difficult. He went on to ask why in the world anyone would be so stupid as to program self-preservation at all costs into the machine, and this in fact annoyed me slightly, because it came just 20 or so minutes after I had sketched the Omohundro-Bostrom theory of how self-preservation and various other instrumental goals are likely to emerge spontaneously (i.e., without being explicitly put there by human programmers) in any sufficiently intelligent AGI.
4. In the debate, Pinker described (as he had done several times before) the superintelligent AGI in apocalyptic scenarios as having a typically male psychology, but suggested that it could equally well turn out to have more female characteristics (things like compassion and motherhood), in which case everything would be all right. This is just another indication of how utterly unfamiliar he is with the literature on possible superintelligent psychologies. His male-female distinction in the general context of AGIs is just barely more relevant than the question of whether the next exoplanet we discover will turn out to be male or female.
- How do we know that AGI is decades away? In popular articles penned by heads of AI research labs and the like, there are typically three prominent reasons given:
(A) The author does not know how to build AGI using present technology; the author does not know where to start.
(B) The author thinks it is really very hard to do the impressive things that modern AI technology does: they have to slave long hours over a hot GPU farm, tweaking hyperparameters, to get it done. They think that the public does not appreciate how hard it is to get anything done right now, and is panicking prematurely in the belief that anyone can just fire up TensorFlow and build a robotic car.
(C) The author spends a lot of time interacting with AI systems and therefore is able to personally appreciate all the ways in which they are still stupid and lack common sense.