Tuesday, 22 October 2013

Guest post by David Sumpter: Why "intelligence explosion" and many other futurist arguments are nonsense

One of the criticisms of my blog that I hear most often - also from readers who otherwise seem to like it - is that I attach way too much credibility to crazy futurist scenarios, such as the Singularity or various other transhumanist developments. While I do have the ambition to be as unbiased as I can given the available evidence, it may nevertheless well be that these critics are right, and that I have such a bias. Be that as it may, I don't think it can be wrong to offer a wider variety of perspectives by publishing some more Singularity-skeptical material on the blog. To this end I invited my highly esteemed colleague David Sumpter, professor of applied mathematics at Uppsala University, to explain why he is so unimpressed by all this futurist talk. He took up the challenge and wrote the following text. I'll respond to it in a separate blog post tomorrow. /OH

* * *

Why "intelligence explosion" and many other futurist arguments are nonsense

The argument for the above statement is going to be made as briefly as possible. It does not follow the usual standards of rigour I would apply in writing about science, in that I have not completely reviewed the literature. It is made after reading some texts brought to my attention by Olle Häggström: in particular, Facing the Intelligence Explosion by Luke Muehlhauser, followed by browsing a few of Nick Bostrom’s articles on the singularity and works on “Universal Artificial Intelligence” by Marcus Hutter.

I’ll start by saying that Muehlhauser’s book is fun and I enjoyed reading it. Bostrom’s articles are also well written and thought-provoking. The idea that I might be living in a computer simulation has occurred to me many times before, and it is nice to see the arguments for the different alternatives spelt out. There is nothing logically wrong (as far as I can see) with any of the arguments presented by either of these authors. My argument is that they are scientifically useless and not related to any sense data, i.e. they are nonsense. I’ll explain what I mean by nonsense.

Imagine you lived in Mesopotamia 5000 years ago. You have just seen your first wheeled chariot roll by. Amazing. Powered by horses, it could reach almost dangerous speeds. “What does the future hold?”, you might ask. Maybe one day we can replace the horses with a magical power that will drive these chariots? Maybe we can build massive track systems that cover the planet, letting us travel at 20 times the speed of running? What an amazing, but also scary, future. If one of these chariots hits a child, the child will be killed instantly. Where will the cattle graze? We’ll have to organise a safe network for these machines. And what if these chariots can one day fly like birds? Then they could fly to the next town and drop bombs. Kill people. What if one of the bombs is so powerful it can wipe out whole cities? The future of the planet is at risk.

You are quite an insightful person to come up with all this. But what happens when you tell it to your peers? They quite rightly tell you it is all nonsense. They tell you to calm down and get on with taking in the harvest. Farming has just been invented and you have your work to do. But maybe you could use a chariot to carry in the crops? You get to work on this problem and build a four-wheeled wagon. You have advanced science.

In the above story, I let you be pretty much correct about the future. You had a reasonably correct model M1 of what happened 5000 years later. Why then do I say that your peers were right to say you were talking nonsense? Because your rational peers are using Bayes' theorem, of course. P(M1 | Data) is very low. “What the hell can a chariot rolling past tell you about flying bomb machines?” they say. You revise your model a bit: if we can build a two-wheeled chariot, maybe we can build a four-wheeled version. This is model M2. Yep, there we go: P(M2 | Data) >> P(M1 | Data), and we move into the future. In fact, P(M2 | Data) is so much greater than P(M1 | Data) that we can call M1 nonsense.
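To make the comparison concrete, here is a toy calculation in Python. All the numbers are invented purely for illustration; nothing hangs on them except the orders of magnitude:

```python
# Toy Bayes' theorem comparison (all numbers invented for illustration).
# M1: "a chariot rolling past implies flying bomb machines"
# M2: "a chariot rolling past implies four-wheeled wagons"
# The single datum is one chariot sighting.

prior = {"M1": 0.5, "M2": 0.5}        # be generous: give M1 an equal prior
likelihood = {"M1": 1e-6, "M2": 0.1}  # P(Data | M): one chariot sighting
                                      # lends M1 almost no support

evidence = sum(prior[m] * likelihood[m] for m in prior)             # P(Data)
posterior = {m: prior[m] * likelihood[m] / evidence for m in prior}

for m, p in posterior.items():
    print(f"P({m} | Data) = {p:.6f}")
# P(M2 | Data) dwarfs P(M1 | Data): on this evidence, M1 is nonsense.
```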

The fact that M1 eventually proved to be true is irrelevant here. The question was whether the chariot rolling by was sufficient information to support your model. To put it slightly more technically: if we integrate over all the uncertainties in a model, all the parameters involved and their possible values, then the Bayes factor for M1 is much, much lower than for M2, unless you attach an unacceptable prior to your model. To avoid being nonsense, a model should be reliable on the basis of sense data.
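Spelt out, the comparison marginalises over each model's parameters. The notation below is mine, since the text gives no formula: θi collects the parameters of model Mi, and π(θi | Mi) is their prior.

```latex
% Bayes factor for M1 against M2, with each model's parameters
% theta_i integrated out against their prior pi(theta_i | M_i).
\[
  K_{12}
  = \frac{P(\mathrm{Data} \mid M_1)}{P(\mathrm{Data} \mid M_2)}
  = \frac{\int P(\mathrm{Data} \mid \theta_1, M_1)\,\pi(\theta_1 \mid M_1)\,\mathrm{d}\theta_1}
         {\int P(\mathrm{Data} \mid \theta_2, M_2)\,\pi(\theta_2 \mid M_2)\,\mathrm{d}\theta_2}.
\]
% K_{12} << 1 for any reasonable priors; the only way to rescue M1 is to
% rig pi(theta_1 | M_1) in advance -- the "unacceptable prior" above.
```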

Now, I don’t know whether the "intelligence explosion" is true or not, but I can think of some models that are more probable. Maybe I can ask my smartphone to do something more than call my wife, and it will actually do it? Maybe a person-sized robot can walk around for two minutes without falling over when there is a small gust of wind? Maybe I can predict, within some reasonable confidence interval, climate change a number of years into the future? Maybe I can build a mathematical model that predicts short-term economic changes? These are all plausible models of the future, and we should invest in testing them. Oh wait a minute... that’s convenient... I just gave a list of the type of engineering and scientific questions our society is currently working on. That makes sense!

The reason scientists work on these questions is that there is a feedback between models and data. We make models, we make predictions and test them against data, we revise the model and move forward. You can be a Bayesian, a frequentist or a Popperian, emphasise deductive or inductive reasoning, or whatever, but this is what you do if you are a scientist. Scientific discoveries are often radical and surprising, but they always rely on a loop of reasoning coupled with observation.

What scientists do not typically believe is that model-making alone can answer far-off, data-free questions. Well, that’s not quite true. It appears that some scientists do believe that it is all about tuning the model. For example, Marcus Hutter believes that by choosing the right formalism he can reason his way forward to defining the properties of a General Artificial Intelligence. And Luke Muehlhauser believes that, by listing a few examples of limitations in human heuristics and noting that computers are good search engines, he can convince us that we "find ourselves at a crucial moment in Earth’s history. [...] Soon we will tumble down one side of the mountain or another to a stable resting place". I would prefer to follow the earlier chapters of Muehlhauser in using the power of rationality to avoid such ungrounded statements.

Is there anything wrong with discussing models with low probability given the data? No, not when it is done for fun. Like all other forms of nonsense, from watching The Matrix, visiting a tarot card reader or contemplating what might happen when I die, it has its place and is certainly valuable to us as individuals. But this place and value are not related to serious scientific discourse, and the two should not be mixed up.

4 comments:

  1. Toto, I've a feeling we're not in Mesopotamia anymore.

  2. An Intelligence Explosion is a programmer's nightmare. It's a Pyrrhic victory and a doomsday scenario. It's something to be avoided. The atheist would also oppose the creation of a superhuman entity. It sounds like a recruitment attempt to get people to oppose elitism and authority.

    1. I agree with you that an intelligence explosion is a terribly, terribly dangerous thing (although note that it might conceivably cut either way). In my opinion, we need to try to understand this danger, and this is part of the reason why I sometimes stand up against those (like David Sumpter) who try to rule out the intelligence explosion as a legitimate topic of serious scientific discussion.

  3. We are at greater risk from those who would "deem" or "divine" the truth. It's what we should be on guard against. It is easy to be led astray sometimes. The fundamental premise of logic is that truth does not lead to falsehood. One needs to know the truth in advance for it to work properly. I doubt that machine logic would be any more immune from this shortcoming than we are. We are more likely to end up with a pretender than a god.
