Wednesday 23 October 2013

No nonsense - my reply to David Sumpter

I am grateful to David Sumpter for his guest post Why "intelligence explosion" and many other futurist arguments are nonsense yesterday. He holds what I believe to be a very common view among bright and educated people who have come across the idea of developments in artificial intelligence eventually leading to what is known as an intelligence explosion or the Singularity (i.e., an extremely rapid escalation towards superintelligence levels, with profound consequences for humanity and the world). I think David articulates his view and his argument very clearly, but I also think the argument is wrong, and in the following I will explain why. Note that my ambition here is not to argue that an intelligence explosion is likely (on that topic I am in fact agnostic). Instead, I will limit the discussion to the more modest task of showing that David's argument fails.

David's conclusions essentially boil down to two claims, namely
    (1) the possibility and likelihood of a future intelligence explosion is not a scientific topic,
and
    (2) a future intelligence explosion is very unlikely.
This is the order in which I prefer to treat them, even though David discusses them in reverse order. Note that (1) and (2) are two separate claims, and that neither of them immediately implies the other.1

Concerning (1), here's how David spells out what is needed to call something science:
    We make models, we make predictions and test them against data, we revise the model and move forward. You can be a Bayesian or a frequentist or Popperian, emphasise deductive or inductive reasoning, or whatever, but this is what you do if you are a scientist. Scientific discoveries are often radical and surprising but they always rely on the loop of reasoning coupled with observation.
As a demarcation criterion for science, I think this is fair enough. What I disagree with, however, is David's judgement that all discussions of the possibility of a future intelligence explosion fail (and must fail) this criterion.

The first thing that needs to be stressed in this context is that contemporary thinking about the Singularity is not pure speculation in the sense of being isolated from empirical data. On the contrary, it is fed with data of many different kinds. Examples include (a) the observed exponential growth of hardware performance known as Moore's law, (b) the observation that the laws of nature have given rise to intelligent life at least once, and (c) the growing body of knowledge concerning biases in the human cognitive machinery that David somewhat nonchalantly dismisses as irrelevant. See, e.g., the book by Ray Kurzweil (2005) and the paper by Eliezer Yudkowsky (2013) for this and much more. No single one of these data implies an intelligence explosion on its own, but they all serve as input to the emerging theory on the topic, and they are all subject to refinement and revision, as part of "the loop of reasoning coupled with observation" that David talks about in the demarcation criterion above.

At this point, some readers (including David, I presume) will object that it is not statements about down-to-earth things like current hardware performance growth or biases in human cognition that need to be tested, but high-flying hypotheses like
    C1 = "an intelligent explosion is likely to happen around 2100".
C1 is clearly not amenable to direct testing - at least not now, in 2013. So are hypotheses like C1 unscientific?

Here, a comparison with the more familiar territory of climate science may be helpful. Climate science routinely deals with hypotheses about, say, the global mean temperature in 2100 under a given emissions scenario; call such a hypothesis C2. Like C1, C2 cannot be directly tested today. But C2 builds deductively, via climate models, on various more directly observable and testable properties of, e.g., the greenhouse effect, the carbon cycle and the water vapor feedback mechanism. And since we accept not only induction but also deduction as a valid ingredient in science, no serious thinker rules out C2 from the realm of the scientific.2 And by the same token, C1 cannot reasonably be dismissed as unscientific. (Needless to say, since climate science is so incomparably more well-developed than the study of a future intelligence explosion, a hypothesis like C2 stands on incomparably firmer ground than C1. But the fact that the study of the intelligence explosion is such a young area does not make it unscientific.) I think this suffices to rebut David's case for claim (1).

Let me move on to David's claim (2) about the event of a future intelligence explosion being very unlikely. Let's call this event E1. Here David proceeds by comparing E1 to the event E2 of the future realization of some comparatively more pedestrian technological development such as "a person-sized robot can walk around for two minutes without falling over when there is a small gust of wind".3 David thinks (reasonably enough) that E1 is a more difficult and involved project to achieve than E2. Many times more difficult and involved. From this he concludes - and this is his non sequitur - that (under any probability model that reasonably well models the relevant aspects of the real world) the probability P(E1) is many times smaller than P(E2), whence P(E1) must be very small.

The trick employed by David here is simply invalid. It is just not true that a task T1 that is many times more difficult and involved than another task T2 must have a probability P(T1 is achieved) that is many times smaller than P(T2 is achieved). For a simple counterexample, imagine me on May 17, 2014, standing on the starting line of the Göteborgsvarvet half marathon. I have previously completed this race 16 times out of 16 attempts, and I am (let's assume) about as well-prepared as I usually am, apart from being just slightly older. Let T1 be the task of completing the full race (21,097.5 meters) and let T2 be the task of completing the first 500 meters of the race. T1 is many times more difficult and involved than T2. Yet, the probabilities that I achieve T1 and T2, respectively, do not differ all that dramatically: P(T1 is achieved) is about 0.97, while P(T2 is achieved) is about 0.995. So it is manifestly not the case that P(T1 is achieved) is many times smaller than P(T2 is achieved).
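
To see how modest the gap between the two probabilities can be, here is a minimal sketch of the arithmetic. It is a toy model entirely of my own making, in which each 500-meter segment of the race is assumed to be completed with the same fixed probability; the numbers are illustrative assumptions rather than attempts to reproduce the figures above exactly.

    # Toy model (illustrative assumptions only): the race is treated as ~42
    # consecutive 500 m segments, each assumed to be completed with the same
    # probability. The point: finishing all of them need not be "many times"
    # less probable than finishing the first one.
    n_segments = 42          # roughly 21,097.5 m / 500 m
    p_segment = 0.9993       # assumed per-segment completion probability

    p_T2 = p_segment                 # probability of completing the first 500 m
    p_T1 = p_segment ** n_segments   # probability of completing the whole race

    print(f"P(T2 is achieved) is about {p_T2:.4f}")      # ~0.999
    print(f"P(T1 is achieved) is about {p_T1:.4f}")      # ~0.971
    print(f"P(T2)/P(T1) is about {p_T2 / p_T1:.2f}")     # ~1.03

Under these (admittedly crude) assumptions, the ratio between the two probabilities stays close to 1, even though T1 involves more than forty times as much running as T2.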

So David's case for (2) is based on a fallacy. His impulse to try to say something nontrivial about P(E1) is laudable, but a simple trick like the one above just won't do. After having thought about it for a few years, I am convinced that estimating P(E1) is a deep and difficult problem.4 If David seriously wants to shed light on it, I don't think he has any choice but to roll up his sleeves and get to work on the substance of the problem. A key part of the problem (but not the only one) seems to be the "return on cognitive investment" issue introduced on the first few pages of Yudkowsky (2013): which is closer to capturing the truth - the k<1 argument (page 3 of Yudkowsky's paper) or the k>>1 argument (page 5)?
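
To make the role of that parameter k a bit more concrete, here is a deliberately crude caricature of the feedback loop (my own toy sketch, not Yudkowsky's actual model): assume each round of self-improvement yields k times the cognitive gain of the previous round.

    # Toy caricature (my own simplification, not Yudkowsky's model): each round
    # of reinvested cognitive improvement yields k times the previous round's gain.
    def total_gain(k, rounds=30):
        """Cumulative improvement after the given number of reinvestment rounds."""
        gain, step = 0.0, 1.0
        for _ in range(rounds):
            gain += step
            step *= k
        return gain

    for k in (0.5, 0.9, 1.1, 2.0):
        print(f"k = {k}: cumulative gain after 30 rounds is about {total_gain(k):.1f}")
    # k < 1: the gains form a convergent series and the process fizzles out.
    # k > 1: the gains compound without bound - the explosive regime.

Whether anything like the second regime is attainable in the real world is of course exactly the contested question; the sketch only shows why so much hinges on which side of 1 the effective return parameter falls.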

Footnotes

1) If there is any direct link between them, I'd say it's this: if claim (1) is true, then claim (2) is unscientific.

2) A key word here is "serious". There are also climate denialists, who are all too happy to brand hypotheses like C2 as mere speculation and therefore unscientific. In case there are any such creatures among my readers, let me give a third example C3, borrowed from Footnote 3 in a 2008 paper of mine, namely
    C3 = "a newborn human who is immersed head-to-toe for 30 minutes in a tank of fuming nitric acid will not survive".
Hypothesis C3 has (to my knowledge) never been tested directly, and hopefully never will. Yet, it is fair to say that the scientific evidence for C3 is overwhelming, because it can be deduced from empirically well-established results in chemistry and human physiology.

3) Here I'm skipping his lovely detour through Mesopotamia.

4) Forecasting technology is, except over very short time horizons, an extraordinarily difficult problem. Nassim Nicholas Taleb highlights, on p 172 of his provocative 2007 book The Black Swan, one of the difficulties:
    If you are a Stone Age historical thinker called on to predict the future in a comprehensive report for your chief tribal planner, you must project the invention of the wheel or you will miss pretty much all of the action. Now, if you can prophesy the invention of the wheel, you already know what a wheel looks like, and thus you already know how to build a wheel, so you are already on your way.
Note, however, that the difficulty of estimating the probability P(E1) does not imply that this probability is small.

22 comments:

  1. 1) In my view, the intelligence explosion is a highly scientific issue, if not already today, then tomorrow...

    2) Aren't we already there? :)

  2. I think your argument against 2 is unnecessarily weak.
    The assumption that completing the whole race is "many times more difficult and involved" than completing the first 500 m is questionable. The complete race is essentially doing the first 500 m 42 times. Yes, more difficult, due to stamina issues, but not more involved. Most of those who can run a few 500 m stretches at >8 km/h can also run 42 of them, given food and drink. There is no reason to assume that P(T1) << P(T2).

    I think that a better argument against 2) is that the argument resembles the creationist's fallacy: "creating the simplest cell is mindbogglingly difficult. You want me to believe that even more complex beings like us were created by pure chance?!"
    Technological advancement is not a random event; it's a cumulative process. Creating the lunar-capable Saturn 5 rocket was vastly more difficult than building the first liquid-fuel rockets. Still, it was done, step by step.

  3. I don't think your argument holds, Anders M., because you are comparing one thing that we have already observed and that needs to be explained, i.e. a cell, with something else, 'the singularity', which no one has observed. Unless you thought my argument was that technological advancement does not exist or was created by intelligent design. I certainly didn't say that.

    1. My argument was against the claim that P(E1) << P(E2), based on how hard E2 is to achieve at the moment.
      In 1919 someone could have argued "Goddard has a hard time making his rockets fly straight for 10 meters, and he is the best there is. So going to the moon is an extremely improbable event, since it is very much harder to do".
      Since technological advancement (like evolution) is cumulative, I find the probability assumption unfounded. Going to the moon 50 years later might have been a very improbable event or a very likely event, but I cannot see how we can know which, much less how one could have judged that in 1919.

      But I based my comment mainly on Häggström's counterargument. I might therefore also have misunderstood your argument.

  4. I'll answer you, Olle, separately on the two points. On point 1, we are in agreement. All I am doing here is making a definition of nonsense as a threshold. You make the comparison to climate science, which is "incomparably more well-developed" than intelligence explosion. The development of climate science is not primarily a result of people having more time to think harder about it; it is because there is more data on which to base reasonable conclusions. As your acid bathtub example shows, there are several branches of science in which the ground is even more firmly established, leaving possible hypotheses about intelligence explosion far behind. So, yes, on the basis of current evidence, the intelligence explosion lies somewhere on a spectrum. There are other, more extreme theories, for example that the earth will end unexpectedly on this or that date, but the existence of even more extreme nonsense doesn't tell us where to set the threshold.

    The choice of the word 'nonsense' is of course provocative. But I use it in the way one might when discussing whether to fund research in, for example, climate science or in intelligence explosion. You can imagine someone sitting in such a funding position declaring that intelligence explosion "sounds like nonsense". This is just their way of saying that there is little empirical grounding for the area. As soon as we are asked to associate a limited resource (i.e. research funding) with the investigation of a hypothesis, its position on this spectrum comes sharply into focus. Throwing babies in baths of acid is of course unethical, but it is also highly predictable in outcome, so it would also fail a corresponding upper threshold.

    Point 2 is a longer answer, and there I think you misunderstand my argument (although I was not precise in the first place), so you will have to wait until tomorrow.

  5. One central piece of evidence that is being put forward for an impending intelligence explosion in this post is Moore's law. However, this does not constitute persuasive evidence in favor of the intelligence explosion hypothesis, either alone or in conjunction with (b)-(c).

    Analogously to Moore's law, you could make the case for an exponential increase in average human life span or world population. Yet it is trivial to note that these facts do not allow us to arbitrarily extrapolate into the unknown. There are physical limitations that prevent world population from growing exponentially into the future: overcrowding, lack of food and so on. A similar problem faces an extrapolation of average human life span. Factors that limit human life span include the laws of thermodynamics and cancer, to name two.
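
    As a toy illustration of the extrapolation problem (the curves and numbers below are invented purely for illustration): an exponential trend and a logistic trend that saturates at a ceiling can look much the same early on, yet diverge wildly when extrapolated.

        # Toy illustration (invented numbers): early growth that looks exponential
        # is also consistent with a logistic curve that later saturates, so the
        # observed trend alone does not justify unbounded extrapolation.
        import math

        def exponential(t, rate=0.1):
            return math.exp(rate * t)

        def logistic(t, rate=0.1, ceiling=100.0):
            return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-rate * t))

        for t in (0, 10, 20, 50, 100, 150):
            print(f"t = {t:>3}: exponential ~ {exponential(t):10.1f}, "
                  f"logistic ~ {logistic(t):6.1f}")
        # For small t the two curves roughly track each other; by t = 150 the
        # exponential has run off to ~3.3 million while the logistic sits near 100.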

    There is therefore no reason to suppose that enormous increases in computation and intelligence can occur without hitting a ceiling first. In other words, it has not even been demonstrated that the intelligence explosion is a physical possibility, let alone a probable outcome. This is in stark contrast to the climate change analogy, since we know to within a good margin of error what kinds of factors influence climate.

    Although they would be completely devastating for the environment, we know that the temperatures predicted for 2100 are physically possible (we have short-term exposure to them at least once a year in Sweden). On the other hand, we have not experienced intelligence comparable to the level postulated to exist after the intelligence explosion.

    1. Two reactions to your comment, Emil, first a general point and then a more specific:

      1. I am sure you understand that to go from (a) "X has not been established" to (b) "there is no reason to take the possibility of X seriously" is a huge jump, and that there may be a huge middle ground where the evidence at hand warrants statement (a) but not (b). You don't make the jump explicitly in your comment, but there is something in your tone that makes me suspect (and please forgive me if I'm wrong here) that you are inclined towards such a jump to conclusions.

      2. In my recent blog post Reading the Hanson-Yudkowsky debate, 3rd bullet point, I quoted an old illustration by Yudkowsky of the Singularity idea - an illustration that is mathematically beautiful but extremely idealized and oversimplified. And I wrote this:

      "The simplicity of this argument makes it tempting to put forth when explaining the idea of an intelligence explosion to someone who is new to it. I have often done so, but have come to think it may be a pedagogical mistake, because it is very easy for an intelligent layman to come up with compelling arguments against the possibility of the suggested scenario, and then walk away from the issue thinking (erroneously) that he/she has shot down the whole intelligent explosion idea."

      Your comment suggests that even the mere mention of the words "Moore's law" is enough to trigger a similar reaction.

    2. My argument is not that it has not been established (which by itself is enough to torpedo the entire idea, cf. skeptical replies to any untested "alternative" treatment), but rather that there is no reason to think it is even possible to begin with. This is because in every other field of inquiry, we know that physical limitations exist that prevent limitless run-away exponential escalation. What makes intelligence and computation different?

      I am disappointed that you did not provide any substantive rebuttal to my arguments and instead complained about tone and invented motives that I did not explicitly state.

      No worries, I have come to expect this from proponents of the technological singularity / intelligence explosion. I guess I will just have to accept that this is your equivalent of the Loch Ness monster. We all have them, I suppose.

    3. Your talk of "limitless run-away exponential escalation" is a typical case of attacking straw men, Emil. All serious thinkers in this area (and here, as before, I exclude nutcases like Frank Tipler) accept that there are physical limitations and that exponential growth must eventually flatten out. The term "Singularity" is just a manner of speaking, in which parallels to the mathematical notion of a singularity should not be taken literally.

    4. Even if you postulate strictly linear growth, you are still faced with the problem of naive extrapolations. Curve fitting only takes you so far.

      In any case, why should we accept that the physical limitations are most likely to occur only after the development of a super-powerful AGI and not before? What evidence exists for the position that a super-powerful AGI is possible?

    5. "Why should we accept that the physical limitations are most likely to occur only after the development of a super-powerful AGI and not before?"

      We don't need to just accept it. Whether or not it will happen is an open problem. You seem to be convinced that you already know the answer, but not everyone is equally cocksure, and some of them actually try to figure it out by a variety of means (including curve-fitting and extrapolation, but not (or, I should say, not always) in the naive uncritical way that you suggest). Judging from your comments so far, you seem utterly unfamiliar with the literature. I recommend that you try to familiarize yourself with it (beginning, e.g., with the Hanson-Yudkowsky debate that I recommended in a blog post last week, where you will be treated to a plethora of evidence and arguments for or against the likelihood of an intelligence explosion). That is most certainly a more constructive way for you to proceed, compared to standing here in ignorance shouting words of abuse (Alternative medicine! Loch Ness monster!).

  6. So regarding point 2 in the original post, I don't think I implied that the Intelligence Explosion is very unlikely, just that discussing it on the basis of current data doesn't make sense. In fact, I said explicitly that I don't know whether the Intelligence Explosion is true or not.

    When we work with models and data, we should understand the limitations on how much forward reasoning we can do. In the running example you gave, I have a good model which is empirically validated on you and on many other people like you, and it can be used to give sensible predictions of your running everything from 500 metres to a whole marathon. In this case, my model has a high probability of working, and it would predict that both your hypotheses are likely to be true.

    The same is true for the development of walking robots. I have a model for the design of previous robots, which incorporates aspects of the design itself. I can assess this performance and design, as well as that of other similar objects, like jumping robots etc., and predict how long my robot will stand up for next year. But the model is not the hypothesis I wish to test; it is the underlying understanding I have of the process and the method I use for assigning my level of certainty to hypotheses (this is what I meant by model in my original post). And my model says nothing about whether my robot will one day be completely autonomous.

    I think this distinction between model and hypothesis gives a practical way of answering the dichotomy you try to set up. I can imagine situations where you have lots of data but it is difficult to build a self-consistent model, yet you do manage it, and you can convince yourself and others that your model is highly probable. This is Taleb's wheel invention, or the theory of natural selection, or general relativity. Nowadays such achievements in science can be recognised with a Nobel prize. So I agree that the amount of time spent coming up with a model need not be correlated with its reliability once created. But there are also cases where we have no firm data on which to start building models, and where a vast number of plausible models arise from the available data. No-one is going to get a Nobel prize for a model of these things.

    My problem then is how to construct a model that can eventually be used to assign a probability to something like an intelligence explosion. This is where a great deal of uncertainty creeps in. Should I make a model based on developments in bio-engineering? Should I think about how to extrapolate Moore's law in various ways? Should I be evaluating the development of recent algorithms for playing Jeopardy? I just don't know. I don't know where the eventual step leading to an Intelligence Explosion could possibly come from, and nobody else appears to either. Each of the models I could propose would have a large number of parameters that I cannot estimate on the basis of my current data, introducing large uncertainty into my model and hence into any predictions I would make from the model. I am left with a whole host of plausible models of potential General AI, some of which predict an intelligence explosion, and some of which do not.

    This is, unfortunately, why it is impossible for me to rise to your challenge of rolling up my sleeves and getting to work on the substance. I would love to, but there is nothing substantive to work on. I have little data on which to build a model, nor has anyone supplied a concrete explanation of where to start. I could start working on a specific aspect, for example automated learning for playing games, or maybe models of social behaviour. But, actually, this is what I already do in my research (www.collective-behavior.com). One day this information might provide a step on the path towards the Intelligence Explosion, but I have no way of saying how or when it will do so.

  7. Thanks for your comments, David - this one (15:22) and the one above (16:38). I'll answer them jointly.

    Concerning claim (1), you say (16:38) that "we are in agreement". Very good! I take this to mean that, contrary to the impression I got from your blog post, you do not think that work on intelligence explosion hypotheses is automatically unscientific. This would, in turn, seem to mean that you back off somewhat from your blog post's final statement that such work "is not related to serious scientific discourse, and the two should not be mixed up". What you now say (16:38, 2nd paragraph) is that if you were in a position (at VR, for instance) where you had to allocate limited research funds to various subjects, then you would be much more inclined to fund established fields like climate science than work on intelligence explosion. That looks to me like a shift towards a much less extreme position than the one expressed in your blog post - a shift that I am happy to see! It might be overoptimistic of me to hope for a further shift. Still, let me just say that I hope that in case you ever find yourself in the imagined VR position, you will take into account that climate research funding currently exceeds intelligence explosion research funding by a factor of at least 1000 and probably much more than that, and that it might make sense to avoid putting all eggs in one basket. (I'm a great fan of climate research, but I think it would make sense to also put a little bit of money into research on some of the other major threats to civilization that show up on some of our radars.)

    Concerning claim (2) that "a future intelligence explosion is very unlikely", you now (15:22, 1st paragraph) say that I misunderstood you and that you didn't mean to imply that. Well, very good! Let me first say what led me to erroneously conclude that you were implying (2), and then move on to what you say in (15:22, 2nd-6th paragraphs).

    In your blog post, you took the view of a Mesopotamian, and claimed that "P(M1 | Data) [is so small] that we can call M1 nonsense", where M1 is the event that what we now know as 20th century automotive and airline technology is eventually realized, and Data is the knowledge available to the Mesopotamian. You then moved to our time, and made some comparisons that seemed analogous to what you said about Mesopotamia. In this analogy, I took M1 to be analogous to the event E1 that an intelligence explosion eventually happens, and I thought the point of your exercise was that, just as "P(M1 | Data) [is so small] that we can call M1 nonsense", we analogously have that P(E1 | Data') is so small that we can call E1 nonsense, where Data' is the knowledge available to us today. But now you say that was not your intention. Fine.

    What remains is your pessimistic view (15:22, 2nd-6th paragraphs) on the feasibility of intelligence explosion research. And I agree that the intelligence explosion is an extraordinarily difficult research topic, so I do understand your decision to throw in the towel. But that doesn't mean that everyone else has to throw in the towel, and frankly speaking I find it a bit immodest of you to think that just because you personally cannot (currently) come up with promising research ideas in the area, the same applies to everyone else. Perhaps some of your pessimism can also be attributed to the fact (which you are upfront about in the beginning of your blog post) that you are unfamiliar with much of the literature on the topic. In particular, it seems to me that you haven't read the best piece of data-informed theorizing I am aware of in this area (namely Yudkowsky's 2013 paper).

  8. You summarize point 1 in a way I can't disagree with, but I see nothing wrong with retaining the word 'nonsense'. In this context, I would add that the probability of models tends to decrease exponentially with lack of data. This makes 1000 a small number. Despite my argument, it is in my opinion good that someone open-minded like you evaluates research. Too often the conservative threshold is set too near to what has already been done. I hope you can see that it is reasonable for me to write that without you seeing it as weakening my argument in this case. Intelligence Explosion is still nonsense and (at present) unscientific. We all make mistakes.

    I think I have now clarified what I meant by model, and I don't think my answer is just subjective "pessimism" but an argument that there is no known mechanism for extrapolating from little data to good prediction. My reading of Intelligence Explosion texts (and Yudkowsky's 2013 is a prime example) is that there is little focus on data and much on theoretical model-making, logical argument and historical name-dropping. This just won't do if you want to be considered science. Theoretical reasoning first requires hard data. Whatever these arguments say (and you are right that I haven't read through everything), they do not have the proper base in empirics, so I can't consider them.

    Despite your repeated challenges to me and others to explain why we are so confident there is no data, I am afraid the onus is on you to provide the data. It's difficult to provide evidence for something that is missing. But I would suggest that a visit to a Machine Learning conference and an informal chat with a few of the researchers there would be a good start. In many ways, modern computer science is in crisis over its lack of progress and the high level of charlatanism in the area. This is my empirical observation.

  9. We're only in the third round of our discussion, David, and it already shows clear signs of going around in unproductive circles. I thought I saw some progress in your previous two replies, but now you revert to your categorical statements about intelligence explosion research being "nonsense" and "unscientific".

    You say there is no data, and I say there is data. Now you say that in such a situation, the burden of proof is on me. Fair enough. But when you say "data", do you mean (a) data that inform intelligence explosion modelling and theory-building, or (b) data that conclusively demonstrate the possibility and likelihood of an intelligence explosion? Whichever is the case, there is nothing I can say at this point that will move our discussion forward. If you mean (a), then I already gave examples in my blog post, and there's plenty more in the Yudkowsky paper that you now claim to have read (although you also say that you "can't consider" his theoretical arguments). If you mean (b), then my answer is that (to the best of my knowledge), there is none - and that's why the possibility of an intelligence explosion is still an open problem.

    I readily admit, and have done so repeatedly in this discussion, that the state of the art of intelligence explosion research is, despite a number of laudable initial attempts, still very poor compared to more established sciences. But the central question is (in my humble opinion) of such importance (the future of humanity may be at stake, Goddammit!) that it's worth the attempt to keep going at it. Your insistence on branding these attempts as "nonsense" is unhelpful.

  10. Thanks for taking the time to reply, Olle. I still think you have missed the point, so I'll just reiterate it one last time. Of course the data I would like to see is of type (a). We both know that (b) does not exist. The aim of a model is then to take us from (a) to a reasonable prediction about (b). My claim is that the texts you reference are very much focussed on arguing from (a) to (b) in various ways, i.e. making models to show (b) from (a). If they were my PhD students, I would tell them to make sure they have got (a) right before they embark on such a task. There are no examples (that I know of) in science of such a modelling process working in the absence of a good base in (a).

    For example, Charles Darwin’s theory of natural selection was the result of him sailing round the world for five years, talking to farmers about their breeding techniques and other such data-collecting activities. He then sat down and sketched out a tree of life. Scientific success stories share this common feature of data, model, prediction and experimental support.

    A simple test one can do is to read "The Origin of Species" and look at how much space is spent discussing data carefully versus applying some sort of theory. Then do the same for one of the Intelligence Explosion articles you reference. And the theory/data imbalance in these latter articles cannot be excused by a lack of data on 'intelligence'. There are in fact vast quantities of data from neurobiology, neurochemistry, brain scans, genetic studies, behavioural studies, different types of learning algorithms, etc. that could be used as a possible empirical base for work on general AI. Charles Darwin would have loved to have this data! But what you find as soon as you get into this data is a massive number of small problems and unanswered questions. These questions are where the science lies, not in grand theorising. If someone needs to roll their sleeves up, it is people claiming to do useful work on Intelligence Explosion.

    I don't think anyone should take my word for it. If they are interested they should read things themselves, but they shouldn't be sucked in by fancy-sounding theory. They should apply tests like the one I describe above, and other tests like the ones you yourself have proposed, so they are able to compare "real science to poor imitations".

    As much fun as it is to have these discussions with you, I am now going to enjoy a cozy Friday evening, and on Monday I am going to roll up my sleeves and get on with doing scientific research. Thanks for the opportunity to write on your blog. I think it shows great open-mindedness on your part. I really appreciate it, and several of my colleagues have told me they enjoyed the discussion. Have a nice weekend.

    1. Thanks, David, for patiently continuing to spell out your point of view, and for contributing to making my blog a more lively and interesting forum! I guess at this point, late Friday afternoon, it makes sense that we relax a bit and (at least for the time being) agree to disagree. Let me just offer one final proposal for an interpretation of where our fundamental disagreement really lies:

      In your demarcation criterion (as I call it in my blog post), you write that scientific thinking can "emphasise deductive or inductive reasoning". But is any proportion between them acceptable? It will probably be hard to pinpoint a general rule for that, but perhaps we just have different ideas about what range of proportions between the two is acceptable in a given case. And perhaps I am just a bit more willing than you to expand the range somewhat, when faced with such an enormously important question and unable (at present) to find other ways to make meaningful progress.

      TGIF and have a nice weekend!

    2. "as soon as you get in to this data is a massive number of small problems and unanswered questions. These questions are where the science lies, not in grand theorising."

      Aha, this is the real disagreement.

      David Sumpter wants to avoid sounding unscientific by theorizing in the absence of concrete data - and this is a good social norm within the sciences for various reasons.

      The people whom David Sumpter argues against, i.e. Nick Bostrom, Olle, etc., want to avoid the scenario where we first build a smarter-than-human intelligence and THEN start wondering how to control it, because they have realized that that would be very dangerous.

      I expect to see this debate play out many times over the next few decades; people like David probably represent the majority view in the hard sciences because of the strong social norm within the sciences against "grand theorizing".

      The solution is, I think, for people like David to realize that science has a norm against "grand theorizing" because that norm has been useful. In pretty much every field to date, there would have been no benefit to anticipating in advance what a technology would do before the details were fleshed out.

      However, because smarter-than-human intelligence involves a "winner takes all" dynamic, where the first greater-than-human AI could probably undergo an intelligence explosion and form a singleton, we think that we really do benefit from theorizing in this case. So people like Olle and Nick Bostrom etc. want permission to break the scientific norm against "grand theorising".

    3. I think your comment is very much to the point, AlphaCeph (and I tried to express a similar sentiment in my later blog post Om trial-and-error).

      At first I wanted to object to your statement that we "want permission [from people like David] to break the scientific norm against 'grand theorising'" - hey, we do what we want, we don't need their permission! But on second thought, those of us wanting to save humanity from a malign intelligence explosion need, just as everyone else, food on our tables, so in order to keep up our work without starving, we actually do need to convince people like David - many of whom have much influence on science funding - that the work makes scientific sense. Not even the Future of Humanity Institute at the University of Oxford has a long-term stable funding situation.

    4. And when I wrote (10:16 above) that "we actually do need to convince people like David", I should perhaps have added "lest we remain in the unsatisfactory situation (in which I currently am) where we do this in our spare time, while holding other jobs better suited to put food on our tables". I shouldn't complain too much, however, because in my case, that "other job" is a really nice one...

  11. Writing was responsible for an information explosion in ancient Egypt. Personal messages could be sent via scribes from one part of the country to another. It was a boon to education. It marks a boundary between prehistoric times and modern times. There was a new age of exploration. An attempt was made to circumnavigate Africa. Later, under the Ptolemies, scribes copied the books of ships coming into port. The Museum of Alexandria was a marvel of its time. The copying of books by scribes led to the formation of publishing houses and the spread of knowledge beyond a priesthood. It's an example of the cumulative growth of knowledge over time and a form of accumulated wealth. This growth required a systematic study of nature and the world around us. Growth is likely to continue as long as we have new worlds and new fields to investigate. But there has to be a need-to-know for growth to continue.

  12. Defining a superhuman machine begs the question in the intelligence explosion debate. How can we possibly win a contest against a machine that by definition knows all the right moves to make at the start of the game? Saying the machine is superhuman is another way of saying that we are doomed to fail. We may ask instead how we "put the genie in the bottle" so that it can do our bidding. That's human nature. We measure progress by gains in our ability to harness the beast and make it work for us. We did this with agriculture, manufacturing, transportation, computation, etc. We will likely do the same with artificial intelligence and try to harness its daemons. The risk that we run would be the establishment of an übermensch, which in political terms is a dictatorship or an autocratic government. The ideal of the republic was created so that government would serve its citizens. Science fiction writers and Hollywood like to present alternative views of the future. I doubt the Colossus scenario is very likely. The law of diminishing returns would probably apply.
