- (S1) All swans are white.
- (S2) At least one non-white swan exists.
- (a) is moderately familiar with the Popperian theory of science,
- (b) is fond of the kind of asymmetry that appears in the all-swans-are-white example, and
- (c) rejoices, whenever he[1] encounters two competing hypotheses one of which he for whatever reason prefers, in claiming some asymmetry such that the entire (or almost the entire) burden of proof falls on the other hypothesis, and in insisting that until a conclusive such proof is presented, we may take for granted that the preferred hypothesis is correct.
- (H1) Achieving superintelligence is hard: not attainable (other than possibly by extreme luck) through human technological progress by the year 2100.
- (H2) Achieving superintelligence is relatively easy: within reach of human technological progress, if allowed to continue unhampered, by the year 2100.
- Indeed, neither do we take into account the non-zero probability of a black hole appearing at CERN and destroying the world...
- Surely you must understand the crucial disanalogy between the Large Hadron Collider black hole issue, and the AI catastrophe issue. In the former case, there are strong arguments (see Giddings and Mangano) for why the probability of catastrophe is small. In the latter case, there is no counterpart of the Giddings-Mangano paper and related work. All there is, is people like you having an intuitive hunch that the probability is small, and babbling about "mere logical possibility".
- Does one have to consider every possible scenario, however crazy, and compute its probability before one is allowed to say that other things are more pressing?
- You have made it abundantly clear that superintelligence is a logical possibility, but this was preaching to the choir; most of us believe that anyway. But where is your evidence?