I have been interested in the concept of existential risk for a while. Toby Ord defines it like this:
"An existential catastrophe is the destruction of humanity's long-term potential."
To establish how valuable it is to prevent an existential catastrophe in our lifetimes, we need to find out how long
humanity will last if we are successful.
More specifically, we can ask:
Given that no existential catastrophe occurs before the year 2100, what is the expected number of centuries until an existential catastrophe occurs?
I definitely don't know how to answer this question, but I will make an attempt at answering one aspect of it:
Can the answer be infinite?
My immediate thought was that this would be impossible: any non-zero probability of an existential catastrophe,
repeated over enough centuries, would push the cumulative risk arbitrarily close to 100%.
Suppose that the chance for humanity to survive any given century is \(p < 1\).
Then the chance to survive \(n\) centuries is:
\[ P(surviving\ n\ centuries) = p^n \]
We can now take a probability \(q\) arbitrarily close to zero, and ask when \(p^n \le q\).
Rearranging this (and noting that \(\ln p < 0\), which flips the inequality) shows that:
\[ n \ge \frac{\ln q}{\ln p} \]
So after finitely many centuries, the survival probability drops below any \(q > 0\).
So it seems that surviving forever is impossible unless we can bring existential risk literally to zero,
which is something that Bayes' theorem does not permit, given a non-zero prior.
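As a quick numerical sanity check, here is a short Python sketch of this inequality; the values for \(p\) and \(q\) are arbitrary examples, not estimates from this post:

```python
import math

p = 0.99  # example: a 99% chance to survive any given century
q = 0.01  # target: cumulative survival probability below 1%

# Smallest n with p**n <= q, i.e. n >= ln(q) / ln(p)
# (ln(p) is negative, so dividing flips the inequality).
n = math.ceil(math.log(q) / math.log(p))
print(n, p ** n)  # 459 ~0.0099
```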
However, there's a trick.
Suppose that people in the future will always strive to reduce existential risk,
and that they are successful. We can model this as follows: let \(a_n\) be the probability that humanity survives century \(n\), given that it has survived all the centuries before it.
The strategy for humanity then becomes: pick a function \(f\) with \(f(x) > x\) for \(0 < x < 1\), and each century push the survival probability up according to
\[ a_{n+1} = f(a_n) \]
Many choices for \(f\) will work here, but we will analyse \(f(x) = x^r\), where \(0 < r < 1\). (Note that \(x^r > x\) when \(0 < x < 1\), so each century's survival probability is higher than the last.)
We can use as an example Toby Ord's estimate of existential risk this century, namely \(\frac{1}{6} \approx 0.167\),
and arbitrarily pick \(r = 0.8\).
The chance to survive the 21st century then becomes \(1 - \frac{1}{6} \approx 0.833\).
And the X-risk targets for the next few centuries become roughly \(13.6\%\), \(11.0\%\), \(8.9\%\), \(7.2\%\), and so on, each obtained by applying \(f\) to the previous century's survival probability.
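Here is a minimal Python sketch of that recurrence, using the example numbers above:

```python
a = 1 - 1/6  # survival probability for the current century (Ord's 1-in-6 risk)
r = 0.8

# Apply the strategy a_{n+1} = a_n ** r and print the implied
# X-risk target 1 - a_n for the first few centuries.
for n in range(5):
    print(f"century {n}: risk target = {1 - a:.3f}")
    a = a ** r
```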
Let's index centuries from 0, so that \(a_0\) is the survival probability for the current century and \(P(surviving\ century\ 0) = a_0\). Applying the recurrence repeatedly, we then get that:
\[ a_n = {a_0}^{(r^n)} \]
We can then look at the probability of surviving until century \(n\):
\[ P(surviving\ until\ century\ n) = \prod_{k=0}^{n} a_k \]
Plugging in our formula for \(a_n\) then gives:
\[ \prod_{k=0}^{n} {a_0}^{(r^k)} = {a_0}^{\sum_{k=0}^{n} r^k} \]
The exponent here is a partial sum of a geometric series, which means that it converges to a finite value \(s\) as \(n\) goes to infinity:
\[ s = \sum_{k=0}^{\infty} r^k = \frac{1}{1-r} \]
We can then say that the chance to survive an arbitrary number of centuries can never go
below \({(a_{0})}^{s}\)
For our example with \(a_0 = 0.833\) and \(r = 0.8\), we get \(s = \frac{1}{1-0.8} = 5\), and this probability becomes \({0.833}^{5} \approx 0.40\).
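As a final sanity check, a short Python sketch showing that the product really does level off near \({a_0}^{s}\) instead of decaying to zero:

```python
a0 = 1 - 1/6
r = 0.8

# Probability of surviving centuries 0..999: the product of a0 ** (r ** k).
survival = 1.0
for k in range(1000):
    survival *= a0 ** (r ** k)

print(survival)             # ~0.402
print(a0 ** (1 / (1 - r)))  # the limit a0 ** s with s = 5: ~0.402
```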
This possibility has weird implications for ethics. The strategy would, if successful (and depending on your moral
view), generate infinite value.
For instance: is it better to pick a strategy with a 99% chance of surviving
arbitrarily many centuries than one with only a 1% chance?
Both would generate infinite expected value. So does that mean that they are equally good?
If you find these questions interesting, consider reading Nick Bostrom's paper "Infinite Ethics."