Can Humanity Last Forever?

I have been interested in the concept of existential risk for a while. Toby Ord defines it like this:

"An existential catastrophe is the destruction of humanity’s longterm potential.", The Precipice (p. 37)

To establish how valuable it is to prevent an existential catastrophe in our lifetimes, we need to find out how long humanity will last if we are successful.
More specifically, we can ask:

Given that no existential catastrophe occurs before the year 2100, what is the expected number of centuries before an existential catastrophe occurs?

I definitely don't know how to answer this question, but I will make an attempt at answering one aspect of it:

Can the answer be infinite?

My immediate thought was that this would be impossible: any non-zero probability of an existential catastrophe,
repeated over enough centuries, brings the cumulative risk arbitrarily close to 100%.
Suppose that the chance for humanity to survive any given century is p < 1. Then the chance to survive n centuries is:

\(P(survive\ n\ centuries) = p^n\)

We can now take a probability q arbitrarily close to zero and write:

\(P(survive\ n\ centuries) = p^{n-log_p(q)+log_p(q)} = p^{n-log_p(q)} p^{log_p(q)} = p^{n-\frac{log(q)}{log(p)}}q\)

Rearranging this shows that:

\({P(survive\ n\ centuries) < q},\ if\ {n > \frac{log(q)}{log(p)}}\)

So it seems that surviving forever is impossible unless we can bring existential risk literally to zero, which is something that Bayes' theorem does not permit, given a non-zero prior.
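As a numerical sanity check, here is a short Python sketch of this argument. The values p = 0.99 and q = 10⁻⁶ are arbitrary illustrative choices, not figures from the text:

```python
import math

# Arbitrary example values: a constant per-century survival chance p < 1,
# and a survival-probability threshold q close to zero.
p = 0.99
q = 1e-6

# The derivation says P(survive n centuries) = p**n falls below q
# once n > log(q) / log(p).
n = math.ceil(math.log(q) / math.log(p)) + 1

print(n, p ** n)  # p**n is now below q
```

Even with a 99% chance of surviving each century, the cumulative survival probability eventually drops below any threshold we choose.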

However, there's a trick.
Suppose that people in the future will always strive to reduce existential risk, and that they are successful. We can model this as follows:

\(a_{n}=P(surviving\ century\ n\ |\ surviving\ the\ first\ n-1\ centuries) =\) the probability of surviving century n, given that we get that far.

\(a_{n+1} = f(a_n)\), where \(f(x) > x\) for all \(0 < x < 1\)

The strategy for humanity then becomes:

Given that the chance of survival in the previous century was \(y\), our task is to ensure that our chance of surviving this century is \(\geq f(y)\).

Many choices for \(f\) will work here, but we will analyse \(f(x) = x^r\), where \(0 < r < 1\).
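A quick numerical spot check (not a proof) that this family satisfies the requirement \(f(x) > x\) on the open interval:

```python
r = 0.8  # arbitrary example exponent in (0, 1)

# For 0 < x < 1 and 0 < r < 1, x**r > x: raising a number in (0, 1)
# to a smaller positive exponent moves it toward 1.
xs = [0.01, 0.1, 0.5, 0.9, 0.99]
print([round(x ** r, 3) for x in xs])
```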

As an example, take Toby Ord's estimate of existential risk this century, namely \(\frac{1}{6} \approx 0.167\), and arbitrarily pick \(r = 0.8\).
Let century 0 be the current century, so that \(P(surviving\ century\ 0) = a_0 = 1 - \frac{1}{6} \approx 0.83\). Applying the recurrence repeatedly gives the survival target for each subsequent century:

\(a_n = {(a_{n-1})}^{r} = {(a_{n-2})}^{r^2} = {(a_{n-3})}^{r^3} =\ ...\ = {(a_{0})}^{r^n}\)
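To make these targets concrete, here is a short Python sketch using the example numbers above (\(a_0 = 5/6\), from Ord's one-in-six estimate, and the arbitrarily chosen \(r = 0.8\)):

```python
# Example numbers from the text: a_0 = 5/6 (survival chance implied by
# Ord's 1-in-6 risk estimate) and an arbitrary improvement rate r = 0.8.
a0 = 5 / 6
r = 0.8

# a_n = a_0 ** (r ** n): the survival target for century n.
targets = [a0 ** (r ** n) for n in range(5)]

for n, a in enumerate(targets):
    print(f"century {n}: survive with probability >= {a:.3f}")
# The targets rise each century, approaching (but never reaching) 1.
```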

We can then look at the probability of surviving until century n:

\(P(surviving\ n\ centuries) = P(surviving\ century\ n\ |\ surviving\ n-1\ centuries)\ \cdot P(surviving\ n-1\ centuries) = a_n \cdot P(surviving\ n-1\ centuries)\)
\(= a_n \cdot a_{n-1} \cdot P(surviving\ n-2\ centuries) = a_n \cdot a_{n-1} \cdot ... \cdot a_1 \cdot P(surviving\ century\ 0) = \prod_{i=0}^{n} a_i\)

Plugging in our formula for \(a_n\) then gives:

\(P(surviving\ n\ centuries) = \prod_{i=0}^{n}{{(a_{0})}^{r^i}} = {(a_{0})}^{\sum_{i=0}^{n} {r^i}}\)

The exponent here is a partial sum of a geometric series, which, since \(0 < r < 1\), converges to a finite value \(s = \frac{1}{1-r}\) as \(n\) goes to infinity.

We can then say that the chance of surviving an arbitrary number of centuries never drops below \({(a_{0})}^{s}\).
For our example with \(a_0 = 0.83\) and \(r = 0.8\), this lower bound becomes \({0.83}^{\frac{1}{1-0.8}} \approx 0.4\)
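We can verify this convergence numerically. A minimal sketch, again with the illustrative values \(a_0 = 5/6\) and \(r = 0.8\):

```python
a0 = 5 / 6
r = 0.8

def p_survive(n):
    # P(surviving n centuries) = a_0 ** sum_{i=0}^{n} r**i
    return a0 ** sum(r ** i for i in range(n + 1))

# The limiting value a_0 ** (1 / (1 - r)) = a_0 ** 5
limit = a0 ** (1 / (1 - r))

print(limit)           # roughly 0.40
print(p_survive(100))  # already very close to the limit
```

The partial products decrease with each century but never fall below the limit, which is the whole point of the strategy.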

This possibility has strange implications for ethics. The strategy would, if successful (and depending on your moral view), generate infinite value.
For instance: is it better to pick a strategy with a 99% chance of surviving arbitrarily many centuries over one with a 1% chance?
Both would generate infinite value. Does that mean they are equally good?

If you find these questions interesting, consider reading Nick Bostrom's paper "Infinite Ethics."