Doomsday argument
from Wikipedia
World population from 10,000 BC to AD 2000

The doomsday argument (DA), or Carter catastrophe, is a probabilistic argument that aims to predict the total number of humans who will ever live. It argues that if a human's birth rank is randomly sampled from the set of all humans who will ever live, it is improbable that one would be at the extreme beginning. This implies that the total number of humans is unlikely to be much larger than the number of humans born so far.

The doomsday argument was originally proposed by the astrophysicist Brandon Carter in 1983,[1] leading to the initial name of the Carter catastrophe. The argument was subsequently championed by the philosopher John A. Leslie and has since been independently conceived by J. Richard Gott[2] and Holger Bech Nielsen.[3]

Summary


The premise of the argument is as follows: suppose that the total number of human beings who will ever exist is fixed. If so, the likelihood of a randomly selected person existing at a particular time in history would be proportional to the total population at that time. Given this, the argument posits that a person alive today should adjust their expectations about the future of the human race because their existence provides information about the total number of humans that will ever live.

If the total number of humans who were born or will ever be born is denoted by N, then the Copernican principle suggests that any one human is equally likely to find themselves in any position n of the total population N.

The fraction f = n/N is uniformly distributed on (0,1), even after learning the absolute position n. For example, there is a 95% chance that f is in the interval (0.05,1), that is f > 0.05. In other words, one can assume with 95% certainty that any individual human would be within the last 95% of all the humans ever to be born. If the absolute position n is known, this argument implies a 95% confidence upper bound for N, obtained by rearranging n/N > 0.05 to give N < 20n.

If Leslie's figure[4] is used, then approximately 60 billion humans have been born so far, so it can be estimated that there is a 95% chance that the total number of humans N will be less than 20 × 60 billion = 1.2 trillion. Assuming that the world population stabilizes at 10 billion and a life expectancy of 80 years, it can be estimated that the remaining 1,140 billion humans will be born in 9,120 years. Depending on the projection of the world population in the forthcoming centuries, estimates may vary, but the argument states that it is unlikely that more than 1.2 trillion humans will ever live.
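
As a minimal sketch (not part of the original sources), the arithmetic above can be reproduced directly; the 60-billion birth count, the 10-billion stable population, and the 80-year life expectancy are the assumptions stated in the preceding paragraph:

```python
# Sketch of the 95% upper-bound calculation described above.
n = 60e9                    # humans born so far (Leslie's figure)
confidence = 0.95

N_upper = n / (1 - confidence)       # N < 20n with 95% confidence
remaining_births = N_upper - n       # humans still to be born under that bound

population = 10e9           # assumed stable world population
life_expectancy = 80        # years
births_per_year = population / life_expectancy

years_remaining = remaining_births / births_per_year

print(f"95% upper bound on N:  {N_upper:.3g}")           # 1.2e+12
print(f"Remaining births:      {remaining_births:.3g}")  # 1.14e+12
print(f"Years to exhaust them: {years_remaining:.0f}")   # 9120
```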

Aspects


Assume, for simplicity, that the total number of humans who will ever be born is 60 billion (N1), or 6,000 billion (N2).[5] If there is no prior knowledge of the position that a currently living individual, X, has in the history of humanity, one may instead compute how many humans were born before X, and arrive at say 59,854,795,447, which would necessarily place X among the first 60 billion humans who have ever lived.

It is possible to sum the probabilities for each value of N and, therefore, to compute a statistical 'confidence limit' on N. For example, taking the numbers above: under N1 the probability that X falls among the first 60 billion humans is 100%, whereas under N2 it is only 1%, so with equal prior odds Bayes' theorem makes N1 roughly 99 times more probable than N2. That is, it is 99% certain that N is smaller than 6 trillion.

Note that, as remarked above, this argument assumes that the prior probability for N is flat, or 50% for N1 and 50% for N2 in the absence of any information about X. On the other hand, it is possible to conclude, given X, that N2 is more likely than N1 if a different prior is used for N. More precisely, Bayes' theorem tells us that P(N|X) = P(X|N)P(N)/P(X), and the conservative application of the Copernican principle tells us only how to calculate P(X|N). Taking P(X) to be flat, we still have to assume the prior probability P(N) that the total number of humans is N. If we conclude that N2 is much more likely than N1 (for example, because producing a larger population takes more time, increasing the chance that a low-probability but cataclysmic natural event will take place in that time), then the posterior P(N|X) can become more heavily weighted towards the bigger value of N. A further, more detailed discussion, as well as relevant distributions P(N), are given below in the Counterarguments section.
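
A short sketch of the two-hypothesis update discussed above, assuming the flat 50/50 prior over N1 and N2 and the indifference likelihood P(X|N) = 1/N; the numbers are the illustrative ones used in this section:

```python
# Two-hypothesis version of the argument (sketch).  The likelihood of a
# specific birth rank under a total population N is 1/N (principle of
# indifference), and the prior over {N1, N2} is taken to be flat.
N1, N2 = 60e9, 6000e9          # 60 billion vs 6,000 billion total humans
prior = {N1: 0.5, N2: 0.5}

# Any birth rank below 60 billion is compatible with both hypotheses,
# so the update uses only the 1/N likelihood of that specific rank.
unnormalized = {N: (1.0 / N) * prior[N] for N in prior}
total = sum(unnormalized.values())
posterior = {N: p / total for N, p in unnormalized.items()}

print(f"P(N = 60 billion    | X) = {posterior[N1]:.4f}")  # ~0.9901
print(f"P(N = 6,000 billion | X) = {posterior[N2]:.4f}")  # ~0.0099
```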

The doomsday argument does not say that humanity cannot or will not exist indefinitely. It does not put any upper limit on the number of humans that will ever exist nor provide a date for when humanity will become extinct. An abbreviated form of the argument does make these claims, by confusing probability with certainty. However, the actual conclusion for the version used above is that there is a 95% chance of extinction within 9,120 years and a 5% chance that some humans will still be alive at the end of that period. (The precise numbers vary among specific doomsday arguments.)

Variations


This argument has generated a philosophical debate, and no consensus has yet emerged on its solution. The variants described below produce the DA by separate derivations.

Gott's formulation: "vague prior" total population


Gott specifically proposes the functional form for the prior distribution of the number of people who will ever be born (N). Gott's DA used the vague prior distribution:

P(N) = k/N

where

  • P(N) is the prior probability of the total number of humans who will ever be born, assigned before discovering n, the number of humans born so far.
  • The constant, k, is chosen to normalize the sum of P(N). The value chosen is not important here, just the functional form (this is an improper prior, so no value of k gives a valid distribution, but Bayesian inference is still possible using it.)

Since Gott specifies the prior distribution of total humans, P(N), Bayes' theorem and the principle of indifference alone give us P(N|n), the probability of N humans being born if n is a random draw from N:

P(N|n) = P(n|N) P(N) / P(n)

This is Bayes' theorem for the posterior probability of the total population ever born of N, conditioned on population born thus far of n. Now, using the indifference principle:

P(n|N) = 1/N  (for n ≤ N).

The unconditioned n distribution of the current population is identical to the vague prior N probability density function,[note 1] so:

P(n) = k/n,

giving P(N|n) for each specific N (through a substitution into the posterior probability equation):

P(N|n) = n/N²  (for N ≥ n).

The easiest way to produce the doomsday estimate with a given confidence (say 95%) is to pretend that N is a continuous variable (since it is very large) and integrate over the probability density from N = n to N = Z. (This will give a function for the probability that N ≤ Z):

P(N ≤ Z) = (Z − n)/Z = 1 − n/Z

Defining Z = 20n gives:

P(N ≤ 20n) = 19/20 = 95%.

This is the simplest Bayesian derivation of the doomsday argument:

The chance that the total number of humans that will ever be born (N) is greater than twenty times the total that have been born is below 5%

The use of a vague prior distribution seems well-motivated as it assumes as little knowledge as possible about N, given that some particular function must be chosen. It is equivalent to the assumption that the probability density of one's fractional position remains uniformly distributed even after learning of one's absolute position (n).
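
The closed-form bound P(N ≤ Z | n) = 1 − n/Z can be checked numerically; the following is an illustrative sketch (using NumPy, with Leslie's n ≈ 60 billion as the assumed birth count), not code from any of the cited papers:

```python
import numpy as np

n = 60e9   # births so far (Leslie's figure, as used above)

def cdf_closed_form(Z):
    """P(N <= Z | n) = 1 - n/Z for the posterior density n/N**2 on [n, inf)."""
    return 1.0 - n / Z

def cdf_numeric(Z, steps=2_000_000):
    """Midpoint-rule integration of the posterior density n/N**2 from n to Z."""
    edges = np.linspace(n, Z, steps + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    return float(np.sum((n / mids**2) * np.diff(edges)))

Z = 20 * n
print(f"P(N <= 20n), closed form: {cdf_closed_form(Z):.4f}")   # 0.9500
print(f"P(N <= 20n), numeric:     {cdf_numeric(Z):.4f}")       # ~0.9500

# Upper bound on N for an arbitrary confidence level c: Z = n / (1 - c)
for c in (0.50, 0.95, 0.99):
    print(f"{c:.0%} upper bound on N: {n / (1 - c):.3g}")      # 1.2e11, 1.2e12, 6e12
```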

Gott's "reference class" in his original 1993 paper was not the number of births, but the number of years "humans" had existed as a species, which he put at 200,000. Also, Gott tried to give a 95% confidence interval between a minimum survival time and a maximum. Because of the 2.5% chance that he gives to underestimating the minimum, he has only a 2.5% chance of overestimating the maximum. This equates to 97.5% confidence that extinction occurs before the upper boundary of his confidence interval, which can be used in the integral above with Z = 40n, and n = 200,000 years:

This is how Gott produces a 97.5% confidence of extinction within N ≤ 8,000,000 years. The number he quoted was the likely time remaining, N − n = 7.8 million years. This was much higher than the temporal confidence bound produced by counting births, because it applied the principle of indifference to time. (Producing different estimates by sampling different parameters in the same hypothesis is Bertrand's paradox.) Similarly, there is a 97.5% chance that the present lies in the first 97.5% of human history, so there is a 97.5% chance that the total lifespan of humanity will be at least

;

In other words, Gott's argument gives a 95% confidence that humans will go extinct between 5,100 and 7.8 million years in the future.
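
A minimal sketch of the temporal version, assuming Gott's 200,000-year figure for the age of the species; the factor of 39 comes from placing 2.5% in each tail of the uniform fraction, as described above:

```python
# Gott's 95% interval for humanity's future duration, as described above.
# With 2.5% in each tail of the uniform fraction f = t_past / T_total,
# the future duration lies between t/39 and 39*t with 95% confidence.
t_past = 200_000   # years Homo sapiens has existed (Gott's figure)

lower = t_past / 39
upper = 39 * t_past
print(f"95% interval for future survival: {lower:,.0f} to {upper:,.0f} years")
# -> roughly 5,128 to 7,800,000 years
```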

Gott has also tested this formulation against the Berlin Wall and Broadway and off-Broadway plays.[6]

Leslie's argument differs from Gott's version in that he does not assume a vague prior probability distribution for N. Instead, he argues that the force of the doomsday argument resides purely in the increased probability of an early doomsday once you take into account your birth position, regardless of your prior probability distribution for N. He calls this the probability shift.

Heinz von Foerster argued that humanity's abilities to construct societies, civilizations and technologies do not result in self-inhibition. Rather, societies' success varies directly with population size. Von Foerster found that this model fits some 25 data points from the birth of Jesus to 1958, with only 7% of the variance left unexplained. Several follow-up letters (1961, 1962, ...) were published in Science showing that von Foerster's equation was still on track. The data continued to fit up until 1973. The most remarkable thing about von Foerster's model was that it predicted that the human population would reach infinity, or a mathematical singularity, on Friday, November 13, 2026. In fact, von Foerster did not imply that the world population on that day could actually become infinite. The real implication was that the world population growth pattern followed for many centuries prior to 1960 was about to come to an end and be transformed into a radically different pattern. Note that this prediction began to be fulfilled just a few years after the "doomsday" argument was published.[note 2]

Reference classes


The reference class from which n is drawn, and of which N is the ultimate size, is a crucial point of contention in the doomsday argument. The "standard" doomsday argument hypothesis skips over this point entirely, merely stating that the reference class is the number of "people". Given that you are human, the Copernican principle might be used to determine if you were born exceptionally early; however, the term "human" has been heavily contested on practical and philosophical grounds. According to Nick Bostrom, consciousness is (part of) the discriminator between what is in and what is out of the reference class, and therefore extraterrestrial intelligence might have a significant impact on the calculation.[citation needed]

The following sub-sections relate to different suggested reference classes, each of which has had the standard doomsday argument applied to it.

SSSA: Sampling from observer-moments


Nick Bostrom, considering observation selection effects, has produced a Self-Sampling Assumption (SSA): "that you should think of yourself as if you were a random observer from a suitable reference class". If the "reference class" is the set of humans to ever be born, this gives N < 20n with 95% confidence (the standard doomsday argument). However, he has refined this idea to apply to observer-moments rather than just observers. He has formalized this as:[7]

The strong self-sampling assumption (SSSA): Each observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class.

An application of the principle underlying SSSA (though this application is nowhere expressly articulated by Bostrom) is: if the minute in which you read this article is randomly selected from every minute in every human's lifespan, then (with 95% confidence) this event has occurred after the first 5% of human observer-moments. If the mean lifespan in the future is twice the historic mean lifespan, this implies 95% confidence that N < 10n (the average future human will account for twice the observer-moments of the average historic human). Therefore, the 95th-percentile extinction-time estimate in this version is 4,560 years.

Counterarguments


We are in the earliest 5%, a priori


One counterargument to the doomsday argument agrees with its statistical methods but disagrees with its extinction-time estimate. This position requires justifying why the observer cannot be assumed to be randomly selected from the set of all humans ever to be born, which implies that this set is not an appropriate reference class. In disagreeing with the doomsday argument's conclusion, this position implies that the observer is within the first 5% of humans to be born.

By analogy, if one is a member of 50,000 people in a collaborative project, the reasoning of the doomsday argument implies that there will never be more than a million members of that project, within a 95% confidence interval. However, if one's characteristics are typical of an early adopter, rather than typical of an average member over the project's lifespan, then it may not be reasonable to assume one has joined the project at a random point in its life. For instance, the mainstream of potential users will prefer to be involved when the project is nearly complete. However, someone who enjoys the project precisely because it is incomplete already knows that they are unusual, prior to discovering their early involvement.

If one has measurable attributes that set one apart from the typical long-run user, the project doomsday argument can be refuted based on the fact that one could expect to be within the first 5% of members, a priori. The analogy to the total-human-population form of the argument is that confidence in a prediction of the distribution of human characteristics that places modern and historic humans outside the mainstream implies that it is already known, before examining n, that one is likely to be very early in N. This is an argument for changing the reference class.

For example, if one is certain that 99% of humans who will ever live will be cyborgs, but that only a negligible fraction of humans who have been born to date are cyborgs, one could be equally certain that at least one hundred times as many people remain to be born as have been.

Robin Hanson's paper sums up these criticisms of the doomsday argument:[8]

All else is not equal; we have good reasons for thinking we are not randomly selected humans from all who will ever live.

Human extinction is distant, a posteriori


The a posteriori observation that extinction-level events are rare could be offered as evidence that the doomsday argument's predictions are implausible; typically, extinctions of dominant species happen less often than once in a million years. Therefore, it is argued that human extinction is unlikely within the next ten millennia. (This is itself a probabilistic argument, drawing a different conclusion from the doomsday argument.)

In Bayesian terms, this response to the doomsday argument says that our knowledge of history (or ability to prevent disaster) produces a prior marginal for N with a minimum value in the trillions. If N is distributed uniformly from 10^12 to 10^13, for example, then the probability of N < 1,200 billion inferred from n = 60 billion will be extremely small. This is an equally impeccable Bayesian calculation, rejecting the Copernican principle because we must be 'special observers' since there is no likely mechanism for humanity to go extinct within the next hundred thousand years.

This response is accused of overlooking the technological threats to humanity's survival, to which earlier life was not subject, and is specifically rejected by most[by whom?] academic critics of the doomsday argument (arguably excepting Robin Hanson).

The prior N distribution may make n very uninformative


Robin Hanson argues that N's prior may be exponentially distributed:[8]

N = c e^(qU)

Here, c and q are constants, and U is a uniform random draw. If q is large, then our 95% confidence upper bound is on the uniform draw, not the exponential value of N.

The simplest way to compare this with Gott's Bayesian argument is to flatten the distribution from the vague prior by having the probability fall off more slowly with N (than inverse proportionally). This corresponds to the idea that humanity's growth may be exponential in time with doomsday having a vague prior probability density function in time. This would mean that N, the last birth, would have a distribution looking like the following:

Pr(N) = k/N^α,  0 < α < 1

This prior N distribution is all that is required (with the principle of indifference) to produce the inference of N from n, and this is done in an identical way to the standard case, as described by Gott (equivalent to α = 1 in this distribution):

Pr(n) = k/(α n^α)

Substituting into the posterior probability equation:

Pr(N|n) = α n^α / N^(α+1)

Integrating the probability of any N above xn:

Pr(N > xn) = 1/x^α

For example, if x = 20, and α = 0.5, this becomes:

Pr(N > 20n) = 1/√20 ≈ 22.4%

Therefore, with this prior, the chance of a trillion births is well over 20%, rather than the 5% chance given by the standard DA. If α is reduced further by assuming a flatter prior N distribution, then the limits on N given by n become weaker. An α of one reproduces Gott's calculation with a birth reference class, and α around 0.5 could approximate his temporal confidence interval calculation (if the population were expanding exponentially). As α decreases, n becomes less and less informative about N. In the limit α → 0 this distribution approaches an (unbounded) uniform distribution, where all values of N are equally likely. This is Page et al.'s "Assumption 3", which they find few reasons to reject, a priori. (Although all distributions with α ≤ 1 are improper priors, this applies to Gott's vague-prior distribution also, and they can all be converted to produce proper integrals by postulating a finite upper population limit.) Since the probability of reaching a population of size 2N is usually thought of as the chance of reaching N multiplied by the survival probability from N to 2N, it follows that Pr(N) must be a monotonically decreasing function of N, but this does not necessarily require an inverse proportionality.[8]
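
The tail formula Pr(N > xn) = 1/x^α reconstructed above can be evaluated for a few values of α; this sketch simply tabulates the closed form and is not drawn from Hanson's or Page et al.'s papers:

```python
# Tail probability of N under the flattened prior Pr(N) ∝ N^(-alpha), 0 < alpha <= 1,
# combined with the indifference likelihood P(n|N) = 1/N, which gives
# Pr(N > x*n | n) = x**(-alpha)   (alpha = 1 recovers Gott's 5% for x = 20).
def prob_N_exceeds(x, alpha):
    return x ** (-alpha)

for alpha in (1.0, 0.5, 0.25):
    p = prob_N_exceeds(20, alpha)
    print(f"alpha = {alpha:4.2f}:  Pr(N > 20n) = {p:.3f}")
# alpha = 1.00: 0.050   alpha = 0.50: 0.224   alpha = 0.25: 0.473
```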

Infinite expectation


Another objection to the doomsday argument is that the expected total human population is actually infinite.[9] The calculation is as follows:

The total human population N = n/f, where n is the human population to date and f is our fractional position in the total.
We assume that f is uniformly distributed on (0,1].
The expectation of N is

E(N) = ∫ (n/f) df over (0,1] = ∞.
For a similar example of counterintuitive infinite expectations, see the St. Petersburg paradox.
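
A Monte Carlo sketch of the divergence: repeatedly drawing f uniformly and averaging n/f never settles to a finite value. The sample sizes and seed are arbitrary choices for illustration:

```python
# Monte Carlo illustration of the divergent expectation E[N] = E[n/f]:
# as more samples of f are drawn from (0,1), the running mean keeps drifting
# upward (with occasional large jumps) instead of converging.
import random

random.seed(0)
n = 60e9   # births to date, as in the text
total, count = 0.0, 0
for k in range(1, 10_000_001):
    f = random.random() or 1e-12   # uniform draw; guard against an exact 0.0
    total += n / f
    count += 1
    if k in (10**3, 10**4, 10**5, 10**6, 10**7):
        print(f"samples = {k:>10,d}   running mean of N = {total / count:.3g}")
# The printed means do not converge, reflecting the infinite expectation.
```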

Self-indication assumption: The possibility of not existing at all


One objection is that the possibility of a human existing at all depends on how many humans will ever exist (N). If this is a high number, then the possibility of their existing is higher than if only a few humans will ever exist. Since they do indeed exist, this is evidence that the number of humans that will ever exist is high.[10]

This objection, originally by Dennis Dieks (1992),[11] is now known by Nick Bostrom's name for it: the "Self-Indication Assumption objection". It can be shown that some SIAs prevent any inference of N from n (the current population).[12]

The SIA has been defended by Matthew Adelstein, arguing that all alternatives to the SIA imply the soundness of the doomsday argument, and other even stranger conclusions.[13]

Caves' rebuttal


The Bayesian argument by Carlton M. Caves states that the uniform distribution assumption is incompatible with the Copernican principle, not a consequence of it.[14]

Caves gives a number of examples to argue that Gott's rule is implausible. For instance, he says, imagine stumbling into a birthday party, about which you know nothing:

Your friendly enquiry about the age of the celebrant elicits the reply that she is celebrating her (tp=) 50th birthday. According to Gott, you can predict with 95% confidence that the woman will survive between [50]/39 = 1.28 years and 39[×50] = 1,950 years into the future. Since the wide range encompasses reasonable expectations regarding the woman's survival, it might not seem so bad, till one realizes that [Gott's rule] predicts that with probability 1/2 the woman will survive beyond 100 years old and with probability 1/3 beyond 150. Few of us would want to bet on the woman's survival using Gott's rule. (See Caves' online paper below.)

Caves' example exposes a weakness in J. Richard Gott's "Copernicus method" DA: it does not specify when the "Copernicus method" can be applied. But this criticism is less effective against more refined versions of the argument. Epistemological refinements of Gott's argument by philosophers such as Nick Bostrom specify that:

Knowing the absolute birth rank (n) must give no information on the total population (N).

Careful DA variants specified with this rule aren't shown implausible by Caves' "Old Lady" example above, because the woman's age is given prior to the estimate of her lifespan. Since human age gives an estimate of survival time (via actuarial tables), Caves' birthday party age-estimate could not fall into the class of DA problems defined with this proviso.

To produce a comparable "Birthday Party Example" of the carefully specified Bayesian DA, we would need to completely exclude all prior knowledge of likely human life spans; in principle this could be done (e.g., in a hypothetical amnesia chamber). However, this would remove the modified example from everyday experience. To keep it in the everyday realm, the lady's age must be hidden prior to the survival estimate being made. (Although this is no longer exactly the DA, it is much more comparable to it.)

Without knowing the lady's age, the DA reasoning produces a rule to convert the birthday (n) into a maximum lifespan with 50% confidence (N). Gott's Copernicus method rule is simply: Prob (N < 2n) = 50%. How accurate would this estimate turn out to be? Western demographics are now fairly uniform across ages, so a random birthday (n) could be (very roughly) approximated by a U(0,M] draw where M is the maximum lifespan in the census. In this 'flat' model, everyone shares the same lifespan so N = M. If n happens to be less than M/2, then Gott's 2n estimate of N will be under M, its true figure. The other half of the time, 2n overestimates M, and in this case (the one Caves highlights in his example) the subject will die before the 2n estimate is reached. In this "flat demographics" model Gott's 50% confidence figure is proven right 50% of the time.
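
The "flat demographics" claim can be checked with a small simulation; the lifespan M and the trial count are arbitrary, and the model (everyone lives exactly M years, ages sampled uniformly) is the simplification described in the paragraph above, not Caves' own calculation:

```python
# Monte Carlo check of the "flat demographics" model described above:
# everyone lives to exactly M years, a randomly met person's age n is
# uniform on (0, M], and Gott's rule asserts Prob(N < 2n) = 50% with N = M.
import random

random.seed(1)
M = 100.0          # assumed common lifespan (arbitrary units)
trials = 1_000_000
hits = 0
for _ in range(trials):
    n = random.uniform(0, M)   # age at the random encounter
    if M < 2 * n:              # Gott's 50%-confidence claim comes out true
        hits += 1
print(f"Fraction of encounters where N < 2n: {hits / trials:.3f}")  # ~0.500
```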

Self-referencing doomsday argument rebuttal


Some philosophers have suggested that only people who have contemplated the doomsday argument (DA) belong in the reference class "human". If that is the appropriate reference class, Carter defied his own prediction when he first described the argument (to the Royal Society). An attendee could have argued thus:

Presently, only one person in the world understands the Doomsday argument, so by its own logic there is a 95% chance that it is a minor problem which will only ever interest twenty people, and I should ignore it.

Jeff Dewynne and Professor Peter Landsberg suggested that this line of reasoning will create a paradox for the doomsday argument:[9]

If a member of the Royal Society did pass such a comment, it would indicate that they understood the DA sufficiently well that in fact 2 people could be considered to understand it, and thus there would be a 5% chance that 40 or more people would actually be interested. Also, of course, ignoring something because you only expect a small number of people to be interested in it is extremely short sighted—if this approach were to be taken, nothing new would ever be explored, if we assume no a priori knowledge of the nature of interest and attentional mechanisms.

Conflation of future duration with total duration


Various authors have argued that the doomsday argument rests on an incorrect conflation of future duration with total duration. This occurs in the specification of the two time periods as "doom soon" and "doom deferred" which means that both periods are selected to occur after the observed value of the birth order. A rebuttal in Pisaturo (2009)[15] argues that the doomsday argument relies on the equivalent of this equation:

P(HFS|Dp,X) / P(HFL|Dp,X) = [P(HFS|X) / P(HFL|X)] × [P(Dp|HTS,X) / P(Dp|HTL,X)],
where:
X = the prior information;
Dp = the data that past duration is tp;
HFS = the hypothesis that the future duration of the phenomenon will be short;
HFL = the hypothesis that the future duration of the phenomenon will be long;
HTS = the hypothesis that the total duration of the phenomenon will be short—i.e., that tt, the phenomenon's total longevity, = tTS;
HTL = the hypothesis that the total duration of the phenomenon will be long—i.e., that tt, the phenomenon's total longevity, = tTL, with tTL > tTS.

Pisaturo then observes:

Clearly, this is an invalid application of Bayes' theorem, as it conflates future duration and total duration.

Pisaturo takes numerical examples based on two possible corrections to this equation: considering only future durations and considering only total durations. In both cases, he concludes that the doomsday argument's claim, that there is a "Bayesian shift" in favor of the shorter future duration, is fallacious.

This argument is also echoed in O'Neill (2014).[16] In this work O'Neill argues that a unidirectional "Bayesian Shift" is an impossibility within the standard formulation of probability theory and is contradictory to the rules of probability. As with Pisaturo, he argues that the doomsday argument conflates future duration with total duration by specification of doom times that occur after the observed birth order. According to O'Neill:

The reason for the hostility to the doomsday argument and its assertion of a "Bayesian shift" is that many people who are familiar with probability theory are implicitly aware of the absurdity of the claim that one can have an automatic unidirectional shift in beliefs regardless of the actual outcome that is observed. This is an example of the "reasoning to a foregone conclusion" that arises in certain kinds of failures of an underlying inferential mechanism. An examination of the inference problem used in the argument shows that this suspicion is indeed correct, and the doomsday argument is invalid. (pp. 216-217)

Confusion over the meaning of confidence intervals


Gelman and Robert[17] assert that the doomsday argument confuses frequentist confidence intervals with Bayesian credible intervals. Suppose that every individual knows their number n and uses it to estimate an upper bound on N. Every individual has a different estimate, and these estimates are constructed so that 95% of them contain the true value of N and the other 5% do not. This, say Gelman and Robert, is the defining property of a frequentist lower-tailed 95% confidence interval. But, they say, "this does not mean that there is a 95% chance that any particular interval will contain the true value." That is, while 95% of the confidence intervals will contain the true value of N, this is not the same as N being contained in the confidence interval with 95% probability. The latter is a different property and is the defining characteristic of a Bayesian credible interval. Gelman and Robert conclude:

the Doomsday argument is the ultimate triumph of the idea, beloved among Bayesian educators, that our students and clients do not really understand Neyman–Pearson confidence intervals and inevitably give them the intuitive Bayesian interpretation.
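
A sketch of the coverage property Gelman and Robert describe, under the simplifying assumption of one fixed true N and birth ranks drawn uniformly from 1 to N; roughly 95% of the resulting intervals [n, 20n] contain the true N, which is the frequentist statement rather than the Bayesian one:

```python
# Simulation of the coverage property: each individual draws a birth rank n
# uniformly from 1..N and reports the interval [n, 20n].  Across individuals,
# about 95% of these intervals contain the one true N.  That is the defining
# property of a frequentist confidence interval; it does not by itself give
# the probability that N lies in any particular individual's interval.
import random

random.seed(2)
true_N = 1_000_000_000
trials = 200_000
covered = 0
for _ in range(trials):
    n = random.randint(1, true_N)      # this individual's birth rank
    if n <= true_N <= 20 * n:          # does [n, 20n] contain the true N?
        covered += 1
print(f"Fraction of intervals containing the true N: {covered / trials:.3f}")  # ~0.95
```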

from Grokipedia
The Doomsday argument is a probabilistic claim, originally formulated by the astrophysicist Brandon Carter in 1983, positing that an observer's random position among all humans who will ever exist implies a high likelihood of human extinction in the coming centuries rather than over vast future timescales. The argument employs self-sampling reasoning: assuming a scale-invariant prior over possible total human populations N and conditioning on one's birth rank n, the posterior density becomes P(N|n) ∝ 1/N² for N > n, that is P(N|n) = n/N² under normalization, which assigns substantial probability mass to N values only modestly exceeding n.

Demographers estimate that approximately 117 billion humans have been born since the emergence of modern Homo sapiens around 200,000–300,000 years ago; with the world population at about 8.2 billion in late 2025, people alive today represent roughly 7% of all humans who have ever lived, meaning the dead outnumber the living by about 14:1. These figures, primarily from the Population Reference Bureau (PRB), are based on historical population sizes, birth rates, and life expectancy assumptions; estimates vary between 90–125 billion due to uncertainties in prehistoric data, but consensus places the total around 110–117 billion, with most births occurring in recent centuries due to rapid population growth post-Industrial Revolution. A common myth claims that more people are alive today than have ever died, but this is false—the dead have always vastly outnumbered the living, and projections suggest this ratio will persist even as population stabilizes around 10–11 billion later this century. For most of human history, fewer than a few million people existed at any time, highlighting the recency of rapid population growth central to the argument's reference class assumptions.

This formulation, refined and popularized by philosopher John Leslie through Bayesian analysis and thought experiments like the "shooting room," suggests a greater than 95% chance that N < 20n, forecasting doomsday (defined as the cessation of human reproduction) by around the year 2200 if population stabilizes near modern levels. Subsequent variants, including J. Richard Gott's "delta-t" argument and extensions incorporating population growth rates, reinforce the core inference by treating birth order as a uniform draw from 0 to 1 across N, predicting median survival times on the order of current elapsed human history. The argument's defining characteristic lies in its reliance on first-principles observer selection effects without invoking external risks like astrophysical threats or technological failures, instead deriving doomy expectations directly from anthropic data. Despite its logical parsimony, it remains highly controversial, with detractors arguing flaws in the self-sampling assumption (e.g., neglecting reference class definitions or multiverse expansions) or prior distributions (e.g., favoring power-law tails over uniform bounds), though no unified refutation has emerged among philosophers and cosmologists. Proponents counter that such objections often presuppose optimistic futures incompatible with the observed n, while empirical tests—such as humanity's persistence to 2025 without evident contradiction—do not falsify the prediction, as it accommodates ongoing but finite growth.

Historical Origins

Brandon Carter's Initial Formulation (1974)

Brandon Carter, a theoretical astrophysicist at the University of Cambridge, initially formulated the doomsday argument as part of his application of anthropic reasoning to cosmological and biological questions during a presentation at the Kraków symposium on "Confrontation of Cosmological Theories with Observational Data" in February 1973, with the proceedings published in 1974. In his paper "Large Number Coincidences and the Anthropic Principle in Cosmology," Carter integrated the argument with the weak anthropic principle, which posits that the universe must permit the existence of observers like ourselves, and the Copernican principle, emphasizing that humans should not presume an atypical position in the sequence of all observers. This framing highlighted observer selection effects, where the fact of our existence as latecomers—after approximately 10^10 humans have already been born—constrains probabilistic inferences about the total human population N. The core probabilistic reasoning assumes that an individual's birth rank n (our approximate position in the human lineage, around the 10^10th) is randomly sampled from the uniform distribution over 1 to N, conditional on N ≥ n. Carter employed a prior distribution P(N) ∝ 1/N for N ≥ n, reflecting ignorance about scale in a manner consistent with scale-invariant reasoning in cosmology. The likelihood P(n | N) = 1/N for N ≥ n then yields a posterior P(N | n) ∝ 1/N², normalized such that the cumulative probability P(N ≤ Z | n) = (Z − n)/Z for Z > n. This implies a high probability that N is not vastly larger than n; specifically, there is approximately a 95% probability that N ≤ 20n, since P(N ≤ 20n | n) = 19/20. Under assumptions of modest future population growth or stabilization, this translates to human extinction occurring within a timeframe on the order of 10^9 years from the present, as the remaining human births would deplete without exceeding the bounded total. Carter's approach thus served as an early illustration of how self-selection among observers biases expectations away from scenarios with extraordinarily long human histories, privileging empirical positioning over optimistic priors about indefinite survival. This initial presentation laid the groundwork for later elaborations but remained tied to first-principles probabilistic updating under anthropic constraints, without invoking multiverse or infinite measures.

John Leslie's Elaboration and Popularization (1980s–1990s)

Philosopher John Leslie substantially expanded Brandon Carter's initial doomsday argument formulation during the late 1980s and early 1990s, transforming it from an esoteric probabilistic observation into a prominent tool for assessing risks. In works such as his 1989 contributions and subsequent papers, Leslie emphasized the argument's reliance on self-locating uncertainty about one's position in the total sequence of human observers, positing that the low observed birth rank—approximately the 60-70 billionth human—indicates a modest total human population rather than an astronomically large one implied by indefinite survival. This elaboration countered optimistic projections by conditioning probabilities on actual existence rather than hypothetical vast futures, aligning with a view that prioritizes observable data over unsubstantiated assumptions of perpetual growth. Leslie popularized the argument through accessible thought experiments, notably the urn analogy, wherein an observer unaware of whether they are drawing from a small urn (10 tickets) or a large one (millions) who selects an early-numbered ticket rationally infers the smaller total, mirroring humanity's early temporal position as evidence against scenarios of trillions more future humans. He detailed this in his 1993 paper "Doom and Probabilities," defending it against critiques like the possibility of selection biases by invoking Bayesian updating based on empirical observer ranks, and argued that dismissing the inference requires rejecting standard probabilistic reasoning. These analogies rendered the argument intuitive, shifting focus from abstract cosmology to practical implications for species longevity. Culminating in his 1996 book The End of the World: The Science and Ethics of Human Extinction, Leslie integrated the doomsday reasoning with analyses of anthropogenic threats, estimating a substantial probability—around one in three for human extinction by the third millennium—that doomsday looms soon unless risks are mitigated, without presupposing priors favoring eternal persistence. He critiqued overreliance on technological salvation narratives, advocating instead for precautionary measures grounded in the argument's probabilistic caution, and linked it to ethical duties to future generations by highlighting how ignoring early-observer status underestimates the odds of catastrophe from events like nuclear conflict or environmental collapse. This work elevated the doomsday argument in philosophical discourse on anthropic principles and existential hazards, influencing subsequent debates on survival probabilities.

J. Richard Gott's Independent Development (1993)

In 1993, astrophysicist J. Richard Gott III published "Implications of the Copernican Principle for Our Future Prospects" in Nature, independently deriving a probabilistic argument akin to the doomsday argument by assuming humans occupy a typical, non-privileged position within the total span of human existence. Gott framed this under the Copernican principle, positing that observers should expect to find themselves neither unusually early nor late in any phenomenon's history, without relying on specific priors about its total length. He illustrated the approach with temporal examples, such as a hypothetical random visit to the New York World's Fair in 1964 shortly after its opening, where the observed elapsed time since inception (t_p) implied a high likelihood of brief remaining duration, consistent with the fair's actual demolition the following year. Gott's delta-t argument treats the observer's position as uniformly distributed over the total duration T, yielding a posterior distribution for T given elapsed time t that reflects a "vague" prior over logarithmic scales of duration, effectively P(T) ∝ 1/T. The likelihood P(t|T) = 1/T for T > t then produces P(T|t) ∝ 1/T². Integrating this posterior, the probability that the total satisfies N ≤ 20n (where n denotes the elapsed "units," such as births or time) is 95%, or P(N ≤ 20n) = 19/20. For a 95% confidence interval excluding the outermost 2.5% tails of the uniform fraction f = t/T, the remaining duration falls between t/39 and 39t. Applied to humanity, Gott adapted this to cumulative human births as the measure of elapsed duration, estimating around 50–60 billion humans born by then and treating the current observer's birth rank as randomly sampled from the total N. This yields a 95% probability that fewer than about 19–39 times that number remain unborn, implying extinction within roughly 8,000 years assuming sustained birth rates of approximately 100 million per year. Gott emphasized this as a first-principles Bayesian update, avoiding strong assumptions about the future by relying on the self-sampling uniformity and the vague logarithmic prior to derive conservative bounds on future prospects.

Core Logical Framework

Basic Probabilistic Reasoning

The basic probabilistic reasoning of the Doomsday argument treats an individual's birth rank n among all humans who will ever exist as a random sample uniformly drawn from the integers 1 to N, where N denotes the unknown total number of humans. Observing n—empirically estimated at approximately 117 billion based on historical birth records through late 2025—serves as data that updates beliefs about N toward smaller values, as large N would make such an "early" rank unlikely under the sampling assumption. In Bayesian terms, the likelihood P(n|N) equals 1/N for N ≥ n (and 0 otherwise), reflecting the sampling. A scale-invariant prior P(N) ∝ 1/N for N ≥ n—chosen for its lack of arbitrary scale preference in the absence of other information—yields a posterior P(N|n) ∝ 1/N². In the continuous approximation, normalization gives P(N|n) = n/N² for N ≥ n, and the cumulative distribution follows as P(N ≤ Z | n) = 1 − n/Z for Z ≥ n. This posterior implies high probability for N modestly exceeding n: for instance, P(N ≤ 20n | n) = 19/20 = 0.95. With n ≈ 1.17 × 10^11, the total N < 2.34 × 10^12 at 95% posterior probability, constraining future births to roughly 2.2 trillion at most despite past cumulative totals. The logic incorporates an observer selection effect: birth ranks beyond N are impossible, so conditioning on existence biases against scenarios with small N and late ranks, but the observed relatively early n (as a fraction of potential vast N) countervails by favoring bounded totals. Empirical demographic data, including decelerating global birth rates (from 140 million annually in 2015–2020 toward projected peaks near 141 million by 2040–2045 before decline), render assumptions of indefinite or infinite N empirically unmotivated and inconsistent with observed trends toward population stabilization. Counterarguments positing expansions, such as interstellar colonization yielding unbounded humans, lack causal mechanisms grounded in current technological or biological constraints and fail to override the update from the sampled n.

Key Assumptions: Random Sampling and Observer Selection

The Doomsday argument hinges on the self-sampling assumption (SSA), which holds that a given observer should reason as if they constitute a randomly selected member from the aggregate set of all observers within the pertinent reference class. In its canonical formulation, this entails viewing one's birth order—estimated at approximately the 117 billionth human—as drawn uniformly at random from the interval spanning the first to the Nth human, where N denotes the ultimate total human population. This random sampling premise presupposes an equiprobability across individuals (or, in some variants, observer-moments) without bias toward temporal position, thereby enabling Bayesian updating on the evidence of one's ordinal rank to constrain plausible values of N. Critics of alternative anthropic principles, such as the self-indication assumption, argue that SSA aligns more closely with causal realism by conditioning solely on realized observers rather than potential ones, avoiding inflation of probabilities for unobserved worlds. Complementing this is the observer selection effect, whereby the very act of self-observation filters evidentiary scenarios to those permitting the observer's existence and capacity to deliberate on such matters. In the Doomsday context, this effect underscores that empirical data—such as the observed human population to date—conditions probabilistic inferences, privileging hypotheses under which an early-to-mid sequence observer like oneself emerges with high likelihood, as opposed to those mandating vast posteriors where such positioning would be anomalously improbable. This selection mechanism counters dismissals invoking unverified multiplicities (e.g., simulated realities or infinite multiverses), which might dilute the sampling uniformity by positing countless non-actual duplicates; instead, it enforces a parsimonious focus on the concrete causal chain yielding detectable evidence. Empirical grounding derives from elementary Bayesian principles: the likelihood P(n|N) approximates 1/N under uniform sampling, updating a prior distribution over N without presupposing extended futures or exotic physics. Thus, the argument's validity pivots on these assumptions' alignment with probabilistic realism, where observer-centric evidence rigorously narrows existential timelines absent ad hoc expansions of the reference class.

Role of Reference Classes in the Argument

The reference class in the Doomsday argument represents the total population of observers—ordinarily defined as all humans who will ever exist—from which one's own existence is treated as a random draw ordered by birth rank. This class forms the foundation for the probabilistic inference, as the observer's position n within it updates beliefs about the overall size N, yielding a posterior distribution concentrated around values of N comparable to n rather than vastly exceeding it. Brandon Carter's formulation specified the class in terms of human observers capable of self-referential temporal awareness, rooted in demographic patterns of births rather than speculative extensions to non-human or hypothetical entities. John Leslie reinforced this by insisting on a reference class aligned with causal and empirical continuity, such as the sequence of all births, to preserve the argument's predictive power against doomsday; he cautioned against classes either too narrow (e.g., limited to modern eras) or excessively broad (e.g., encompassing undefined posthumans), which could arbitrarily weaken the sampling assumption. Cumulative births, estimated at 117 billion as of late 2025, place contemporary individuals around the 95th percentile under uniform priors, empirically favoring classes at the species scale over more abstract ones that ignore observed demography. A key debate concerns the granularity of the reference class, pitting discrete units like individual human lives (tied to birth events) against continuous observer-moments (each instance of subjective experience). The birth-based class, central to Carter's and Leslie's versions, implies a finite total N on the order of 10-20 times current cumulative births to render one's rank typical, consistent with historical growth data showing exponential but decelerating rates since the Industrial Revolution. Observer-moment classes, by contrast, could permit longer futures if future observers accrue more moments per life (e.g., through extended lifespans or enhanced cognition), yet this hinges on unverified assumptions about experiential rates, which empirical estimates peg at roughly constant for humans—about 3 billion seconds of consciousness per lifetime—without causal evidence for drastic future increases that would dilute the doomsday signal.

Formal Variants and Extensions

Self-Sampling Assumption (SSA) Approach

The Self-Sampling Assumption (SSA) posits that a given observer should reason as if they are a randomly selected member from the actual set of all observers in the relevant reference class, such as all humans who will ever exist. This approach treats the observer's position within the sequence of births as uniformly distributed across the total number, conditional on the total N being fixed. Applied to the Doomsday Argument, SSA implies that discovering one's birth rank n—estimated at approximately 117 billion for a typical human born around 2025—provides evidence favoring smaller values of N, as early ranks are more probable under small-N hypotheses. Formally, SSA yields a likelihood function where the probability of observing birth rank n given total humans N is P(n | N) = 1/N for n ≤ N and 0 otherwise, reflecting uniform random sampling from the realized population. To compute the posterior P(N | n), a prior on N is required; a scale-invariant prior P(N) ∝ 1/N (Jeffreys prior for positive scale parameters) is often employed to reflect ignorance about the order of magnitude of N. The posterior then becomes P(N | n) = n/N² for N ≥ n, derived via Bayes' theorem:

P(N \mid n) = \frac{P(n \mid N)\, P(N)}{P(n)} \propto \frac{1}{N} \cdot \frac{1}{N} = \frac{1}{N^2},

normalized over N ≥ n, where the integral \int_n^\infty \frac{n}{N^2}\, dN = 1.

The cumulative distribution under this posterior is P(N ≤ kn | n) = 1 − 1/k for k ≥ 1, obtained by integrating:

P(N \leq x \mid n) = \int_n^x \frac{n}{N^2}\, dN = n \left[ -\frac{1}{N} \right]_n^x = 1 - \frac{n}{x}.

Setting x = kn yields the result. Thus, the posterior median is N ≈ 2n (where P(N ≤ 2n | n) = 0.5), and there is a 95% probability that N < 20n. For n ≈ 1.17 × 10^11, this predicts a median total human population of roughly 2.34 × 10^11, implying a substantial chance of extinction within centuries, assuming birth rates of order 10^8 per year.
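
A numerical restatement of the figures above, assuming n ≈ 1.17 × 10^11 and the posterior P(N|n) = n/N² quoted in this section:

```python
# Numerical check of the SSA posterior quoted above: P(N|n) = n/N**2 for N >= n,
# with median 2n and a 95% upper bound of 20n.
n = 1.17e11   # birth-rank estimate used in the text

def posterior_cdf(x):
    """P(N <= x | n) under the posterior density n/N**2 on [n, inf)."""
    return 1.0 - n / x if x >= n else 0.0

median = 2 * n                 # posterior_cdf(2n)  = 0.5
bound_95 = 20 * n              # posterior_cdf(20n) = 0.95
print(f"P(N <= 2n)  = {posterior_cdf(median):.2f}   (median N  ~ {median:.3g})")
print(f"P(N <= 20n) = {posterior_cdf(bound_95):.2f}  (95% bound ~ {bound_95:.3g})")
# -> median ~2.34e11 total humans, 95% bound ~2.34e12
```
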
In variants incorporating successive sampling—such as the Strong SSA (SSSA), which applies sampling to observer-moments rather than static observers—SSA reinforces doomy posteriors by modeling births as a sequential process, where early positions in a growing population still favor total durations not vastly exceeding current elapsed time. This contrasts with priors expecting indefinitely long civilization survival, as the observed early rank updates strongly against such expansive scenarios under random sampling from the realized total.

Self-Indication Assumption (SIA) Approach

The self-indication assumption (SIA) in anthropic reasoning posits that, conditional on one's existence as an observer, hypotheses predicting a larger number of observers should receive higher prior probability, as worlds or scenarios with fewer observers (including those with none) contribute negligibly to the pool from which the observer is sampled. In the doomsday argument, this translates to weighting possible total human population sizes N by N itself in the prior distribution, since larger N implies more potential observers like oneself. Unlike the self-sampling assumption, which treats the observer as randomly drawn from the actual realized population and yields a sharp update toward smaller N upon observing an early birth rank n, SIA dampens this update by a priori disfavoring small-N hypotheses due to their low observer count. Under SIA, the posterior P(N | n) can be derived using Bayes' theorem, where the likelihood P(n | N) = 1/N for N ≥ n (assuming a uniform random birth rank within the population) combines with an SIA-adjusted prior P(N) ∝ N · π(N), with π(N) a base prior (e.g., flat or logarithmic). For a flat base prior π(N) ∝ 1, the N weighting cancels the 1/N likelihood, yielding a posterior that is roughly flat in N up to any finite cutoff (in contrast with the SSA posterior ∝ n/N²), concentrating probability mass toward larger N relative to SSA equivalents and implying a median total population substantially exceeding n—often by orders of magnitude—while still imposing finite bounds under proper priors. Ken Olum applied SIA to argue that the doomsday argument fails, as an early rank n (e.g., approximately the 117 billionth human as of 2025) becomes expected in vast populations, where the abundance of observer-slots outweighs the uniformity within any single N. This approach inherently rebuts non-existence objections to small-N worlds, as SIA assigns them zero measure absent observers, privileging observer-rich scenarios without needing additional causal constraints.
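
A toy comparison of the two assumptions under a flat base prior; the two candidate totals are arbitrary illustrative values, and the weighting-by-N step is the SIA adjustment described above:

```python
# Sketch comparing SSA and SIA posteriors over two hypotheses for the total
# number of humans, using a flat base prior.  Under SSA the likelihood of a
# given birth rank is 1/N; SIA additionally weights each hypothesis by N
# (more observers), which cancels the 1/N and removes the doomsday shift.
def normalize(d):
    s = sum(d.values())
    return {k: v / s for k, v in d.items()}

N_small, N_large = 200e9, 20_000e9      # two candidate totals (both >= n)
base_prior = {N_small: 0.5, N_large: 0.5}

ssa = normalize({N: base_prior[N] * (1 / N) for N in base_prior})
sia = normalize({N: base_prior[N] * N * (1 / N) for N in base_prior})

print("SSA posterior:", {f"{N:.0e}": round(p, 3) for N, p in ssa.items()})
# -> small N strongly favored (~0.990 vs 0.010): the doomsday shift
print("SIA posterior:", {f"{N:.0e}": round(p, 3) for N, p in sia.items()})
# -> back to 0.5 / 0.5: the shift is cancelled
```
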
Critics, including Nick Bostrom and Milan Ćirković, counter that SIA's observer-weighting risks inflating probabilities for uncaused or maximally observer-proliferating hypotheticals, potentially overcounting non-actualized possibilities in the reference class without empirical grounding. In doomsday contexts, this can lead to underconstrained optimism if base priors permit arbitrarily large NN, though SIA retains predictive constraint by rejecting infinite or observer-less null hypotheses more decisively than SSA. Variants attempting reconciliation, such as those incorporating explicit null-world measures to temper extreme large-NN favoritism, aim to balance SIA's anti-doom bias while preserving its rejection of empty scenarios, yielding intermediate survival probabilities that cap runaway expansion without reverting to SSA's acuity.

Other Mathematical Formulations and Modifications

J. Richard Gott III developed an independent formulation in 1993 using a scale-invariant prior for the total duration T of humanity, assuming P(T) ∝ 1/T to assign equal probability across logarithmic intervals of T. This prior avoids favoring specific scales and leads to a posterior distribution where, observing elapsed time t, there is a 50% probability that the remaining duration exceeds t, and a 95% probability that the remaining duration lies between t/39 and 39t. Applied to the human population around 1993 (roughly 5.5 billion), this implies with 95% confidence a total between approximately 140 million and 215 billion individuals. Quantum modifications incorporate the many-worlds interpretation of quantum mechanics, where observer counts branch across parallel universes, potentially inflating future observer measures. However, formulations argue that self-locating uncertainty—regarding one's position in the branching structure—preserves the core doomsday update, as low-measure branches with few observers remain improbable under observer selection effects. For instance, in a 2012 analysis, the argument holds because civilizations in sparse-observer worlds (early in history) are atypical given the total measure of observers across all branches. Other extensions use power-law priors P(N) = k/N^α with 0 < α < 1 to model unbounded growth while ensuring proper normalization, yielding broader confidence intervals for the total population N compared to uniform or logarithmic cases. Recent adjustments for accelerating growth incorporate time-varying birth rates b(t), modifying the likelihood P(n | N) from the uniform 1/N to the ratio of cumulative births up to n over cumulative births up to N, which tempers doomsday predictions if growth rates peak and decline. For AI observers, formulations extend the reference class to include digital minds, weighting by observer-moments to account for potentially explosive posthuman expansion, though this dilutes human-centric estimates without altering the probabilistic framework.
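
To illustrate why the birth-rate trajectory matters when converting a bound on total births into a time horizon, the following sketch compares a constant future birth rate with a declining one; the rates, the remaining-birth budget, and the 20,000-year fade-out are arbitrary illustrative assumptions, not figures from the cited formulations:

```python
# Converting a bound on remaining births into a time horizon depends on the
# assumed future birth-rate trajectory b(t); slower future rates stretch the
# implied timeline.  This sketch compares a constant rate with a linearly
# declining one, for an illustrative remaining-birth budget.
def years_until_exhausted(remaining, rate_fn, dt=1.0, max_years=1_000_000):
    t, total = 0.0, 0.0
    while total < remaining and t < max_years:
        total += rate_fn(t) * dt     # births accumulated during year t
        t += dt
    return t

remaining = 1.0e12                              # assumed remaining-birth budget
constant = lambda t: 130e6                      # ~130 million births per year
declining = lambda t: max(130e6 * (1 - t / 20_000), 0)  # fades to zero over 20,000 yr

print(f"constant rate:  exhausted after ~{years_until_exhausted(remaining, constant):,.0f} years")
print(f"declining rate: exhausted after ~{years_until_exhausted(remaining, declining):,.0f} years")
```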

Predicted Outcomes and Implications

Estimates of Total Human Population and Timeline to Extinction

The Doomsday argument estimates the total number of humans ever born, denoted N, by treating the observer's birth rank n—approximately 117 billion as of late 2025—as a random sample from 1 to N. Under the self-sampling assumption with a scale-invariant prior P(N) ∝ 1/N, the posterior yields P(N > kn | n) = 1/k for k > 1. Thus, there is a 95% probability that N < 20n, or fewer than 2.34 trillion humans in total. The median estimate places N ≈ 2n, around 234 billion individuals. These bounds imply limited future births, on the order of 117 billion to 2.2 trillion additional humans. Translating N into an extinction timeline depends on projected birth trajectories. United Nations projections forecast a global population peak of 10.3 billion in the 2080s, with annual births declining from current levels of about 140 million toward replacement rates of 100–150 million per year under stabilization. The argument constrains long-term persistence, suggesting that sustaining such levels is improbable beyond the bounds on N. If growth stabilizes near current sizes with replacement fertility, the additional births in the median case would be exhausted in roughly 800–1,200 years at rates of 100–140 million annually, pointing to a 50% chance of extinction by approximately 2800–3200 CE. For the 95% upper bound, additional births of up to about 2.2 trillion would extend the timeline to several millennia under stabilizing conditions, but the posterior assigns substantial weight to earlier termination. Even with optimistic priors, such as P(N) ∝ 1/N^α for 0 < α < 1, which shift estimates toward larger N, calculations indicate over 50% risk of extinction within centuries, as the posterior still favors relatively modest expansions before depletion. This aligns with causal pathways enabling rapid population collapse, updating empirical projections like UN forecasts downward to reflect higher near-term extinction probabilities.
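
The timeline arithmetic quoted above follows from the remaining-birth budgets and the assumed annual birth rates; this sketch reproduces it under those stated assumptions:

```python
# Arithmetic behind the timeline estimates quoted above.
n = 117e9                        # humans born so far

median_total = 2 * n             # posterior median under the 1/N prior
bound95_total = 20 * n           # 95% upper bound

for label, total in (("median", median_total), ("95% bound", bound95_total)):
    remaining = total - n
    for rate in (100e6, 140e6):  # assumed annual births
        years = remaining / rate
        print(f"{label:>9}: remaining {remaining:.3g} births at {rate/1e6:.0f} M/yr -> ~{years:,.0f} years")
# median case: ~840-1,170 years; 95% bound: ~15,900-22,200 years
```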

Integration with Existential Risk Assessments

The Doomsday Argument (DA) intersects with existential risk (x-risk) assessments by imposing anthropic priors that disfavor scenarios of vast future human populations, such as trillions of individuals across millennia, unless offset by compelling evidence of robust long-term survival mechanisms. Demographers estimate that approximately 117 billion humans have been born since the emergence of modern Homo sapiens around 200,000–300,000 years ago. With the world population at about 8.2 billion in late 2025, people alive today represent roughly 7% of all humans who have ever lived, meaning the dead outnumber the living by about 14 to 1. These figures, primarily from the Population Reference Bureau (PRB), are based on assumptions about historical population sizes, birth rates, and life expectancy. Early human history featured very slow growth and high mortality (life expectancy often around 10–30 years), so most births occurred in recent centuries amid the population explosion that followed the Industrial Revolution. A common myth claims that more people are alive today than have ever died (or that 75% of all humans ever born are currently living), but this is false: the dead have always vastly outnumbered the living, and projections suggest this ratio will persist even as population stabilizes around 10–11 billion later this century. Estimates vary (roughly 90–125 billion total ever born) because of uncertainties in prehistoric data, but consensus places the figure around 110–117 billion. The calculation highlights the recency of rapid population growth: for most of human history, fewer than a few million people existed at any time.

In fields such as existential risk studies and futures research, the DA suggests that humanity's current birth rank, approximately the 117 billionth human, implies a high cumulative probability of extinction within centuries, challenging models that assume annual x-risk rates below 10^{-5} in order to enable interstellar expansion or indefinite persistence. This prior aligns with non-negligible near-term catastrophe probabilities, such as Toby Ord's estimate of a 1-in-6 chance of existential catastrophe by 2100 (encompassing risks from unaligned artificial intelligence at 1-in-10, engineered pandemics, and nuclear war), since it renders optimistic extrapolations requiring near-zero failure rates empirically implausible without causal substantiation. The DA critiques pervasive assumptions in some academic and policy discourses that posit inevitable technological progress toward a secure, expansive future, demanding empirical validation for priors favoring billions more generations over the statistical expectation of near-term truncation. For instance, narratives emphasizing deterministic technological advancement often overlook the observer-selection effects highlighted by the DA, which prioritize evidence-based adjustments to tail risks over unsubstantiated confidence in mitigation. This perspective counters overreliance on historical trends of risk decline that fail to account for novel threats such as synthetic biology or advanced AI, where institutional biases toward progressivism may undervalue doomsday priors. In policy terms, the DA motivates targeted investments in survival-enhancing strategies, such as multi-planetary redundancy through space settlement, to potentially expand total population scales and evade the doomsday implication, without inducing fatalistic inaction.
Proponents argue for allocating resources—estimated at 1-2% of global GDP toward x-risk research and infrastructure—to elevate baseline survival odds from DA-inferred lows (e.g., a few percent for multi-millennial persistence) through diversified safeguards like asteroid deflection and AI governance. This approach emphasizes causal interventions grounded in verifiable risk factors over speculative utopianism, fostering resilience without exaggerated alarm.
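The headline demographic ratios quoted above follow from simple arithmetic; the inputs below are the estimates cited in this section, not independent data.

ever_born = 117e9     # cumulative human births (Population Reference Bureau estimate)
alive_now = 8.2e9     # world population, late 2025

share_alive = alive_now / ever_born
dead_per_living = (ever_born - alive_now) / alive_now
print(f"Share of all humans currently alive: {share_alive:.1%}")          # ~7%
print(f"Dead outnumber the living by roughly {dead_per_living:.0f} to 1")  # ~13, close to the 14:1 figure cited above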

Potential Influences on Policy and Future Planning

The Doomsday Argument contributes to a probabilistic framework that discourages complacency in long-term planning by highlighting the unlikelihood of humanity occupying only a minuscule fraction of its total potential population, thereby urging prioritization of existential risk mitigation over unchecked technological optimism. In effective altruism and longtermist circles, DA-related reasoning has amplified focus on tail-end catastrophe scenarios, influencing resource allocation toward interventions aimed at preserving future human generations rather than near-term welfare alone. For example, Bayesian integrations of the DA with longtermist ethics argue that our apparently early position in human history elevates the expected value of averting high-impact discontinuities, such as uncontrolled artificial intelligence deployment. This cautionary stance draws empirical reinforcement from paleontological data: the fossil record shows that more than 99% of all species that have ever existed on Earth have gone extinct, reflecting a pattern of finite persistence amid environmental and biological pressures rather than perpetual expansion. Such historical precedents challenge priors assuming indefinite survival, prompting policy considerations that embed conservative survival estimates in strategic forecasting, as explored in anthropic risk assessments at institutions such as the former Future of Humanity Institute. In practice, the DA's implications extend to advocating restrained advancement in high-stakes domains such as artificial intelligence and biotechnology, where overconfidence in scalability could precipitate irreversible setbacks; it reframes planning not as predicting precise dates but as requiring affirmative evidence for multi-millennial continuity before aggressive expansionist policies are justified. This evidentiary shift has informed broader discourses, emphasizing verifiable safeguards over speculative utopian projections.

Major Criticisms

Flaws in Reference Class Selection and Sampling Assumptions

Critics of the doomsday argument highlight the arbitrariness of selecting the reference class, asserting that no principled criterion exists for defining it as all humans or as "creatures like us," even though such choices drastically shift the probabilistic outcome. For instance, restricting the class to Homo sapiens alone yields a higher near-term doom probability than broadening it to encompass potential posthumans or artificial observers, rendering the argument sensitive to unmotivated assumptions about observer similarity. This ambiguity extends to historical boundaries: including prior hominid lineages such as the Neanderthals, estimated to have numbered fewer than 100,000 individuals over their existence, would relegate modern humans to later ranks within a more extended sequence, diluting the inference of an imminent end. Similarly, projecting forward to vast numbers of future non-biological intelligences undermines the human-centric focus, as the class's scope becomes conjectural and expandable without limit, eroding the argument's predictive force.

The assumption of uniform random sampling from the reference class falters further because of biases inherent in non-stationary population growth. Human births have surged roughly exponentially, with about 108 billion total individuals estimated to have been born by 2011, the majority concentrated in recent eras following the population explosion from pre-industrial levels of about 1 billion people in 1800 to over 8 billion today. This growth pattern overrepresents later birth orders among surviving or contemporary observers, akin to the selection bias in the German tank problem, where ramping production causes early-captured units to underestimate total output under naive uniform assumptions. Consequently, the doomsday argument's inference of a small total population confounds chronological position with causal production rates, treating observers as if drawn indifferently from a fixed pool while ignoring empirical asymmetries in birth distributions that would naturally place current individuals toward the latter portion of the sequence to date even absent any doomsday event.
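The point that rapid growth alone places contemporary observers late in the sequence to date can be illustrated with a toy model; the constant exponential growth rate below is an assumption chosen for illustration, not a fit to historical birth data.

import math

def recent_share(growth_rate, window_years):
    """Share of all births to date that occurred within the last window_years,
    assuming annual births grow exponentially (cumulative births ~ exp(r * t))."""
    return 1.0 - math.exp(-growth_rate * window_years)

r = math.log(2) / 300.0           # assumed doubling of annual births every 300 years
for window in (300, 1000, 3000):
    print(window, f"{recent_share(r, window):.1%}")
# 300 -> 50.0%, 1000 -> ~90.1%, 3000 -> ~99.9%: under sustained growth, most people
# born so far were born recently, with no assumption about when births will end.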

Conflicts with A Posteriori Empirical Evidence

The Doomsday Argument's inference of near-term extinction clashes with empirical records of species and societal persistence over extended timescales. Homo sapiens originated approximately 300,000 years ago and has survived recurrent existential pressures, including climatic shifts, megafaunal overhunting, and bottlenecks such as the Toba eruption circa 74,000 years ago, which may have reduced breeding populations to 3,000–10,000 individuals before demographic rebound. Similarly, crocodilians have endured for over 200 million years across multiple major mass extinctions, the kind of events that have eliminated the large majority of species over geological history, demonstrating that biological lineages can maintain viability amid volatility without implying imminent collapse. These patterns indicate that longevity correlates with adaptive resilience rather than probabilistic doom within generations, privileging observed survival trajectories over the argument's compressed timeline.

Empirical critiques directly test the argument's assumptions against historical data. Elliott Sober's analysis of Gott's delta-t variant, which posits a 95% probability of continued persistence for between 5,100 and 7.8 million years given roughly 200,000 years elapsed, finds it disconfirmed when the underlying sampling process is evaluated: historical evidence contradicts the uniform prior over temporal positions, as real-world duration distributions favor extended lifespans inconsistent with doomy predictions. Nonparametric predictive inference further supports this, treating past endurance, such as humanity's avoidance of extinction despite events like the Black Death (which reduced Europe's population by 30–60% in the 14th century), as Bayesian evidence for comparably long future intervals, akin to Laplace's rule of succession, which updates priors toward prolonged survival as non-extinction observations accumulate.

Technological and demographic trends amplify this discord, as innovations have extended human viability counter to the argument's priors. Average global life expectancy rose from about 31 years in 1900 to 73 years by 2023, driven by vaccines, antibiotics, and sanitation, while population expanded from 1.6 billion to 8 billion over the same period, averting Malthusian traps through yield-enhancing agriculture and energy abundance. Robin Hanson's critique models exponential growth under uniform priors on total scale, concluding that current levels (around 10^{10} individuals) position humanity as atypically early in expansive scenarios, yielding median future multipliers of 10^5 rather than extinction within centuries; this aligns with the observed mitigation of risks such as nuclear arsenals, which peaked at about 70,000 warheads in 1986 and now number under 13,000 following arms-control treaties. Absent corroborating indicators, such as uncontrolled existential threats, these data prioritize causal mechanisms of progress over abstract sampling arguments that yield improbably short horizons.
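The Laplace-style reasoning mentioned above can be sketched numerically. Treating each century of Homo sapiens' existence as an observed non-extinction trial is an assumption made purely for illustration, not a feature of the cited analyses.

def laplace_survival_next(periods_survived):
    """Rule of succession: after k successes and no failures, the probability
    assigned to success on the next trial is (k + 1) / (k + 2)."""
    return (periods_survived + 1) / (periods_survived + 2)

centuries = 300_000 // 100               # ~3,000 centuries without extinction
p = laplace_survival_next(centuries)
print(f"P(no extinction in the next century) ~ {p:.5f}")      # ~0.99967
print(f"Implied risk over the next century   ~ {1 - p:.5f}")  # ~0.00033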

Issues with Prior Probability Distributions

Critics contend that the Doomsday Argument (DA) relies on an arbitrary or ill-defined prior distribution over the total human population N, often assuming an improper distribution that renders Bayesian updates problematic. In the standard formulation under the self-sampling assumption, the likelihood P(n \mid N) = 1/N for an observed birth rank n \leq N, combined with an improper prior P(N) \propto 1 for N \geq n, yields a posterior P(N \mid n) \propto 1/N, but the marginal P(n) = \int_n^\infty (1/N)\,dN diverges logarithmically, akin to the tail of the harmonic series, making the normalization constant infinite and the update undefined. This impropriety implies that any finite n carries no evidential weight, as the prior assigns effectively zero probability to observing any specific finite rank, undermining the argument's claim that n informs expectations about N. Even when proper priors are adopted to avoid divergence, the DA's conclusions prove sensitive to the choice of prior, and the observed n is often uninformative if the prior already favors substantially larger N. For instance, priors informed by exponential growth models or cosmological expectations of long-lived civilizations, such as those anticipating vast future expansion, assign high probability mass to enormous N, so the posterior shift from conditioning on the current n \approx 10^{11} remains minimal, predicting growth factors of 10^5 or more rather than imminent extinction. The economist Robin Hanson argues that such "all else not equal" priors, derived from physical and economic reasoning, dominate the update, since the DA implicitly assumes a naive uniformity that ignores substantive knowledge about likely future scales. Scale-invariant priors such as P(N) \propto 1/N (a Jeffreys prior for positive scale parameters) address some of the impropriety but introduce infinite expectations in the posterior for the remaining population or the total N: P(N \mid n) \propto 1/N^2 for N \geq n leads to divergent moments, such as \mathbb{E}[N \mid n] = \infty. This pathology highlights the prior's vagueness: while intended to reflect ignorance, it leaves the posterior expectation for N unbounded, blunting any sharp doomsday prediction. Variants incorporating self-indication-like adjustments, which expand the reference class to include possible but non-existent observers, further overweight unrealistically large worlds in order to favor existence, but this exacerbates prior dependence without resolving the core arbitrariness.
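A numeric sketch of the scale-invariant case just described (prior proportional to 1/N, likelihood 1/N, hence posterior proportional to 1/N^2 for N \geq n), treating N as continuous; the cutoff values are arbitrary and serve only to display the divergence of the posterior mean.

import math

def posterior_tail(k):
    """P(N > k*n | n) under the normalized posterior density n / N^2 on [n, inf)."""
    return 1.0 / k

def truncated_mean_factor(cutoff):
    """E[N | n, N <= cutoff*n] / n; grows like log(cutoff), so E[N | n] diverges."""
    mass = 1.0 - 1.0 / cutoff            # posterior probability that N <= cutoff * n
    return math.log(cutoff) / mass

print(posterior_tail(2))                 # 0.5: the posterior median of N is 2n
for cutoff in (10, 1_000, 1_000_000):
    print(cutoff, round(truncated_mean_factor(cutoff), 2))   # 2.56, 6.91, 13.82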

Responses and Defenses

Rebuttals to Sampling and Reference Class Objections

Proponents of the Doomsday Argument (DA) maintain that the reference class of all human births, specifically Homo sapiens since approximately 300,000 years ago, constitutes the appropriate set for self-sampling, grounded in causal continuity of evolutionary lineage and shared cognitive capacities for reasoning. Alternatives, such as expanding the class to include Neanderthals or hypothetical future post-humans, demand substantive causal justification for equivalence in observer selection effects, which critics rarely provide beyond speculative assertion; without evidence of comparable birth-order dynamics or existential trajectories, such inclusions violate parsimony in defining the class relevant to current human observers. John Leslie contends that diluting the class with disparate prehistoric populations undermines the argument's focus on our species' total extent, as those groups lacked the demographic ramp-up and technological context that define modern humanity's position.

Objections invoking population dynamics, particularly those positing exponential growth as the explanation for our apparent earliness in the sequence, presuppose a vast future to justify the ramp-up, rendering the critique circular and incompatible with the DA's prior-neutral stance on total numbers. Defenders reply that such objections embed optimistic priors about longevity, which the self-sampling assumption (SSA) explicitly tests by treating one's birth rank as randomly drawn from the full class, thereby breaking the presupposition; empirical growth rates alone do not suffice without assuming the very large N that the DA probabilistically disfavors. This rebuttal appeals to first-principles neutrality, since non-uniform sampling claims require independent verification of distribution shapes, which historical data (showing humanity's cumulative birth count at roughly 117 billion as of 2023) do not conclusively supply for unbounded futures.

The self-referential nature of the SSA extends to critics' positions: those dismissing the DA as biased must also regard themselves as typical early observers within the same class, subjecting their rebuttals to identical probabilistic constraints; rejecting the argument thus demands consistently applying alternative sampling models to one's own epistemic location, a step often omitted in critiques. Bostrom notes that the metaphorical random sampling invoked by SSA avoids the literal urn-drawing pitfalls raised by Eckhardt, focusing instead on updating beliefs conditional on one's observed rank without requiring true randomness, thereby preserving the argument's validity against objections based on purportedly non-i.i.d. births. This underscores that sampling objections, if valid, would equally undermine confidence in the long-term survival priors held by detractors.

Addressing Empirical and Prior Probability Challenges

Defenders of the doomsday argument (DA) assert its compatibility with empirical data from the fossil record, where median mammalian extinction rates equate to species durations of approximately 555,000 years (derived from about 1.8 extinctions per million species-years), aligning with Homo sapiens' roughly 300,000-year existence and suggesting that projections of vastly prolonged human persistence require evidence of atypical resilience not yet observed. Hominin lineages exhibit median temporal ranges of 620,000 to 970,000 years, further underscoring that the DA's implication of a limited remaining tenure fits historical patterns without presupposing human exceptionalism amid recurrent extinction events. A posteriori dismissals, such as pointing to humanity's survival to date, fail to negate the argument, as the DA functions as a Bayesian update on the prior over total population size that anticipates modest rather than negligible extinction risks, demanding robust counterevidence, such as sustained risk mitigation or exponential expansion, to substantially revise the posterior. Challenges invoking strong prior probabilities for enormous total human numbers, such as indefinite demographic growth or galactic colonization, are rebutted by noting their reliance on unsubstantiated causal mechanisms; John Leslie argues that neutral or vague priors, absent specific justifications for boundless futures, naturally yield the DA's probabilistic shift toward smaller total populations, avoiding the circularity of embedding optimism in the very assumptions the argument tests. Imposing priors that favor near-infinite N effectively dismisses the observational selection effect central to the DA, privileging unverified optimism over the indifference principle that treats one's birth rank as randomly drawn from the full sequence. Even under the self-indication assumption (SIA), which amplifies the probability of hypotheses postulating more observers and thereby weakens the DA relative to the self-sampling assumption (SSA), reasonable prior distributions, such as those incorporating physical limits on growth or finite resources, still constrain extreme longevities, yielding non-trivial probabilities of existential catastrophe (on the order of 10–50% within centuries) rather than near-certainty of perpetuity. Proponents emphasize that SIA's favoritism toward larger worlds does not amount to endorsing priors with unbounded N, as that would beg the question against the DA's core inference; balanced reasoning thus preserves a modest doom probability consistent with empirical assessments from existential risk studies.
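The extinction-rate arithmetic cited above can be reproduced directly; the constant-rate (exponential) survival model below is an illustrative simplification rather than a claim made by the cited defenders.

import math

rate = 1.8e-6                    # extinctions per species-year, the figure cited above
mean_duration = 1.0 / rate       # ~556,000 years, matching the ~555,000-year estimate
print(f"Implied typical species duration: {mean_duration:,.0f} years")

# Under a memoryless (constant-rate) model, past survival does not lower future risk:
print(f"P(extinction within the next 100,000 years) ~ {1 - math.exp(-rate * 1e5):.2f}")  # ~0.16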

Logical and Self-Referential Counter-Counterarguments

Proponents of the Doomsday Argument (DA) contend that self-referential applications, such as applying the argument to the subset of individuals who accept or debate it, do not undermine its validity but instead reinforce its internal consistency. John Leslie notes that the DA adjusts priors for believers based on their rank within the relevant observer class, implying that the small current number of adherents is compatible with a truncated total human population, since a brief civilizational lifespan would limit future engagement with the concept. This meta-application avoids circular refutation because the argument's prediction of imminent extinction causally precludes a large cohort of later proponents, rendering an early positional sample probable under the updated posterior distribution. The DA's logical structure further evades paradoxes by explicitly conditioning probabilities on the observer's existence, thereby sidestepping the issues of non-existence or infinite reference classes that plague unconditioned inferences. Unlike self-sampling assumptions in simulation hypotheses, which can generate inconsistencies by presuming equal likelihood across unverifiable simulated realities without causal anchoring, the DA relies on a finite, empirically grounded reference class of actual humans, ensuring Bayesian coherence without regress. Leslie argues that this framework upholds truth-targeting principles, whereby the updating procedure favors accurate predictions over random chance, as demonstrated in urn analogies where observer selection yields non-absurd probability shifts aligned with the priors. Critics' attempts to invoke "doomsday on doomsday" meta-objections falter on causal realism, as the argument privileges simple, first-principles probabilistic reasoning over elaborate dismissals that require assumptions about unknown futures or expanded reference classes. Its robustness stems from its avoidance of self-undermining loops: knowledge of the DA does not alter the sampling mechanism but is integrated into it, preserving evidential force without requiring infinite observers or indeterministic resolutions to analogous paradoxes such as the shooting-room setup. This simplicity underscores the DA's resilience, prioritizing direct inference from positional data over speculative counters that dilute observer selection effects.

Broader Reception and Philosophical Context

Acceptance Among Philosophers and Scientists

The Doomsday Argument has garnered divided opinions among philosophers and scientists since its formulation by the physicist Brandon Carter in a 1983 lecture, later published, with proponents highlighting its probabilistic challenge to assumptions of indefinite human expansion. The philosopher John Leslie advanced the argument in books and papers from the 1980s onward, contending that one's random position in the sequence of all humans born implies a high likelihood of living near the middle of humanity's total population, thus forecasting extinction within centuries rather than millennia. The astrophysicist J. Richard Gott III independently developed a similar "delta-t" principle in 1993, applying it to predict that structures such as the Berlin Wall, observed after 1% of their existence, would have only about 99 times their past duration remaining, and extending the reasoning to a probable total span for human civilization of roughly 10,000 years. The particle physicist Holger Bech Nielsen also endorsed a variant, linking it to observer selection in cosmic evolution. In rationalist circles, including discussions on platforms such as LessWrong during the 2010s, the argument has attracted minority support as a heuristic urging vigilance against existential risks, with some participants integrating it into x-risk analyses while acknowledging the vulnerability of its assumptions. These endorsements appeal to those skeptical of unchecked optimism about technological progress, who treat the argument's self-sampling analogy as a first-principles check on priors favoring vast future populations.

Skepticism predominates, however. The economist Robin Hanson critiqued the argument in a 1998 analysis as failing to account for non-random observer selection effects and for empirical trends in population growth, rendering its doomsday prediction unpersuasive. A 2007 critique by a physicist rejected its core premise, arguing that human population follows deterministic or stochastic trajectories unfit for random uniform sampling and dismissing the argument as reliant on flawed probabilistic modeling. Other scientists, including more than a dozen noted in meta-analyses, have sought to refute it via alternative principles such as the Self-Indication Assumption, viewing the Doomsday Argument as an overreach of Bayesian updating without sufficient empirical grounding. Despite persistent debate spanning four decades, no consensus exists; formal surveys are scarce, but the argument's endurance reflects its logical intrigue for a subset of thinkers wary of fallacies in long-term forecasting, even as mainstream views prioritize observable data over such a priori bounds.

The doomsday argument (DA) intersects with anthropic reasoning frameworks such as the self-sampling assumption (SSA) and the self-indication assumption (SIA). SSA posits that an observer should reason as if randomly selected from the set of all actual observers, yielding the DA's implication of a limited total population and thus heightened extinction risk. In contrast, SIA favors hypotheses with greater numbers of observers, countering the DA by predicting vast future populations or expansions, but it leads to counterintuitive results such as the "presumptuous philosopher" problem, in which improbable theories positing abundant observers come to dominate credences. Proponents of the DA align it with SSA's emphasis on empirical observer sampling over SIA's bias toward unverified abundance, arguing that SSA better reflects causal realism in finite reference classes without invoking speculative observer proliferation.
This tension mirrors the Sleeping Beauty problem, in which "halfer" positions (analogous to SSA) assign a 1/2 probability to heads upon awakening, akin to the DA's restraint on optimistic priors, while "thirder" views (SIA-like) elevate credence in the hypothesis with multiplied awakenings, paralleling SIA's aversion to doomsday conclusions; a minimal simulation of the setup appears at the end of this subsection. The DA's affinity for SSA-like halfer reasoning underscores its "doomy realism," prioritizing evidence from one's temporal position in the observer sequence over assumptions of hidden multiplicity, and thereby avoiding SIA's vulnerability to overconfidence in low-probability expansions such as rapid population booms. Compared with Nick Bostrom's simulation argument, which infers a high likelihood of ancestral simulations from advanced civilizations' potential to generate vast numbers of simulated observers, the DA is held to have probabilistic and epistemic advantages in relying solely on verifiable human birth ranks rather than conjectural posthuman simulators. While the simulation argument accommodates doomsday via simulated rarity, it demands metaphysical commitments to ancestor-descendant simulations absent direct evidence, whereas the DA derives its urgency from first-person positional data, rendering it metaphysically leaner and less prone to unfalsifiable nesting. In relation to the Fermi paradox and Great Filter hypotheses, the DA complements empirical risk assessments such as the Doomsday Clock (set at 90 seconds to midnight as of January 2024) by providing a Bayesian prior that elevates near-term extinction probabilities without alarmism, interpreting the Filter as likely future-oriented given humanity's apparent passage of prior barriers. Unlike Fermi solutions positing rare origins of life or barriers to interstellar travel, the DA's observer-centric logic reinforces a late Filter through probabilistic self-location, urging caution against the assumptions of interstellar proliferation that SIA might inflate. The DA's grounding in actual observer evidence is also said to give it an edge over multiverse-style counters, which invoke infinite or branching realities to dilute doomsday probabilities but introduce untestable posits without empirical anchoring. Such alternatives, often SIA-aligned, prioritize theoretical plenitude over the causal sparsity implied by our apparent singleton status in observed cosmic history, positioning the DA as the more parsimonious option for finite-universe realism.
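The halfer/thirder split mentioned above can be reproduced with a small Monte Carlo of the Sleeping Beauty protocol (heads: one awakening; tails: two); this sketch only demonstrates the two counting conventions, it does not adjudicate the philosophical dispute.

import random

random.seed(0)
trials = 100_000
heads_tosses = 0
heads_awakenings = 0
total_awakenings = 0

for _ in range(trials):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2        # heads: wake once; tails: wake twice
    heads_tosses += 1 if heads else 0
    heads_awakenings += awakenings if heads else 0
    total_awakenings += awakenings

print(f"P(heads) per toss (halfer, SSA-like):       {heads_tosses / trials:.3f}")                   # ~0.5
print(f"P(heads) per awakening (thirder, SIA-like): {heads_awakenings / total_awakenings:.3f}")     # ~0.333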

Persistent Debates and Unresolved Questions

Debates persist over the choice of prior distributions for the total human population N, with critics arguing that uniform priors over N lead to counterintuitive results unless justified by causal models of population dynamics, while alternatives such as scale-invariant priors (e.g., the Jeffreys prior) yield milder predictions but lack consensus on their applicability to anthropic selection. These tensions highlight the argument's sensitivity to unverified assumptions about future demographics: empirical population data from 1950–2020 show decelerating growth rates but offer no resolution on long-term trajectories. A key unresolved question concerns the doomsday argument's applicability to post-singularity scenarios, where artificial intelligences or uploaded minds could generate trillions of observer-moments, potentially invalidating human-centric reference classes and diluting probabilistic predictions of near-term extinction. Proponents contend that such expansions would still constrain the total N under self-sampling assumptions, but skeptics note that AI-driven futures introduce qualitatively different observers, complicating birth-rank calculations without clear empirical analogs as of 2025.

Extensions incorporating quantum mechanics, particularly the many-worlds interpretation, have remained contentious since the 2010s; the quantum doomsday argument posits that Everettian quantum mechanics implies a high probability of near-term extinction in order to avoid unpalatable implications for observer proliferation across branches, yet this relies on unresolved questions of self-locating uncertainty in branching universes. Critics in the x-risk literature of the 2020s argue that such extensions overextend anthropic reasoning, favoring causal risk assessments (e.g., AI alignment failures at 10–20% probability by 2100) over probabilistic priors that risk conflating observation selection with actual extinction drivers. Empirical validation awaits longitudinal data on the global population peak, projected by UN estimates to stabilize near 10.4 billion by the 2080s before potential declines, offering a partial falsification test for doomsday predictions if cumulative births eventually exceed roughly 200 billion without catastrophe. However, the argument underscores epistemic caution against priors assuming indefinite expansion, as Fermi-type observations (no detected extraterrestrial civilizations despite billions of years of galactic habitability) suggest filters that may cap observer numbers without invoking infinite futures. These debates keep the doomsday argument's status open, pending advances in anthropic formalisms and extinction-risk modeling.
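As an illustration of the falsification point above, the scale-invariant posterior used in earlier sections assigns the following probabilities to various cumulative birth totals; the thresholds are arbitrary examples and n is the assumed current birth rank.

n = 117e9
for threshold in (200e9, 500e9, 2.34e12, 1e13):
    prob = n / threshold                  # P(N > threshold | n) = n / threshold
    print(f"P(N > {threshold:.2e}) = {prob:.3f}")
# Exceeding ~200 billion births would sit near the posterior median (~234 billion);
# reaching several trillion would fall outside the 95% bound discussed earlier.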
