What might we infer about optimized futures?

It is plausible to assume that technology will keep on advancing along various dimensions until it hits fundamental physical limits. We may refer to futures that involve such maxed-out technological development as “optimized futures”.

My aim in this post is to explore what we might be able to infer about optimized futures. Most of all, my aim is to advance this as an important question that is worth exploring further.


Contents

  1. Optimized futures: End-state technologies in key domains
  2. Why optimized futures are plausible
  3. Why optimized futures are worth exploring
  4. What can we say about optimized futures?
    1. Humanity may be close to (at least some) end-state technologies
    2. Optimized civilizations may be highly interested in near-optimized civilizations
    3. Strong technological convergence across civilizations?
    4. If technology stabilizes at an optimum, what might change?
    5. Information that says something about other optimized civilizations as an extremely coveted resource?
  5. Practical implications?
    1. Prioritizing values and institutions rather than pushing for technological progress?
    2. More research
  6. Conclusion
  7. Acknowledgments

Optimized futures: End-state technologies in key domains

The defining feature of optimized futures is that they entail end-state technologies in various key domains: technologies that cannot be further improved. These domains include computing power, data storage, speed of travel, maneuverability, materials technology, precision manufacturing, and so on.

Of course, there may be significant tradeoffs between optimization across these respective domains. Likewise, there could be forms of “ultimate optimization” that are only feasible at an impractical cost — say, at extreme energy levels. Yet these complications are not crucial in this context. What I mean by “optimized futures” are futures that involve practically optimal technologies within key domains (such as those listed above).

Why optimized futures are plausible

There are both theoretical and empirical reasons to think that optimized futures are plausible (by which I here mean that they are at least somewhat probable — perhaps more than 10 percent likely).

Theoretically, if the future contains advanced goal-driven agents, we should generally expect those agents to want to achieve their goals in the most efficient ways possible. This in turn predicts continual progress toward ever more efficient technologies, at least as long as such progress is cost-effective.

Empirically, we have an extensive record of goal-oriented agents trying to improve their technology so as to better achieve their aims. Humanity has gone from having virtually no technology to creating a modern society surrounded by advanced technologies of various kinds. And even in our modern age of advanced technology, we still observe persistent incentives and trends toward further improvements in many domains of technology — toward better computers, robots, energy technology, and so on.

It is worth noting that the technological progress we have observed throughout human history has generally not been the product of some overarching collective plan that was deliberately aimed at technological progress. Instead, technological progress has in some sense been more robust than that, since even in the absence of any overarching plan, progress has happened as the result of ordinary demands and desires — for faster computers, faster and safer transportation, cheaper energy, etc.

This robustness is a further reason to think that optimized futures are plausible: even without any overarching plan aimed toward such a future, and even without any individual human necessarily wanting continued technological development leading to an optimized future, we might still be pulled in that direction all the same. And, of course, this point about plausibility applies to more than just humans: it applies to any set of agents who will be — or have been — structuring themselves in a sufficiently similar way so as to allow their everyday demands to push them toward continued technological development.

An objection against the plausibility of optimized futures is that there might be a lot of hidden potential for progress far beyond what our current understanding of physics seems to allow. However, any such hidden potential would presumably be discovered eventually, and it seems probable that it would likewise be exhausted at some point, even if this happens later and at more extreme limits than we currently envision. That is, the broad claim that there will ultimately be some fundamental limits to technological development is not predicated on the narrower claim that our current understanding of those limits is necessarily correct; the broader claim is robust to quite substantial extensions of currently envisioned limits. Indeed, the claim that there will be no fundamental limits to future technological development seems overall a stronger and less empirically grounded claim than the claim that there will be such limits (cf. Lloyd, 2000; Krauss & Starkman, 2004).

Why optimized futures are worth exploring

The plausibility of optimized futures is one reason to explore them further, and arguably a sufficient reason in itself. Another reason is the scope of such futures: the futures that contain the largest numbers of sentient beings will most likely be optimized futures, suggesting that we have good reason to pay disproportionate attention to such futures, beyond what their degree of plausibility might suggest.

Optimized futures are also worth exploring given that they seem to be a likely point of convergence for many different kinds of technological civilizations. For example, an optimized future seems a plausible outcome of both human-controlled and AI-controlled Earth-originating civilizations, and it likewise seems a plausible outcome of advanced alien civilizations. Thus, a better understanding of optimized futures can potentially apply robustly to many different kinds of future scenarios.

An additional reason it is worth exploring optimized futures is that they overall seem quite neglected, especially given how plausible and consequential such futures appear to be. While some efforts have been made to clarify the physical limits of technology (see e.g. Sandberg, 1999; Lloyd, 2000; Krauss & Starkman, 2004), almost no work has been done on the likely trajectories and motives of civilizations with optimized technology, at least to my knowledge.

Lastly, the assumption of optimized technology is a rather strong constraint that might enable us to say quite a lot about futures that conform to that assumption, suggesting that this could be a fruitful perspective to adopt in our attempts to think about and predict the future.

What can we say about optimized futures?

The question of what we can say about optimized futures is a big one that deserves elaborate analysis. In this section, I will merely raise some preliminary points and speculative reflections.

Humanity may be close to (at least some) end-state technologies

One point worth highlighting is that, if current growth rates persist, humanity could develop end-state technology in information processing within a few hundred years, perhaps 250 years at most (assuming that our current understanding of the relevant physics is largely correct).

So at least in this important respect, and under the assumption of continued steady growth, humanity is surprisingly close to reaching an optimized future (cf. Lloyd, 2000).
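
As a rough illustration of how such an estimate can be derived (a back-of-the-envelope sketch using illustrative figures, not exact numbers from the cited sources): Lloyd's bound for a 1 kg computer is on the order of 10^50 operations per second, whereas present-day devices perform very roughly 10^12 operations per second. If computing power keeps doubling roughly every two years, the gap would be closed in

\[
\log_2\!\left(\frac{10^{50}}{10^{12}}\right) \approx 126 \ \text{doublings} \ \times \ 2 \ \text{years per doubling} \ \approx \ 250 \ \text{years}.
\]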

Optimized civilizations may be highly interested in near-optimized civilizations

Such potential closeness to an optimized future could have significant implications in various ways. For example, if, hypothetically, there exists an older civilization that has already reached a state of optimized technology, any younger civilization that begins to approach optimized technologies within the same cosmic region would likely be of great interest to that older civilization.

One reason it might be of interest is that the optimized technologies of the younger civilization could potentially become competitive with the optimized technologies of the older civilization, and hence the older civilization may see a looming threat in the younger civilization’s advance toward such technologies. After all, since optimized technologies would represent a kind of upper bound of technological development, it is plausible that different instances of such technologies could be competitive with each other regardless of their origins.

Another reason the younger civilization might be of interest is that its trajectory could provide valuable information regarding the likely trajectories and goals of distant optimized civilizations that the older civilization may encounter in the future. (More on this point below.)

Taken together, these considerations suggest that if a given civilization is approaching optimized technology, and if there is an older civilization with optimized technology in its vicinity, this older civilization should take an increasing interest in this younger civilization so as to learn about it before the older civilization might have to permanently halt the development of the younger one.

Strong technological convergence across civilizations?

Another implication of optimized futures is that the technology of advanced civilizations across the universe might be remarkably convergent. Indeed, there are already many examples of convergent evolution in biology on Earth (e.g. eyes and large brains evolving several times independently). Likewise, many cases of convergence are found in cultural evolution, both in early history (e.g. the independent emergence of farming, cities, and writing across the globe) and in recent history (e.g. independent discoveries in science and mathematics).

Yet the degree of convergence could well be even more pronounced in the case of the end-state technologies of advanced civilizations. After all, this is a case where highly advanced agents are bumping up against the same fundamental constraints, and the optimal engineering solutions in the face of these constraints will likely converge toward the same relatively narrow space of optimal designs — or at least toward the same narrow frontier of optimal designs given potential tradeoffs between different abilities.

In other words, the technologies of advanced civilizations might be far more similar and more firmly dictated by fundamental physical limits than we intuitively expect, especially given that we in our current world are used to seeing continually changing and improving technologies.

If technology stabilizes at an optimum, what might change?

The plausible convergence and stabilization of technological hardware also raises the interesting question of what, if anything, might change and vary in optimized futures.

This question can be understood in at least two distinct ways: what might change or vary across different optimized civilizations, and what might change over time within such civilizations? And note that prevalent change of the one kind need not imply prevalent change of the other kind. For example, it is conceivable that there might be great variation across civilizations, yet virtually no change in goals and values over time within civilizations (cf. “lock-in scenarios”).

Conversely, it is conceivable that goals and values change greatly over time within all optimized civilizations, yet such change could in principle still be convergent across civilizations, such that optimized civilizations tend to undergo roughly the same pattern of changes over time (though such convergence admittedly seems unlikely conditional on there being great changes over time in all optimized civilizations).

If we assume that technological hardware becomes roughly fixed, what might still change and vary — both over time and across different civilizations — includes the following (I am not claiming that this is an exhaustive list):

  • Space expansion: Civilizations might expand into space so as to acquire more resources; and civilizations may differ greatly in terms of how much space they manage to acquire.
  • More or different information: Knowledge may improve or differ over time and space; even if fundamental physics gets solved fairly quickly, there could still be knowledge to gain about, for example, how other civilizations tend to develop.
    • There would presumably also be optimization for information that is useful and actionable. After all, even a technologically optimized probe would still have limited memory, and hence there would be a need to fill this memory with the most relevant information given its tasks and storage capacity.
  • Different algorithms: The way in which information is structured, distributed, and processed might evolve and vary over time and across civilizations (though it is also conceivable that algorithms will ultimately converge toward a relatively narrow space of optima).
  • Different goals and values: As mentioned above, goals and values might change and vary, such as due to internal or external competition, or (perhaps less likely) through processes of reflection.

In other words, even if everyone has — or is — practically the same “iPhone End-State”, what is running on these iPhone End-States, and how many of them there are, may still vary greatly, both across civilizations and over time. And these distinct dimensions of variation could well become the main focus of optimized civilizations, plausibly becoming the main dimensions on which civilizations seek to develop and compete.

Note also that there may be conflicts between improvements along these respective dimensions. For example, perhaps the most aggressive forms of space expansion could undermine the goal of gaining useful information about how other civilizations tend to develop, and hence advanced civilizations might avoid or delay aggressive expansion if the information in question would be sufficiently valuable (cf. the “info gain motive”). Or perhaps aggressive expansion would pose serious risks at the level of a civilization’s internal coordination and control, thereby risking a drift in goals and values.

In general, it seems worth trying to understand what might be the most coveted resources and the most prioritized domains of development for civilizations with optimized technology. 

Information that says something about other optimized civilizations as an extremely coveted resource?

As hinted above, one of the key objectives of a civilization with optimized technology might be to learn, directly or indirectly, about other civilizations that it could encounter in the future. After all, if a civilization manages to both gain control of optimized technology and avoid destructive internal conflicts, the greatest threat to its apex status over time will likely be other civilizations with optimized technology. More generally, the main determinant of an optimized civilization’s success in achieving its goals — whether it can maintain an unrivaled apex status or not — could well be its ability to predict and interact gainfully with other optimized civilizations.

Thus, the most precious resource for any civilization with optimized technology might be information that can prepare this civilization for better exchanges with other optimized agents, whether those exchanges end up being cooperative, competitive, or outright aggressive. In particular, since the technology of optimized civilizations is likely to be highly convergent, the most interesting features to understand about other civilizations might be what kinds of institutions, values, decision procedures, and so on they end up adopting — the kinds of features that seem more contingent.

But again, I should stress that I mention these possibilities as speculative conjectures that seem worth exploring, not as confident predictions.

Practical implications?

In this section, I will briefly speculate on the implications of the prospect of optimized futures. Specifically, what might this prospect imply in terms of how we can best influence the future?

Prioritizing values and institutions rather than pushing for technological progress?

One implication is that there may be limited long-term payoffs in pushing for better technology per se, and that it might make more sense to prioritize the improvement of other factors, such as values and institutions. That is, if the future is in any case likely to be headed toward some technological optimum, and if the values and institutions (etc.) that will run this optimal technology are more contingent and “up for grabs”, then it arguably makes sense to prioritize those more contingent aspects.

To be clear, this is not to say that values and institutions will not also be subject to significant optimization pressures that push them in certain directions, but these pressures will plausibly still be weaker by comparison. After all, a wide range of values will imply a convergent incentive to create optimized technology, yet optimized technology seems compatible with a wide range of values and institutions. And it is not clear that there is a similarly strong pull toward some “optimized” set of values or institutions given optimized technology.

This perspective is arguably also supported by recent history. For example, we have seen technology improve greatly, with computing power heading in a clear upward direction over the past decades. Yet if we look at our values and institutions, it is much less clear whether they have moved in any particular direction over time, let alone an upward direction. Our values and institutions seem to have faced much less of a directional pressure compared to our technology.

More research

Perhaps one of the best things we can do to make better decisions with respect to optimized futures is to do research on such futures. The following are some broad questions that might be worth exploring:

  • What are the likely features and trajectories of optimized futures?
    • Are optimized futures likely to involve conflicts between different optimized civilizations?
    • Other things being equal, is a smaller or a larger number of optimized civilizations generally better for reducing risks of large-scale conflicts?
    • More broadly, is a smaller or larger number of optimized civilizations better for reducing future suffering?
  • What might the likely features and trajectories of optimized futures imply in terms of how we can best influence the future?
  • Are there some values or cooperation mechanisms that would be particularly beneficial to instill in optimized technology?
    • If so, what might they be, and how can we best work to ensure their (eventual) implementation?

Conclusion

The future might in some ways be more predictable than we imagine. I am not claiming to have drawn any clear or significant conclusions about how optimized futures are likely to unfold; I have mostly aired various conjectures. But I do think the question is valuable, and that it may provide a helpful lens for exploring how we can best impact the future.

Acknowledgments

Thanks to Tobias Baumann for helpful comments.

Suffering, Infinity, and Universe Anti-Natalism

Questions that concern infinite amounts of value seem worth spending some time contemplating, even if those questions are of a highly speculative nature. For instance, if we assume a general expected value framework of a kind where we evaluate the expected value of a given outcome based on its probability multiplied by its value, then any more than an infinitesimal probability of an outcome that has infinite value would imply that this outcome has infinite expected value, and hence that its expected value would trump that of any outcome with a “mere” finite amount of value.
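
To make the simple arithmetic behind this point explicit (a minimal formal restatement, with p denoting the probability of the infinite-value outcome and V its value):

\[
\mathbb{E} = p \cdot V, \qquad V = \infty \ \text{and} \ p > 0 \ \Rightarrow \ \mathbb{E} = \infty,
\]

which exceeds the expected value of any outcome whose value is finite, no matter how probable that finite-value outcome is.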

Therefore, on this framework, even strongly convinced finitists are not exempt from taking seriously the possibility that infinities, of one ethically relevant kind or another, may be real. For however strong a conviction one may hold, maintaining only an infinitesimal probability that infinite value outcomes of some sort could be real seems difficult to defend.

Bounding the Influence of Expected Value Thinking

It is worth making clear, as a preliminary note, that we may reasonably put a bound on how much weight we give such an expected value framework in our ethical deliberations, so as to avoid crazy conclusions and actions; or simply to preserve our sanity, which may also be a priority for some.

In fact, it is easy to point to good reasons for why we should constrain the influence of such a framework on our decisions. For although it seems implausible to entirely reject such an expected value framework in one’s moral reasoning, it would seem equally implausible to consider such a framework complete and exhaustive in itself. One reason is that thinking in terms of expected value is just one way to theorize about the world among many others, and it seems difficult to justify granting it a particularly privileged status among these, especially given a tool-like conception of our thinking: if all our thinking about the world is best thought of as a tool that helps us navigate in the world, rather than a set of Platonic ideals that perfectly track truths in a transcendent way, it seems difficult to elevate a single class of these tools, such as thinking in terms of expected value, to a higher status than all others. Another reason is that we cannot readily put numbers on most things in practice, both due to a lack of time in most real-world situations and because, even when we do have time, the numbers we assign are often bound to be entirely speculative, if at all meaningful in the first place.

Just as we need more than theoretical physics to navigate in the physical world, it seems likely that we will do well not to rely solely on an expected value framework to navigate the moral landscape, and this holds true even if all we care about is to maximize or minimize the realization of a certain class of states. Using only a single style of thinking makes us inherently vulnerable to mistakes in our judgments, and hence resting everything on one style of thinking without limits seems risky and unwise.

It therefore seems reasonable to limit the influence of this framework, and indeed any single framework, and one proposed way of doing so is by giving it only a limited number of the seats of one’s notional moral parliament; say, 40 percent of them. In this way, we should be better able to avoid the vulnerabilities of relying on a single framework, while remaining open to being guided by its inputs.

What Can Be the Case?

To get an overview, let us begin by briefly surveying (at least some of) the landscape of the conceivable possibilities concerning the size of the universe. Or, more precisely, the conceivable possibilities concerning the axiological size of the universe. For it is indeed possible, at least abstractly, for the universe to be physically finite, yet axiologically infinite; for instance, if some states of suffering are infinitely disvaluable, then a universe containing one or more of such states would be axiologically infinite, even if physically finite.

In fact, a finite universe containing such states could be worse, indeed infinitely worse, than even a physically infinite universe containing an infinite amount of suffering, if the states of suffering realized in the finite universe are more disvaluable than the infinitely many states of suffering found in the physically infinite universe. (I myself find the underlying axiological claim here more than plausible: that a single instance of certain states of suffering — torture, say — is more disvaluable than infinitely many instances of milder states of suffering, such as pinpricks.)

It is also conceivable that the universe is physically infinite, yet axiologically finite; if, for instance, our axiology is non-additive, if the universe contains only infinitesimal value throughout, or if only a freak bubble of it contains entities of value. This last option may seem impossibly unlikely, yet it is conceivable. Infinity does not imply infinite repetition; the infinite sequence (1, 0, 0, 0, …) does not logically have to contain 1 again, and indeed doesn’t.

In terms of physical size, there are various ways in which infinity can be realized. For instance, the universe may be both temporally and spatially infinite in terms of its extension. Or it may be temporally bounded while spatially infinite in extension, or vice versa: be spatially finite, yet eternal. It should be noted, though, that these two may be considered equivalent, if we view only points in space and time as having value-bearing potential (arguably the only view consistent with physicalism, ultimately), and view space and time as a four-dimensional structure. Then one of these two universes will have infinite “length” and finite “breadth”, while the opposite is true of the other one, and a similar shape can thus be obtained via “90 degree” rotation.

Similarly, it is also conceivable (and perhaps plausible) that the universe has a finite past and an infinite future, in which case it will always have a finite age, or it could have an infinite past and a finite future. Or, equivalently in spatial terms, be bounded in one spatial direction, yet have infinite extension in another.

Yet infinite extension is not the only conceivable way in which physical infinity may conceivably be realized. Indeed, a bounded space can, at least in one sense, contain more elements than an unbounded one, as exemplified by the cardinality of the real numbers in the interval (0, 1) compared to all the natural numbers. So not only might the universe be infinite in terms of extension, but also in terms of its divisibility — i.e. in terms of notional sub-worlds we may encounter as we “zoom down” at smaller scales — which could have far greater significance than infinite extension, at least if we believe we can use cardinality as a meaningful measure of size in concrete reality.
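
In standard set-theoretic terms, the fact referenced here is that

\[
\left|\,(0,1)\,\right| = 2^{\aleph_0} > \aleph_0 = \left|\mathbb{N}\right|,
\]

as established by Cantor’s diagonal argument: the bounded interval has strictly greater cardinality than the unbounded set of natural numbers.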

Taking this possibility into consideration as well, we get even more possible combinations — infinitely many, in fact. For example, we can conceive of a universe that is bounded both spatially and temporally, yet which is infinitely divisible. And it can then be infinitely divisible in infinitely many different ways. For instance, it may be divisible in such a way that it has the same cardinality as the natural numbers, i.e. its set of “sub-worlds” is countably infinite, or it could be divisible with the same cardinality as the real numbers, meaning that it consists of uncountably many “sub-worlds”. And given that there is no largest cardinality, we could continue like this ad infinitum.

One way we could try to imagine the notional place of such small worlds in our physical world is by conceiving of them as in some sense existing “below” the Planck scale, each with their own Planck scale below which even more worlds exist, ad infinitum. Many more interesting examples of different kinds of combinations of the possibilities reviewed so far could be mentioned.

Another conceivable, yet supremely speculative, possibility worth contemplating is that the size of the universe is not set in stone, and that it may be up to us/the universe itself to determine whether it will be infinite, and what “kind” of infinity.

Lastly, it is also conceivable that the size of the universe, both in physical and axiological terms, cannot faithfully be conceived of with any concept available to us. So although the conceivable possibilities are infinite, it remains conceivable that none of them are “right” in any meaningful sense.

What Is the Case? — Infinite Uncertainty?

Unfortunately, we do not know whether the universe is infinite or not; or, more generally, which of the possibilities mentioned above are true of our condition. And there are reasons to think that we will never know with great confidence. For even if we were to somehow encounter a boundary encapsulating our universe, or otherwise find strong reasons for believing in one, how could we possibly exclude the possibility that there is something beyond that boundary? (Not to mention that the universe might still be infinitely divisible even if bounded.) Or, alternatively, even if we thought we had good reasons to believe that our universe is infinite, how can we be sure that the limited data we base that conclusion on can be generalized to locations arbitrarily far away from us? (This is essentially the problem of induction.)

Yet even if we thought we did know whether the universe is infinite with great confidence, the situation would arguably not be much different. For if we accept the proposition that we should have more than infinitesimal credence in any empirical claim about the world, what is known as Cromwell’s rule (I have argued that this applies to all claims, not just [stereotypically] “empirical” claims), then, on our general expected value framework, it would seem that any claim about the reality of infinite value outcomes should always be taken seriously, regardless of our specific credences in specific physical and axiological models of the universe.

In fact, not only should the conceivable realizations of infinity reviewed above be taken seriously (at least to the extent that they imply outcomes with infinite (dis)value), but so should a seemingly even more outrageous notion, namely that infinite (dis)value may be at stake in any given action we take. However small a non-zero real-valued probability we assign such a claim — e.g. that the way you prepare your coffee tomorrow morning is going to impact an infinite amount of (dis)value — the expected value of getting that action, or indeed any action, right remains infinite.

How should we act in light of this outrageous possibility?

Pascalian and Counter-Pascalian Claims

The problem, or perhaps our good fortune, is that, in most cases arguably, we do not seem to have reason to believe that one course of action is more likely to have an infinitely better outcome than another. For example, in the case of the morning coffee, we appear to have no more reason to believe that, say, making a strong cup of coffee will lead to infinitely more disvalue than making a mild one will, rather than it being the other way around. For such hypotheses, we seem able to construct an equal and oppositely directed counter-hypothesis.

Yet even if we concede that this is the case most of the time, what about situations where this is not the case? What about choices where we do have slightly better reasons to believe that one outcome will be infinitely better than another one?

This is difficult to address in the absence of any concrete hypotheses or scenarios, so I shall here consider the two specific cases, or classes of scenarios, where a plausible reason may be given in favor of thinking that one course of action will influence infinitely more value than another. One is the case of an eternal civilization: our actions may impact infinite (dis)value by impacting whether, and in what form, an eternal civilization will exist in our universe.

In relation to the (extremely unlikely) prospect of the existence of such a civilization, it seems that we could well find reasons to believe that we can impact an infinite amount of value. But the crucial question is: how? From the perspective of negative utilitarianism, it is far from clear what outcomes are most likely to be infinitely better than others. This is especially true in light of the other class of ways in which we may plausibly impact infinite value that I shall consider here, namely by impacting the creation of, or the unfolding of events in, parallel universes, which may eventually be infinitely numerous.

For not only could an eternal civilization that is the descendant of ours be better in “our universe” than another eternal civilization that may emerge in our place if we go extinct; it could also be better with respect to its effects on the creation of parallel universes, in which case it may be best for negative utilitarians to work to preserve our civilization, contrary to what is commonly considered the ultimate corollary of negative utilitarianism. Indeed, this could be the case even if no other civilization were to emerge instead of ours: if the impact our civilization will have on other universes results in less suffering than what would otherwise be created naturally. It is, of course, also likely that the opposite is the case: that the continuation of our civilization would be worse than another civilization or no civilization.

So in these cases where reasons pointing more in one way than another plausibly could be found, it is not clear which direction that would be. Except perhaps in the direction that we should do more research on this question: which actions are more likely to reduce infinitely more suffering than others? Indeed, from the point of view of a suffering-focused expected value framework, it would seem that this should be our highest priority.

Ignoring Small Credences?

In his paper on infinite ethics, Nick Bostrom argues that it is extraordinarily unlikely that we would end up with perfectly balanced credences when one choice might have infinitely better consequences than another:

This cancellation of probabilities would have to be perfectly accurate, down to the nineteenth decimal place and beyond. […]

It would seem almost miraculous if these motley factors, which could be subjectively correlated with infinite outcomes, always managed to conspire to cancel each other out without remainder. Yet if there is a remainder—if the balance of epistemic probability happens to tip ever so slightly in one direction—then the problem of fanaticism remains with undiminished force. Worse, its force might even be increased in this situation, for if what tilts the balance in favor of a seemingly fanatical course of action is the merest hunch rather than any solid conviction, then it is so much more counterintuitive to claim that we ought to pursue it in spite of any finite sacrifice doing so may entail. The “exact-cancellation” argument threatens to backfire catastrophically.

I do not happen to share Bostrom’s view, however. Apart from the aforementioned bounding of the influence of expected value thinking, there is also a way, from within the expected value framework itself, to avoid the apparent craziness of letting our actions rest on the slightest hunch: disregarding sufficiently low credences.

Bostrom is skeptical of this approach:

As a piece of pragmatic advice, the notion that we should ignore small probabilities is often sensible. Being creatures of limited cognitive capacities, we do well by focusing our attention on the most likely outcomes. Yet even common sense recognizes that whether a possible outcome can be ignored for the sake of simplifying our deliberations depends not only on its probability but also on the magnitude of the values at stake. The ignorable contingencies are those for which the product of likelihood and value is small. If the value in question is infinite, even improbable contingencies become significant according to common sense criteria. The postulation of an exception from these criteria for very low-likelihood events is, at the very least, theoretically ugly.

Yet Bostrom here seems to ignore that “the value in question” is infinite for every action, cf. the point that we should maintain some non-zero credence in any empirical claim, including the claim that any given action may effect an infinite amount of (dis)value.

So no action we can point toward is fundamentally different from any other in this respect. The only difference is just whether one action might be more likely to be infinitely better compared to any other action. And when it comes to such credences, I would argue that it is reasonable to ignore sufficiently small probabilities.

First, one could argue that, just as most models of physics break down beyond a certain range, it is reasonable to expect that our ability to discriminate between different credence levels breaks down when we reach a sufficiently fine scale. This is also well in line with the fact that it is generally difficult to put precise numbers on our credence levels with respect to specific claims. Thus, one could argue that we are typically way past the range of error of our intuitive credences when we reach the nineteenth decimal place.

This conclusion can also be reached via a rather different consideration: one can argue that our entire ontological and epistemological framework itself cannot be assumed credible with absolute certainty. Therefore, it would seem that our entire worldview, including this framework of assigning numerical values, or indeed any order at all, to our credences, should itself be assigned some credence of being wrong. And one can then argue, quite reasonably, that once we reach a level of credence in any claim that is lower than our level of credence in, say, the meaningfulness of ascribing credences in this way in the first place, this specific credence should generally be ignored, as it lies beyond what we consider the range of reliability of this framework in the first place.

In sum, I think it is fair to say that, when we only have a tiny credence that some action may be infinitely better than another, we should do more research and look for better reasons to act on rather than to act on these hunches. We can reasonably ignore exceptionally small credences in practice, as we already do every time we make a decision based on calculations of finite expected values — we then ignore the tiny credence we should have that the value of the outcomes in question is infinite.

Infinitarian Paralysis?

Another thing Bostrom treats in his paper is whether the existence of infinite value implies, on aggregative consequentialist views, that it makes no difference what we do. As he puts it:

Aggregative consequentialist theories are threatened by infinitarian paralysis: they seem to imply that if the world is canonically infinite then it is always ethically indifferent what we do. In particular, they would imply that it is ethically indifferent whether we cause another holocaust or prevent one from occurring. If any non-contradictory normative implication is a reductio ad absurdum, this one is.

To elaborate a bit: the reason it is supposed to be indifferent whether we cause another holocaust is that the net sum of value in the universe supposedly is the same either way: infinite.

It should be noted, though, that whether this really is a problem depends on how we define and calculate the “sum of value”. And the question is then whether we can define this in a meaningful way that avoids absurdities and provides us with a useful ethical framework we can act on.

A potential solution to this conundrum is to give up our attachment to cardinal arithmetic. In a way, this is obvious: if you have an infinite set and add finitely many elements to it, you still have “the same as before”, in terms of the cardinality of the set. Yet, in another sense, we of course do not get “the same as before”, in that the new infinite set is not identical to the one we had before. Therefore, if we insist that adding another holocaust to a universe that already contains infinitely many holocausts should make a difference, we are simply forced to abandon standard cardinal arithmetic. In its stead, we should arguably just take our requirement as an axiom: that adding any amount of value to an infinity of value does make a difference — that it does change the “sum of value”.

This may seem simplistic, and one may reasonably ask how this “sum of value” could be defined. A simple answer is that we could add up whatever (presumably) finite difference we make within the larger (hypothetically) infinite world, and then treat that as the relevant sum of value that should determine our actions, which is what has been referred to as “the causal approach” to this problem.

This approach has been met with various criticisms, one of them being that it leaves “the total sum of value” unchanged. As Bostrom puts it:

One consequence of the causal approach is that there are cases in which you ought to do something, and ought to not do something else, even though you are certain that neither action would have any effect at all on the total value of the world.

Yet it is worth noting that “the total value of the world” is not left unchanged on every definition of these terms; it just is on one particular definition, one that we arguably have good reason to consider implausible, since it implies that adding another holocaust makes no difference to the “total value of the world”. If we can help alleviate the extreme suffering of just a single being, while keeping all else equal, this being will hardly agree that “the total value of the world” was left unchanged by our actions, at least in the most plausible sense.

Imagine by analogy a hypothetical Earth identical to ours, with the two exceptions that 1) it has been inhabited by humans for an eternal and unalterable past, over which infinitely many holocausts have taken place, and 2) it has a finite future; the universe it inhabits will end peacefully in a hundred years. Now, if people on this Earth held an ethical theory that does not take its unalterable infinite past into account, and instead focuses on the finite future, including preventing holocausts from happening in their future, would this count against that theory in any way? I fail to see how it could, and yet this is essentially the same as taking the causal approach within an infinite universe, only phrased in purely temporal rather than spatio-temporal terms.

Another criticism that has been leveled against the causal approach is that we cannot rule out that our causal impact may in some sense be infinite, and therefore it is problematic to say that we should just measure the world’s value, and take action based on, whatever finite difference we make. Here is Bostrom again:

When a finite positive probability is assigned to scenarios in which it is possible for us to exert a causal effect on an infinite number of value-bearing locations […] then the expectation value of the causal changes that we can make is undefined. Paralysis will thus strike even when the domain of aggregation is restricted to our causal sphere of influence.

Yet these claims actually do not follow. First, it should again be noted that the situation Bostrom refers to here is in fact the situation we are always in: we should always assign a positive probability to the possibility that we may effect infinite (dis)value. Second, we should be clear that the scenario where we can impact an infinite amount of value, and where we aggregate over the realm we can influence, is fundamentally different from the scenario in which we aggregate over an infinite universe that contains an infinite amount of value that we cannot impact. To the extent there are threats of “infinitarian paralysis” in these two scenarios, they are not identical.

For example, Bostrom’s claim that “the expectation value of the causal changes that we can make is undefined” need not be true even on standard cardinal arithmetic, at least in the abstract (i.e. if we ignore Cromwell’s rule), in the scenario where we focus only on our own future light cone. For it could be that the scenarios in which we can “exert a causal effect on an infinite number of value-bearing locations” are all scenarios that nonetheless contain only finite (dis)value, or, on a dipolar axiology, only a finite amount of disvalue and an infinite amount of value. A concrete example of the latter could be a scenario where the abolitionist project outlined by David Pearce is completed in an eternal civilization after a finite amount of time.

Hence, it is not necessarily the case that “paralysis will strike even when the domain of aggregation is restricted to our causal sphere of influence”, apart from in the sense treated earlier, when we factor in Cromwell’s rule: how should we act given that all actions may effect infinite (dis)value? But again, this is a very different kind of “paralysis” than the one that appears to be Bostrom’s primary concern, cf. this excerpt from the abstract of his paper Infinite Ethics:

Modern cosmology teaches that the world might well contain an infinite number of happy and sad people and other candidate value-bearing locations. Aggregative ethics implies that such a world contains an infinite amount of positive value and an infinite amount of negative value. You can affect only a finite amount of good or bad. In standard cardinal arithmetic, an infinite quantity is unchanged by the addition or subtraction of any finite quantity.

Indeed, one can argue that the “Cromwell paralysis” in a sense negates this latter paralysis, as it implies that it may not be true that we can affect only a finite amount of good or bad, and, more generally, that we should assign a non-zero probability to the claim that we can optimize the value of the universe everywhere throughout, including in those corners that seem theoretically inaccessible.

Adding Always Makes a Difference

As for the infinitarian paralysis supposed to threaten the causal approach in the absence of the “Cromwell paralysis” — how to compare the outcomes we can impact that contain infinite amounts of value? — it seems that we can readily identify reasonable consequentialist principles to act by that should at least allow us to compare some actions and outcomes against each other, including, perhaps, the most relevant ones.

One such principle is the one alluded to in the previous section: that adding something of (dis)value always makes a difference, even if the notional set we are adding it to contains infinitely many similar elements already. In terms of an axiology that holds the amount of suffering in the world to be the chief measure of value, this principle would hold that adding/failing to prevent an instance of suffering always makes for a less valuable outcome, provided that other things are equal, which they of course never quite are in the real world, yet they often are in expectation.

The following abstract example makes, I believe, a strong case for favoring such a measure of (dis)value over the cardinal sum of the units of (dis)value. As I formulate this thought experiment, this unit will, in accordance with my own view, be instances of intense suffering in the universe, yet the point applies generally:

Imagine that we have a universe with a countably infinite number of instances of intense suffering. We may visualize this universe as a unit ball. Now imagine that we perform an act in this universe that leaves the original universe unchanged, yet creates a new universe identical to the first one. The result is a new universe full of suffering. Imagine next that we perform this same act in a world where nothing exists. The result is exactly the same: the creation of a new universe full of suffering, in the exact same amount. In both cases, we have added exactly the same ball of infinite suffering. Yet on standard cardinal arithmetic, the difference the act makes in terms of the sum of instances of suffering is not the same in the two cases. In the first case, the total sum is the same, namely countably infinite, while there is an infinite difference in the second case: from zero to infinity. If we only count the difference added, however — the “delta universe”, so to speak — the acts are equally disvaluable in the two cases. The latter method of evaluating the (dis)value of the act seems far more plausible than does evaluation based on the cardinal sum of the units of (dis)value in the universe. It is, after all, the exact same act.
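
In terms of cardinal arithmetic, the asymmetry the thought experiment points to can be restated briefly (with ℵ₀ standing for the countably infinite number of instances of suffering):

\[
\aleph_0 + \aleph_0 = \aleph_0 \quad \text{(first case: the total sum is unchanged)}, \qquad 0 + \aleph_0 = \aleph_0 \quad \text{(second case: the total goes from zero to infinity)},
\]

even though what is added, the “delta universe” of ℵ₀ instances of suffering, is exactly the same in both cases.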

This is not an idle thought experiment. As noted above, impacting the creation of new universes is one of the ways in which we may plausibly be able to influence an infinite amount of (dis)value, arguably even the most plausible one. Admittedly, it does rest on certain debatable assumptions about physics, yet these assumptions are arguably more likely to hold than an eternal civilization is to exist. For even disregarding specific civilization-hostile facts about the universe (e.g. the end of stars and a rapid expansion of space that is thought to eventually rip ordinary matter apart), we should, for each year in the future, assign a probability strictly lower than 1 that civilization will survive that year, which means that, unless these yearly survival probabilities approach 1 sufficiently quickly, the probability of extinction will become arbitrarily close to 1 within a finite amount of time.
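
To spell out this last step (a minimal sketch, assuming for illustration that the probability of surviving any given year is bounded above by some fixed p < 1):

\[
P(\text{civilization survives the next } N \text{ years}) \le p^{N} \to 0 \quad \text{as} \ N \to \infty,
\]

so the probability of extinction within some finite number of years can be made arbitrarily close to 1.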

In other words, an eternal civilization seems immensely unlikely, even if the universe were to stay perfectly life-friendly forever. The same does not seem true of the prospect of influencing the generation of new universes. As far as I can tell, the latter is in a ballpark of its own when it comes to plausible ways in which we may be able to effect infinite (dis)value, which is not to say that universe creation is more likely than not to become possible, but merely that it seems significantly more likely than other ways we know of in which we could effect infinite (dis)value (though, again, our knowledge of “such ways” is admittedly limited at this point, and something we should probably do more research on). Not only that, it is also something that could be relevant in the relatively near future, and more disvalue could depend on a single such near-future act of universe creation than what is found, intrinsically at least, in the entire future of our civilization. Infinitely more, in fact. Thus, one could argue that it is not our impact on the quality of life of future generations in our civilization that matters most in expectation, but our impact on the generation of universes by our civilization.

Universe Anti-Natalism: The Most Important Cause?

It is therefore not unthinkable that the main question of concern for consequentialists should be: how does a given action impact the creation of new universes? Or, similarly, that trying to impact future universe generation should be the main cause for aspiring effective altruists. And I would argue that the form this cause should take is universe anti-natalism: avoiding, or minimizing, the creation of new universes.

There are countless ways to argue for this. As Brian Tomasik notes, creating a new universe that in turn gives rise to infinitely many universes “would cause infinitely many additional instances of the Holocaust, infinitely many acts of torture, and worse. Creating lab universes would be very bad according to several ethical views.”

Such universe creation would obviously be wrong from the stance of negative utilitarianism, as well as from similar suffering-focused views. It would also be wrong according to what is known as The Asymmetry in population ethics: that creating beings with bad lives is wrong, and something we have an obligation to not do, while failing to create happy lives is not wrong, and we have no obligation to bring such lives into being. A much weaker, and even less controversial, stance on procreative ethics could also be used: do not create lives with infinite amounts of torture.

Indeed, how, we must ask ourselves, could a benevolent being justify bringing so much suffering into being? What could possibly justify the Holocaust, let alone infinitely many of them? What would be our answer to the screams of “why” to the heavens from the torture victims?

Universe anti-natalism should also be taken seriously by classical utilitarians, as a case can be made that the universe is likely to end up being net negative in terms of algo-hedonic tone. For instance, it may well be that most sentient life that will ever exist will find itself in a state of natural carnage, since civilizations may be rare even on planets where sentient life has emerged, and since, even where civilizations have emerged, they may be unlikely to be sustainable, perhaps overwhelmingly so. This implies that most sentient life might be expected to exist at the stage it has existed on for the entire history of sentient life on Earth: a stage where sentient beings are born in great numbers only for the vast majority of them to die shortly thereafter, for instance due to starvation or by being eaten alive, which is most likely a net negative condition, even by wishful classical utilitarian standards. Simon Knutsson’s essay How Could an Empty World Be Better than a Populated One? is worth reading in this context, and of course applies to “no world” as well.

And if one takes a so-called meta-normative approach, where one decides by averaging over various ethical theories, one could argue that the case against universe creation becomes significantly stronger; if one for instance combines an unclear or negative-leaning verdict from a classical utilitarian stance with The Asymmetry and Kantian ethics.

As for those who hold anti-natalism at the core of their values, one could argue that they should make universe anti-natalism their main focus over human anti-natalism (which may not even reduce suffering in expectation), or at the very least expand their focus to also encompass this apparently esoteric position. This is not only because the scale is potentially unsurpassable in terms of what prevents the most births, but also because it may be easier: wishful thinking along the lines of “those horrors will not befall my creation” could be more difficult to maintain in the face of horrors that we know have occurred in the past, and we do not seem as attached and adapted, biologically and culturally, to creating new universes as we are to creating new children. And just as anti-natalists argue with respect to human life, being against the creation of new universes need not be incompatible with a responsible sustainment of life in the one that does exist. This might also be a compromise solution that many people would be able to agree on.

Are Other Things Equal?

The discussion above assumes that the generation of a new universe would leave all else equal, or at least leave all else merely “finitely altered”. But how can we be sure that the generation of a new universe would not in fact prevent the emergence of another? Or perhaps even prevent many infinite universes from emerging? We can’t. Yet we do not appear to have any reason for believing that this is the case. As noted above, all else will often be equal in expectation, and that also seems true in this case. We can make counter-Pascalian hypotheses in both directions, and in the absence of evidence for any of them, we appear to have most reason to believe that the creation of a new universe results, in the aggregate, in a net addition of a new universe. But this could of course be wrong.

For instance, artificial universe creation would be dwarfed by the natural universe generation that happens all the time according to inflationary models, so could it not be that the generation of a new universe might prevent some of these natural ones from occurring? I doubt that there are compelling reasons for believing this, but natural universe generation does raise the interesting question of whether we might be able to reduce the rate of this generation. Brian Tomasik has discussed the idea, yet it remains an open, and virtually unexplored, research question. One that could dominate all other considerations.

It may be objected that considerations of identical, or virtually identical, copies of ourselves throughout the universe have been omitted in this discussion, yet as far as I can tell, including such considerations would not change the discussion in a fundamental way. For if universe generation is the main cause and most consequential action to focus on for us, more important even than the intrinsic importance of the entire future of our civilization, then this presumably applies to each copy of ourselves as well. Yet I am curious to hear arguments that suggest otherwise.

A final miscellaneous point I should like to add here is that the points made above may apply even if the universe is, and only ever will be, finite, as the generation of a new finite pocket universe in that case still could bring about far more suffering than what is found in the future light cone of our own universe.

In conclusion, the subjects of the potential to effect infinite (dis)value in general, and of impacting universe generation in particular, are extremely neglected at this point, and a case can be made that more research into such possibilities should be a top priority. It seems conceivable that a question related to such a prospect — e.g. should we create more universes? — will one day be the main ethical question facing our civilization, perhaps even one we will be forced to decide upon in a not too distant future. Given the potentially enormous stakes, it seems worth being prepared for such scenarios — including knowing more about their nature, how likely they are, and how to best act in them — even if they are unlikely.

Ontological Possibilities and the Meaningfulness of Ethics

First written: Sep. 2017. Last update: Dec. 2025.

Are there different possible outcomes given the present state of the universe? One might think that much depends on our answer to this question. For example, if there are no alternative possible futures given the present state of the universe, one might think that ethics and efforts to improve the world would cease to make sense.

An Objection Against the Meaningfulness of Ethics

We can define “global ontological possibilities” as alternative possibilities that could result from the same state of the universe as a whole. Since alternative possibilities in a strong sense seem crucial to ethical deliberation, one might assume that global ontological possibilities are necessary for ethics to get off the ground, and indeed for engagement in ethical decision-making and action to make sense. On this assumption, one could argue that ethics does not make sense due to the non-existence of global ontological possibilities.

To be sure, the assumption that ethics requires global ontological possibilities is highly controversial. For example, one may hold that we can have genuine ontological possibilities at a relative level even if there are no global ontological possibilities, and hold that ethics is meaningful given such relative possibilities. Or one could maintain that purely epistemic or ex-ante possibilities are enough for ethics to make sense.

Yet my goal in this essay is not to question the assumption above. Instead, I will argue that even if one thinks global ontological possibilities are required for ethics to make sense, one cannot reasonably reject the meaningfulness of ethics based on the claim that such possibilities do not exist.

Key Premise: Humility Is Warranted

We do not know whether global ontological possibilities exist. Given our limited understanding of the fundamental nature of reality, it seems reasonable to maintain a degree of humility on this question. Indeed, even if we have reasons to believe that possibilities of this kind most likely do not exist, it still seems overconfident to assign more than, say, a 99.9 percent probability to their non-existence.

Note that the exact probability we assign to the potential existence of global ontological possibilities is not important. The point here is simply that, from our epistemic vantage point, there is a non-zero probability that global ontological possibilities exist.

Why the Objection Fails

The probabilistic premise above implies that it is unwarranted to reject ethics based on the supposed non-existence of global ontological possibilities.

To see why, consider the claim that risks of very bad future outcomes are low. Even if this claim were true, it would not follow that such risks can reasonably be dismissed. After all, when the stakes are sufficiently high, it is not reasonable to dismiss low probabilities. And when we are discussing the meaningfulness of ethics, the stakes could in some sense not be greater, since what is at issue is whether there are any stakes at all. Given such total stakes, even extremely low probabilities are worth taking seriously. Therefore, the mere epistemic possibility that global ontological possibilities are real is sufficient for undermining the above-mentioned objection against the meaningfulness of ethics.

Moreover, when considering the conceivable scenarios before us, an asymmetry emerges in support of the same conclusion. If global ontological possibilities are real, and if ethical action roughly amounts to realizing the best of these possibilities — or at least avoiding the worst — we seem to have good reason to try to realize the better over the worse of these possibilities. On the other hand, if such possibilities are not real, trying to create a better world appears to have no downside in terms of which global ontological possibilities end up getting realized. Thus, when considering these two horns, we seem to have a strong reason in favor of trying to create a better world, and no reason against it.

In sum, even if we grant the controversial premise that ethics requires global ontological possibilities, it does not follow that ethics is meaningless. Given our uncertainty about whether such possibilities exist, and given what is at stake, we have good reason to pursue ethical deliberation and action regardless.
