“Altruism is a marathon, not a sprint”
— Attributed to Robert Wiblin.
Avoiding burnout should be a high priority for activists, both for their own sake and for the sake of those they advocate for. The following is a short list of resources that I have found useful in this regard myself, and which I have often shared with friends in the same predicament.
How Vegans Can Create Healthy Relationships and Communicate Effectively:
Guided Meditation for Activists:
It seems to me that there is a great asymmetry between the attention devoted to arguments in favor of the plausibility of artificial intelligence FOOM/hard take-off scenarios and the attention paid to counter-arguments. This is not so strange given that there are widely publicized full-length books emphasizing the arguments in favor, such as Nick Bostrom’s Superintelligence and James Barrat’s Our Final Invention, while there seems to be no such book emphasizing the opposite. And people who are skeptical of hard take-off scenarios, and who think other things are more important to focus on, will of course tend to write books on those other, in their view more important, things. Consequently, they tend to devote only an essay or a few blog posts, not full-length books, to presenting their arguments. The purpose of this reading list is to correct this asymmetry a bit by pointing people toward some of these blog posts and essays.
I think it is important to get these arguments out there, as it seems to me that we otherwise risk holding an overly one-sided view of this issue, and, not least, risk overlooking other things that may be more important to focus on.
I should note that I do not necessarily agree with all claims and arguments made in these various resources myself, yet I do think all the articles make at least some good points. I should also note that not all the following authors rule out the possibility of a hard take-off, as opposed to merely considering other scenarios more likely.
The Hanson-Yudkowsky AI-Foom Debate (co-authored with Eliezer Yudkowsky)
Yudkowsky vs Hanson — Singularity Debate (YouTube video)
Timothy B. Lee:
AI Risk Critiques: Index (links to many articles)
My own book on the subject:
Short summary and review of my book by Kaj Sotala:
Not directly about the subject, but still relevant books to read in my opinion:
Questions that concern infinite amounts of value seem worth spending some time contemplating, even if those questions are of a highly speculative nature. For instance, if we assume a general expected value framework of a kind where we evaluate the expected value of a given outcome based on its probability multiplied by its value, then any more than an infinitesimal probability of an outcome that has infinite value would imply that this outcome has infinite expected value, and hence that its expected value would trump that of any outcome with a “mere” finite amount of value.
Therefore, on this framework, even strongly convinced finitists are not exempt from taking seriously the possibility that infinities, of one ethically relevant kind or another, may be real. For however strong a conviction one may hold, maintaining only an infinitesimal probability that infinite value outcomes of some sort could be real seems difficult to defend.
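The argument above can be put in a couple of lines under the simple probability-times-value framework assumed here (the symbols are merely illustrative):

```latex
% Expected value as probability times value, as assumed in the text:
\[
  \mathrm{EV}(O) = p(O) \cdot V(O).
\]
% If an outcome O has infinite value and its probability is any more
% than infinitesimal, its expected value is infinite:
\[
  p(O) > 0, \quad V(O) = \infty
  \quad \Longrightarrow \quad
  \mathrm{EV}(O) = \infty ,
\]
% and hence it trumps any outcome with merely finite value.
```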
Bounding the Influence of Expected Value Thinking
It is worth making clear, as a preliminary note, that we may reasonably put a bound on how much weight we give such an expected value framework in our ethical deliberations, so as to avoid crazy conclusions and actions; or simply to preserve our sanity, which may also be a priority for some.
In fact, it is easy to point to good reasons why we should constrain the influence of such a framework on our decisions. For although it seems implausible to entirely reject such an expected value framework in one’s moral reasoning, it would seem equally implausible to consider such a framework complete and exhaustive in itself. One reason is that thinking in terms of expected value is just one way among many of theorizing about the world, and it seems difficult to justify granting it a particularly privileged status among these, especially given a tool-like conception of our thinking: if all our thinking about the world is best thought of as a tool that helps us navigate the world, rather than a set of Platonic ideals that perfectly track truths in a transcendent way, it seems difficult to elevate a single class of these tools, such as thinking in terms of expected value, to a higher status than all others. Another reason is that we cannot readily put numbers on most things in practice, both because we lack the time to do so in most real-world situations and because, even when we do have time, the numbers we assign are often bound to be entirely speculative, if meaningful at all.
Just as we need more than theoretical physics to navigate the physical world, we will likely do well not to rely solely on an expected value framework to navigate the moral landscape, and this holds true even if all we care about is maximizing or minimizing the realization of a certain class of states. Using only a single style of thinking makes us inherently vulnerable to mistakes in our judgments, and hence resting everything on one style of thinking, without limits, seems risky and unwise.
It therefore seems reasonable to limit the influence of this framework, and indeed any single framework, and one proposed way of doing so is by giving it only a limited number of the seats of one’s notional moral parliament; say, 40 percent of them. In this way, we should be better able to avoid the vulnerabilities of relying on a single framework, while remaining open to being guided by its inputs.
What Can Be the Case?
To get an overview, let us begin by briefly surveying (at least some of) the landscape of the conceivable possibilities concerning the size of the universe. Or, more precisely, the conceivable possibilities concerning the axiological size of the universe. For it is indeed possible, at least abstractly, for the universe to be physically finite, yet axiologically infinite; for instance, if some states of suffering are infinitely disvaluable, then a universe containing one or more of such states would be axiologically infinite, even if physically finite.
In fact, a finite universe containing such states could be worse, indeed infinitely worse, than even a physically infinite universe containing an infinite amount of suffering, if the states of suffering realized in the finite universe are more disvaluable than the infinitely many states of suffering found in the physically infinite universe. (I myself find the underlying axiological claim here more than plausible: that a single instance of certain states of suffering — torture, say — is more disvaluable than infinitely many instances of milder states of suffering, such as pinpricks.)
It is also conceivable that the universe is physically infinite, yet axiologically finite; if, for instance, our axiology is non-additive, if the universe contains only infinitesimal value throughout, or if only a freak bubble of it contains entities of value. This last option may seem impossibly unlikely, yet it is conceivable. Infinity does not imply infinite repetition; the infinite sequence ( 1, 0, 0, 0, … ) does not logically have to contain 1 again, and indeed doesn’t.
In terms of physical size, there are various ways in which infinity can be realized. For instance, the universe may be both temporally and spatially infinite in terms of its extension. Or it may be temporally bounded while spatially infinite in extension, or vice versa: be spatially finite, yet eternal. It should be noted, though, that these two may be considered equivalent, if we view only points in space and time as having value-bearing potential (arguably the only view consistent with physicalism, ultimately), and view space and time as a four-dimensional structure. Then one of these two universes will have infinite “length” and finite “breadth”, while the opposite is true of the other one, and a similar shape can thus be obtained via “90 degree” rotation.
Similarly, it is also conceivable (and apparently plausible) that the universe has a finite past and an infinite future, in which case it will always have a finite age, or it could have an infinite past and a finite future. Or, equivalently in spatial terms, be bounded in one spatial direction, yet have infinite extension in another.
Yet extension is not the only way in which physical infinity may conceivably be realized. Indeed, a bounded space can, at least in one sense, contain more elements than an unbounded one, as exemplified by the cardinality of the real numbers in the interval (0, 1) compared to that of all the natural numbers. So not only might the universe be infinite in terms of extension, but also in terms of its divisibility — i.e. in terms of notional sub-worlds we may encounter as we “zoom down” at smaller scales — which could have far greater significance than infinite extension, at least if we believe we can use cardinality as a meaningful measure of size in concrete reality.
Taking this possibility into consideration as well, we get even more possible combinations — infinitely many, in fact. For example, we can conceive of a universe that is bounded both spatially and temporally, yet which is infinitely divisible. And it can then be infinitely divisible in infinitely many different ways. For instance, it may be divisible in such a way that it has the same cardinality as the natural numbers, i.e. its set of “sub-worlds” is countably infinite, or it could be divisible with the same cardinality as the real numbers, meaning that it consists of uncountably many “sub-worlds”. And given that there is no largest cardinality, we could continue like this ad infinitum.
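The cardinality claims in the preceding two paragraphs can be made precise; a minimal sketch:

```latex
% The bounded interval (0,1) is as large as the whole real line,
% via the standard bijection
\[
  f : (0,1) \to \mathbb{R}, \qquad
  f(x) = \tan\!\big(\pi\,(x - \tfrac{1}{2})\big),
\]
% and the reals are strictly larger than the naturals:
\[
  |\mathbb{N}| = \aleph_0 \;<\; 2^{\aleph_0} = |\mathbb{R}| = |(0,1)|.
\]
% Cantor's theorem, |X| < |2^X| for every set X, shows there is no
% largest cardinality, so the hierarchy continues ad infinitum:
\[
  \aleph_0 \;<\; 2^{\aleph_0} \;<\; 2^{2^{\aleph_0}} \;<\; \cdots
\]
```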
One way we could try to imagine the notional place of such small worlds in our physical world is by conceiving of them as in some sense existing “below” the Planck scale, each with their own Planck scale below which even more worlds exist, ad infinitum. Many more interesting examples of different kinds of combinations of the possibilities reviewed so far could be mentioned.
Another conceivable, yet supremely speculative, possibility worth contemplating is that the size of the universe is not set in stone, and that it may be up to us/the universe itself to determine whether it will be infinite, and what “kind” of infinity.
Lastly, it is also conceivable that the size of the universe, both in physical and axiological terms, cannot faithfully be conceived of with any concept available to us. So although the conceivable possibilities are infinite, it remains conceivable that none of them are “right” in any meaningful sense.
What Is the Case? — Infinite Uncertainty?
Unfortunately, we do not know whether the universe is infinite or not; or, more generally, which of the possibilities mentioned above are true of our condition. And there are reasons to think that we will never know with great confidence. For even if we were to somehow encounter a boundary encapsulating our universe, or otherwise find strong reasons for believing in one, how could we possibly exclude the possibility that there is something beyond that boundary? (Not to mention that the universe might still be infinitely divisible even if bounded.) Or, alternatively, even if we thought we had good reasons to believe that our universe is infinite, how can we be sure that the limited data we base that conclusion on can be generalized to locations arbitrarily far away from us? (This is essentially the problem of induction.)
Yet even if we did think we knew, with great confidence, whether the universe is infinite, the situation would arguably not be much different. For if we accept the proposition that we should have more than infinitesimal credence in any empirical claim about the world, what is known as Cromwell’s rule (I have argued that this applies to all claims, not just [stereotypically] “empirical” claims), then, on our general expected value framework, it would seem that any claim about the reality of infinite value outcomes should always be taken seriously, regardless of our specific credences in specific physical and axiological models of the universe.
In fact, not only should the conceivable realizations of infinity reviewed above be taken seriously (at least to the extent that they imply outcomes with infinite (dis)value), but so should a seemingly even more outrageous notion, namely that infinite (dis)value may rest on any particular action we take. However small a non-zero real-valued probability we assign such a claim — e.g. that the way you prepare your coffee tomorrow morning is going to impact an infinite amount of value — the expected value of getting that action, indeed any action, right remains infinite.
How should we act in light of this outrageous possibility?
Pascalian and Counter-Pascalian Claims
The problem, or perhaps our good fortune, is that, in most cases arguably, we do not seem to have reason to believe that one course of action is more likely to have an infinitely better outcome than another. For example, in the case of the morning coffee, we appear to have no more reason to believe that, say, making a strong cup of coffee will lead to infinitely more disvalue than making a mild one will, rather than it being the other way around. For such hypotheses, we seem able to construct an equal and oppositely directed counter-hypothesis.
Yet even if we concede that this is the case most of the time, what about situations where this is not the case? What about choices where we do have slightly better reasons to believe that one outcome will be infinitely better than another one?
This is difficult to address in the absence of concrete hypotheses or scenarios, so I shall here consider two specific cases, or classes of scenarios, where a plausible reason may be given for thinking that one course of action will influence infinitely more value than another. One is the case of an eternal civilization: our actions may impact infinite (dis)value by impacting whether, and in what form, an eternal civilization will exist in our universe.
In relation to the (extremely unlikely) prospect of the existence of such a civilization, it seems that we could well find reasons to believe that we can impact an infinite amount of value. But the crucial question is: how? From the perspective of negative utilitarianism, it is far from clear what outcomes are most likely to be infinitely better than others. This is especially true in light of the other class of ways in which we may plausibly impact infinite value that I shall consider here, namely by impacting the creation of, or the unfolding of events in, parallel universes, which may eventually be infinitely numerous.
For not only could an eternal civilization that is the descendant of ours be better in “our universe” than another eternal civilization that may emerge in our place if we go extinct; it could also be better with respect to its effects on the creation of parallel universes, in which case it may be normative for negative utilitarians to work to preserve our civilization, contrary to what is commonly considered the ultimate corollary of negative utilitarianism (and this could also hold true if the temporal extension of our civilization is bound to be finite). Indeed, this could be the case even if no other civilization were to emerge instead of ours: if the impact our civilization will have on other universes results in less suffering than what would otherwise be created naturally. It is, of course, also likely that the opposite is the case: that the continuation of our civilization would be worse than another civilization or no civilization. And I must admit that I have no idea what is more likely to be the case.
So in these cases, where reasons pointing more in one direction than another could plausibly be found, it is not clear which direction that would be, except perhaps in the direction that we should do more research on this question: which actions are more likely to reduce infinitely more suffering than others? Indeed, from the point of view of a suffering-focused expected value framework, it would seem that this should be our highest priority.
Ignoring Small Credences?
One may be skeptical of my claim above: can it really be true that the considerations, or at least my considerations, in the case of the continuation of civilization cancel out exactly? Is there not even the smallest difference? Not even a hunch?
In his paper on infinite ethics, Nick Bostrom argues that such an exact cancellation seems extraordinarily unlikely, and that small tips in balance seem to have counter-intuitive, if not catastrophic, consequences:
This cancellation of probabilities would have to be perfectly accurate, down to the nineteenth decimal place and beyond. […]
It would seem almost miraculous if these motley factors, which could be subjectively correlated with infinite outcomes, always managed to conspire to cancel each other out without remainder. Yet if there is a remainder—if the balance of epistemic probability happens to tip ever so slightly in one direction—then the problem of fanaticism remains with undiminished force. Worse, its force might even be increased in this situation, for if what tilts the balance in favor of a seemingly fanatical course of action is the merest hunch rather than any solid conviction, then it is so much more counterintuitive to claim that we ought to pursue it in spite of any finite sacrifice doing so may entail. The “exact-cancellation” argument threatens to backfire catastrophically.
I do not happen to share Bostrom’s view, however. Apart from the aforementioned bounding of the influence of expected value thinking, there is also a way to avoid such apparent craziness, of letting our actions rest on the slightest hunch, from within the expected value framework itself: disregarding sufficiently low credences.
Bostrom is skeptical of this approach:
As a piece of pragmatic advice, the notion that we should ignore small probabilities is often sensible. Being creatures of limited cognitive capacities, we do well by focusing our attention on the most likely outcomes. Yet even common sense recognizes that whether a possible outcome can be ignored for the sake of simplifying our deliberations depends not only on its probability but also on the magnitude of the values at stake. The ignorable contingencies are those for which the product of likelihood and value is small. If the value in question is infinite, even improbable contingencies become significant according to common sense criteria. The postulation of an exception from these criteria for very low-likelihood events is, at the very least, theoretically ugly.
Yet Bostrom here seems to ignore that “the value in question” is infinite for every action, cf. the point that we should maintain some small credence in every claim, including the claim that any given action may effect an infinite amount of (dis)value.
So in this way, no action we can point to is fundamentally different from any other. The only difference is what our credence is that a particular action may make “an infinite difference”, or that it makes “the greatest infinite difference”, compared to any other action. And when it comes to such credences, I would argue that it is eminently reasonable to ignore sufficiently small ones. In my view, not doing so would be the ugly thing, for the following reasons:
First, one could argue that, just as most models of physics break down beyond a certain range, it is reasonable to expect our ability to discriminate between different credence levels to break down when we reach a sufficiently fine scale. This is also well in line with the fact that it is generally difficult to put precise numbers on our credence levels with respect to specific claims. Thus, one could argue that we are way past the range of error of our intuitive credences when we reach the nineteenth decimal place.
This conclusion can also be reached via a rather different consideration: one can argue that our entire ontological and epistemological framework cannot itself be assumed credible with absolute certainty. It would thus seem that our entire worldview, including this framework of assigning numerical values, or indeed any order at all, to our credences, should itself be assigned some credence of being wrong. And one can then argue, quite reasonably, that once our credence in a given claim is lower than our credence in, say, the meaningfulness of ascribing credences in this way in the first place, that specific credence should be ignored, as it lies beyond the range over which we consider the framework reliable.
In sum, I think it is fair to say that, when we only have a tiny credence that some action may be infinitely better than another, we should do more research and look for better reasons to act on, rather than acting on these hunches. We can reasonably ignore exceptionally small credences in practice, as we indeed already do every time we make a decision based on calculations of finite expected values; we then ignore the tiny credence we should have that the value of the outcomes in question is infinite.
Another thing Bostrom treats in his paper, in fact its main subject, is whether the existence of infinite value implies, on aggregative consequentialist views, that it makes no difference what we do. As he puts it:
Aggregative consequentialist theories are threatened by infinitarian paralysis: they seem to imply that if the world is canonically infinite then it is always ethically indifferent what we do. In particular, they would imply that it is ethically indifferent whether we cause another holocaust or prevent one from occurring. If any non-contradictory normative implication is a reductio ad absurdum, this one is.
To elaborate a bit: the reason it is supposed to be indifferent whether we cause another holocaust is that the net sum of value in the universe supposedly is the same either way: infinite.
It should be noted, though, that whether this really is a problem depends on how we define and calculate the “sum of value”. And the question is then whether we can define this in a meaningful way that avoids absurdities and provides us with a useful ethical framework we can act on.
In my view, the solution to this conundrum is to give up our attachment to cardinal arithmetic. In a way, this is obvious: if you have an infinite set and add finitely many elements to it, you still have “the same as before”, in terms of the cardinality of the set. Yet, in another sense, we of course do not get “the same as before”, in that the new infinite set is not identical to the one we had before. Therefore, if we insist that adding another holocaust to a universe that already contains infinitely many holocausts should make a difference, we are simply forced to abandon standard cardinal arithmetic. In its stead, we should arguably just take our requirement as an axiom: that adding any amount of value to an infinity of value does make a difference — that it does change the “sum of value”.
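In cardinal terms, the difficulty, and the proposed axiom, might be sketched as follows (the Val notation is mine, purely illustrative):

```latex
% Standard cardinal arithmetic absorbs finite additions:
\[
  \aleph_0 + n = \aleph_0 \quad (\text{any finite } n),
  \qquad
  \aleph_0 + \aleph_0 = \aleph_0 ,
\]
% so a cardinal "sum of value" registers no difference when one more
% holocaust is added to infinitely many. The proposed axiom instead
% requires that adding any non-neutral element changes the sum:
\[
  v \neq 0
  \quad \Longrightarrow \quad
  \mathrm{Val}(W + v) \neq \mathrm{Val}(W).
\]
```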
This may seem simplistic, and one may reasonably ask how this “sum of value” could be defined. A simple answer is that we could add up whatever (presumably) finite difference we make within the larger (hypothetically) infinite world, and then consider that to be the relevant sum of value that should determine our actions, what has been referred to as “the causal approach” to this problem.
This approach has been met with various criticisms, one of them being that it leaves “the total sum of value” unchanged. As Bostrom puts it:
One consequence of the causal approach is that there are cases in which you ought to do something, and ought to not do something else, even though you are certain that neither action would have any effect at all on the total value of the world.
I fail to see the appeal of this criticism, however, not least because it is deceptively phrased. For how is the “total value of the world” defined here? It is not the case that “the total value of the world” is left unchanged on every possible definition of these terms; it just is on one particular definition, indeed one we have good reason to consider implausible and irrelevant. And the reason is that it implies that adding another holocaust makes no difference to the “total value of the world”. It then seems a strange move to say that it counts against a theory that it holds the prevention of finitely many holocausts to be normative because this has no “effect at all on the total value of the world” — by this very implausible definition. If forced to choose between these two mutually exclusive starting points — adding a holocaust makes a difference to the total value of the world or it does not — I think it is an easy choice. If we can help alleviate the extreme suffering of just a single being, while keeping all else equal, this being will hardly agree that “the total value of the world” was left unchanged by our actions. Not in any sensible sense.
More than that, I also think that for an ethical theory to say that we should ignore whatever lies outside our sphere of influence should not be considered a weakness, but rather a strength. Imagine by analogy a hypothetical Earth identical to ours, with the two exceptions that 1) it has been inhabited by humans for an eternal and unalterable past, over which infinitely many holocausts have taken place, and 2) it has a finite future; the universe it inhabits will end peacefully in a hundred years. Now, if people on this Earth held an ethical theory that does not take this unalterable infinite past into account, and instead focuses on the finite future, including preventing holocausts from happening in that future, would this count against that theory in any way? I fail to see how it could, and yet this is essentially the same as taking the causal approach within an infinite universe, only phrased more “unilaterally”, i.e. more purely in temporal rather than spatio-temporal terms.
Another criticism that has been leveled against the causal approach is that we cannot rule out that our causal impact may in some sense be infinite, and therefore it is problematic to say that we should just measure the world’s value, and take action based on, whatever finite difference we make. Here is Bostrom again:
When a finite positive probability is assigned to scenarios in which it is possible for us to exert a causal effect on an infinite number of value-bearing locations […] then the expectation value of the causal changes that we can make is undefined. Paralysis will thus strike even when the domain of aggregation is restricted to our causal sphere of influence.
Yet these claims actually do not follow. First, it should again be noted that the situation Bostrom refers to here is in fact the situation we are always in: we should always assign a positive probability to the possibility that we may effect infinite (dis)value. Second, we should be clear that the scenario where we can impact an infinite amount of value, and where we aggregate over the realm we can influence, is fundamentally different from the scenario in which we aggregate over an infinite universe that contains an infinite amount of value that we cannot impact. To the extent there are threats of “infinitarian paralysis” in these two scenarios, they are not identical.
For example, Bostrom’s claim that “the expectation value of the causal changes that we can make is undefined” need not be true even on standard cardinal arithmetic, at least in the abstract (i.e. if we ignore Cromwell’s rule), in the scenario where we focus only on our own future light cone. For it could be that the scenarios in which we can “exert a causal effect on an infinite number of value-bearing locations” were all scenarios that nonetheless contained only finite (dis)value, or, on a dipolar axiology, only a finite amount of disvalue and an infinite amount of value. A concrete example of the latter could be a scenario where the abolitionist project outlined by David Pearce is completed in an eternal civilization after a finite amount of time.
Hence, it is not necessarily the case that “paralysis will strike even when the domain of aggregation is restricted to our causal sphere of influence”, apart from in the sense treated earlier, when we factor in Cromwell’s rule: how should we act given that all actions may effect infinite (dis)value? But again, this is a very different kind of “paralysis” than the one that appears to be Bostrom’s primary concern, cf. this excerpt from the abstract of his paper Infinite Ethics:
Modern cosmology teaches that the world might well contain an infinite number of happy and sad people and other candidate value-bearing locations. Aggregative ethics implies that such a world contains an infinite amount of positive value and an infinite amount of negative value. You can affect only a finite amount of good or bad. In standard cardinal arithmetic, an infinite quantity is unchanged by the addition or subtraction of any finite quantity.
Indeed, one can argue that the “Cromwell paralysis” in a sense negates this latter paralysis, as it implies that it may not be true that we can affect only a finite amount of good or bad, and, more generally, that we should assign a non-zero probability to the claim that we can optimize the value of the universe everywhere throughout, including in those corners that seem theoretically inaccessible.
Adding Always Makes a Difference
As for the infinitarian paralysis supposed to threaten the causal approach in the absence of the “Cromwell paralysis” — how to compare the outcomes we can impact that contain infinite amounts of value? — it seems that we can readily identify reasonable consequentialist principles to act by that should at least allow us to compare some actions and outcomes against each other, including, perhaps, the most relevant ones.
One such principle is the one alluded to in the previous section: that adding something of (dis)value always makes a difference, even if the notional set we are adding it to contains infinitely many similar elements already. In terms of an axiology that holds the amount of suffering in the world to be the chief measure of value, this principle would hold that adding/failing to prevent an instance of suffering always makes for a less valuable outcome, provided that other things are equal, which they of course never quite are in the real world, yet they often are in expectation.
The following abstract example makes, I believe, a strong case for favoring such a measure of (dis)value over the cardinal sum of the units of (dis)value. As I formulate this thought experiment, this unit will, in accordance with my own view, be instances of intense suffering in the universe, yet the point applies generally:
Imagine that we have a universe with a countably infinite number of instances of intense suffering. We may visualize this universe as a unit ball. Now imagine that we perform an act in this universe that leaves the original universe unchanged, yet creates a new universe identical to the first one. The result is a new universe full of suffering. Imagine next that we perform this same act in a world where nothing exists. The result is exactly the same: the creation of a new universe full of suffering, in the exact same amount. In both cases, we have added exactly the same ball of infinite suffering. Yet on standard cardinal arithmetic, the difference the act makes in terms of the sum of instances of suffering is not the same in the two cases. In the first case, the total sum is the same, namely countably infinite, while there is an infinite difference in the second case: from zero to infinity. If we only count the difference added, however — the “delta universe”, so to speak — the acts are equally disvaluable in the two cases. The latter method of evaluating the (dis)value of the act seems far more plausible than does evaluation based on the cardinal sum of the units of (dis)value in the universe. It is, after all, the exact same act.
This is not an idle thought experiment. As noted above, impacting the creation of new universes is one of the ways in which we may plausibly be able to influence an infinite amount of (dis)value. Arguably even the most plausible one. Admittedly, it does rest on certain debatable assumptions about physics, yet these assumptions seem significantly more likely than does the possibility of the existence of an eternal civilization. For even disregarding specific civilization-hostile facts about the universe (e.g. the end of stars and a rapid expansion of space that is thought to eventually rip ordinary matter apart), we should, for each year in the future, assign a probability strictly lower than 1 that civilization will survive that year; and as long as these probabilities stay bounded away from 1, the probability of extinction will become arbitrarily close to 1 within a finite amount of time.
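The extinction argument above can be illustrated with a simple back-of-envelope calculation. The annual survival probability used below is an arbitrary illustrative assumption, not an estimate:

```python
# Back-of-envelope sketch of the extinction argument. The annual survival
# probability is an arbitrary assumption chosen only for illustration.
annual_survival = 0.9999  # i.e. a 0.01 percent chance of extinction each year

for years in (10_000, 100_000, 1_000_000):
    p_survival = annual_survival ** years
    print(f"P(surviving {years:>9,} years) = {p_survival:.3e}")
```

As long as the annual extinction probability stays bounded away from zero, the cumulative survival probability decays geometrically toward zero, however life-friendly the universe remains.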
In other words, an eternal civilization seems immensely unlikely, even if the universe were to stay perfectly life-friendly forever. The same does not seem true of the prospect of influencing the generation of new universes. As far as I can tell, the latter is in a ballpark of its own when it comes to plausible ways in which we may be able to effect infinite (dis)value, which is not to say that universe creation is more likely than not to become possible, but merely that it seems significantly more likely than other ways we know of in which we could effect infinite (dis)value (though, again, our knowledge of “such ways” is admittedly limited at this point, and something we should probably do more research on). Not only that, it is also something that could be relevant in the relatively near future, and more disvalue could depend on a single such near-future act of universe creation than what is found, intrinsically at least, in the entire future of our civilization. Infinitely more, in fact. Thus, one could argue that it is not our impact on the quality of life of future generations in our civilization that matters most in expectation, but our impact on the generation of universes by our civilization.
Universe Anti-Natalism: The Most Important Cause?
It is therefore not unthinkable that the main question of concern for consequentialists, when evaluating any action, should be: how does this action impact the creation of new universes? Or, similarly, that trying to impact future universe generation should be the main cause for aspiring effective altruists. And I would argue that the form this cause should take is universe anti-natalism: avoiding, or minimizing, the creation of new universes.
There are countless ways to argue for this. As Brian Tomasik notes, creating a new universe that in turn gives rise to infinitely many universes “would cause infinitely many additional instances of the Holocaust, infinitely many acts of torture, and worse. Creating lab universes would be very bad according to several ethical views.”
Such universe creation would obviously be wrong from the stance of negative utilitarianism, as well as from similar suffering-focused views. It would also be wrong according to what is known as The Asymmetry in population ethics: that creating beings with bad lives is wrong, and something we have an obligation to not do, while failing to create happy lives is not wrong, and we have no obligation to bring such lives into being. A much weaker, and even less controversial, stance on procreative ethics could also be used: do not create lives with infinite amounts of torture.
Indeed, how, we must ask ourselves, could a benevolent being justify bringing so much suffering into being? What could possibly justify the Holocaust, let alone infinitely many of them? What would be our answer to the screams of “why” to the heavens from the torture victims?
Universe anti-natalism should also be taken seriously by classical utilitarians, as a case can be made that the universe is likely to end up being net negative in terms of algo-hedonic tone. For instance, it may well be that most sentient life that will ever exist will find itself in a state of natural carnage. Civilizations may be rare even on planets where sentient life has emerged, and even where civilizations have emerged, they may be unlikely to be sustainable, perhaps overwhelmingly so. This implies that most sentient life might be expected to exist at the stage it has occupied for the entire history of sentient life on Earth: a stage where sentient beings are born in great numbers only for the vast majority of them to die shortly thereafter, for instance due to starvation or by being eaten alive, which is most likely a net negative condition, even by wishful classical utilitarian standards. Simon Knutsson’s essay How Could an Empty World Be Better than a Populated One? is worth reading in this context, and of course applies to “no world” as well.
And if one takes a so-called meta-normative approach, where one decides by averaging over various ethical theories, one could argue that the case against universe creation becomes significantly stronger, for instance if one combines an unclear or negative-leaning verdict from a classical utilitarian stance with The Asymmetry and Kantian ethics.
As for those who hold anti-natalism at the core of their values, one could argue that they should make universe anti-natalism their main focus over human anti-natalism (which may not even reduce suffering in expectation), or at the very least expand their focus to also encompass this apparently esoteric position. Not only is the scale potentially unsurpassable in terms of what prevents the most births, but the cause may also be easier to advance: wishful thinking along the lines of “those horrors will not befall my creation” should be harder to maintain in the face of horrors that we know have occurred in the past, and we do not seem as attached and adapted, biologically and culturally, to creating new universes as we are to creating new children. And just as anti-natalists argue with respect to human life, being against the creation of new universes need not be incompatible with a responsible sustainment of life in the one that does exist. This might also be a compromise solution that many people would be able to agree on.
Are Other Things Equal?
The discussion above assumes that the generation of a new universe would leave all else equal, or at least leave all else merely “finitely altered”. But how can we be sure that the generation of a new universe would not in fact prevent the emergence of another? Or perhaps even prevent many infinite universes from emerging? We can’t. Yet we do not appear to have any reason for believing that this is the case. As noted above, all else will often be equal in expectation, and that also seems true in this case. We can make counter-Pascalian hypotheses in both directions, and in the absence of evidence for any of them, we appear to have most reason to believe that the creation of a new universe results, in the aggregate, in a net addition of a new universe. But this could of course be wrong.
For instance, artificial universe creation would be dwarfed by the natural universe generation that happens all the time according to inflationary models, so could it not be that the generation of a new universe might prevent some of these natural ones from occurring? I doubt that there are compelling reasons for believing this, but natural universe generation does raise the interesting question of whether we might be able to reduce the rate of this generation. Brian Tomasik has discussed the idea, yet it remains an open, and virtually unexplored, research question. One that could dominate all other considerations.
It may be objected that considerations of identical, or virtually identical, copies of ourselves throughout the universe have been omitted in this discussion, yet as far as I can tell, including such considerations would not change the discussion in a fundamental way. For if universe generation is the main cause and most consequential action to focus on for us, more important even than the intrinsic importance of the entire future of our civilization, then this presumably applies to each copy of ourselves as well. Yet I am curious to hear arguments that suggest otherwise.
A final point worth adding here is that the points made above may apply even if the universe is, and only ever will be, finite, as the generation of a new finite pocket universe could in that case still bring about far more suffering than what is found in the future light cone of our own universe.
Implications for Artificial Intelligence in Brief
The prospect of universe generation, and the fact that it may dominate everything else, also seems to have significant implications for our focus on the future of artificial intelligence. One of them, as hinted above, is that altruists should perhaps not focus on artificial intelligence as their main cause: for instance, if artificial intelligence is sufficiently unlikely to ever “take over” in the way that is often feared, or if focusing directly on researching or arguing against universe generation has higher expected value. (This is also why we should be careful about claiming that it is clear that altruists should focus on artificial intelligence, as we may thereby risk overlooking crucial considerations.)
Moreover, it suggests that, to the extent altruists indeed should focus primarily on artificial intelligence, this would be to the extent that artificial intelligence will determine the rate of universe generation. Determining this rate might be the main thing to focus on when implementing “Fail-Safe” measures in artificial intelligence, or in any kind of future civilization, to the extent implementation of such measures is feasible.
In conclusion, the subjects of the potential to effect infinite (dis)value in general, and of impacting universe generation in particular, are extremely neglected at this point, and a case can be made that more research into such possibilities should be our top priority. It seems conceivable that a question related to such a prospect — e.g. should we create more universes? — will one day be the main ethical question facing our civilization, perhaps even one we will be forced to decide upon in a not too distant future. Given the potentially enormous stakes, it seems worth being prepared for such scenarios — including knowing more about their nature, how likely they are, and how to best act in them — even if they are unlikely.
I recently took part in a panel discussion, alongside Leah Edgerton, Tobias Leenaert, Oscar Horta, and Jens Tuider (moderator), on whether animal advocates should focus on veganism or anti-speciesism (I’ve outlined my own view here). In my opinion, the discussion went well, not least because there was a sense of a shared underlying goal among the panelists, as well as a high level of intellectual openness, humility, and friendliness.
Unfortunately, yet predictably, the limited time available for each person to speak in such a panel discussion meant that I didn’t get to make half of the points I wanted to (in spite of the fact that I, rather uncourteously, seemed to speak for a disproportionately large share of the discussion; my passion for the subject got the better of me, I’m afraid). And given that I had these unshared points written down already, it seemed worthwhile to publish them here for everyone to read.
Main Points: Scale and Receptivity
Two main points in favor of anti-speciesist advocacy that I did get to make, albeit briefly, have to do with scale and receptivity. In terms of scale, anti-speciesist advocacy is better than vegan advocacy, as well as other forms of advocacy that focus only on beings exploited by humans, in that it pertains to all non-human animals, including those who live in nature.
At an intuitive level, this may seem like a small point in favor of anti-speciesist advocacy. “+1 to anti-speciesist advocacy for being better in terms of scale.” Yet to think in this way is to fail to appreciate the actual numbers. Just as the much greater number of “farm animals” compared to the number of “pets” is a huge rather than small point in favor of focusing on the former rather than the latter in our advocacy, the much greater number of beings that anti-speciesist advocacy pertains to is an extremely significant point in its favor.
And, in terms of numbers at least, this analogy is actually strikingly accurate: the number of “farm animals” is, on some estimates, about a thousand times greater than the number of “pets”, while the number of non-human animals in nature is about a thousand times greater than the number of “farm animals”. A thousand times is a lot, and yet this is only counting vertebrates; the number is much greater if we include invertebrates in our considerations as well, as we should. In other words, if we include invertebrates in our considerations, the analogy to the ratio between “farm animals” and “pets” is actually much too weak. Yet our intuitions have a hard time appreciating such big numbers. Especially when the beings in question live in nature.
Thus, in terms of scale, the actions of many aspiring effective animal advocates may be more akin to donations to local animal shelters than they would like to think. This is not surprising. We humans are notorious group thinkers, and the animal movement has traditionally focused only on beings exploited by humans. Consequently, we should expect this history to bias us strongly toward that focus (objections to this controversial paragraph, e.g. “we should focus on beings exploited by humans first”, may be found answered here and here, as well as in the section on objections below).
The other main point in favor of anti-speciesist advocacy has to do with people’s receptivity toward anti-speciesist advocacy. In light of the above, one may think “sure, anti-speciesist advocacy is best in terms of scale, but will people be receptive to such advocacy? Isn’t it too abstract?”
This is an empirical question, and more research on it is urgently needed. Yet there are at least tentative reasons for thinking that people in fact are receptive to such advocacy, perhaps even more so than toward most other forms of advocacy. One line of evidence comes from Oscar Horta, who has delivered talks on speciesism and conducted surveys after these talks, whereby he found that, surprisingly, “most people who attended these talks accepted the arguments against speciesism.” Horta made further interesting findings, including that a focus on speciesism may be the best way to promote veganism, yet given that I have already reported on some of these findings elsewhere, and linked to his own report above, I shall not delve further into them here.
Another line of evidence comes from a study conducted by Vegan Outreach in 2016, in which they tested four different booklets against each other, one of which focused on the case against speciesism (another one was centered around a “reduce your consumption” message, another on the harms that “farm animals” suffer), and then examined which of them led to the greatest reduction in consumption of “animal products”. The results, in a nutshell, were that all the booklets caused a significant reduction in such consumption among readers, and that the booklet that focused on speciesism did the best of all the booklets, although the difference was not statistically significant.
In light of this (admittedly limited) data, we have reasons to think that, even if our only goal were to make people reduce their consumption of “animal products”, focusing on the case against speciesism is at least roughly as good as other, more traditional forms of advocacy.
And yet such a narrow focus cannot be defended. As I also argued during the panel discussion, we have an unfortunate tendency in our movement to view “total consumption of animal products” as a good measure of the quality of the (non-human) sentient condition on the planet, or at least of “how good we’re doing”. It is not. It only says something about a tiny fraction of the non-human beings on the planet, and we cannot defend excluding the rest, those not exploited directly by humans, from our considerations. This is not to say that such consumption is not an important measure to look at, merely that it is hopelessly insufficient.
In conclusion, when we combine these two considerations — a much greater scope in terms of the number of beings our advocacy pertains to, as well as a level of receptivity toward anti-speciesist advocacy that seems roughly as good as that of other forms of advocacy; and perhaps it is even better — we seem to have good reason to focus on anti-speciesist advocacy. And if we then factor in the neglectedness of such advocacy compared to the forms of advocacy and tactics we have traditionally been pursuing, including tech innovation such as in vitro meat, which has millions of US dollars in funding, the case becomes stronger still.
Objections Against Doing Anti-Speciesist Advocacy
But What About the Tractability of the Problem of Suffering in Nature?
While it is true that anti-speciesist advocacy seems optimal in terms of scale because it also includes wild animal suffering, one may object that the tractability of suffering in nature has been left out of the picture in this analysis.
In response to this, I would argue that the tractability of the problem of suffering in nature is highly uncertain at this point. Yet given that the number of “wild animals” is more than a thousand times that of “farmed animals”, the tractability of the problem of suffering in nature would have to be more than a thousand times lower (to the extent we can meaningfully say such a thing) than the tractability of the problem of suffering for “farmed animals” in order for it to make sense to focus on the latter over the former. It is far from clear to me that this is the case. More than that, this all seems to rest on an assumed need to focus on one over the other, which leads to the second point I would make in response to this objection.
For I do not believe we have to focus on one over the other. Anti-speciesist advocacy defends both “farmed animals” and “wild animals”, and, as seen above, it may be as successful with regard to the former as other forms of advocacy, implying that, even given high uncertainty concerning the tractability of wild animal suffering, anti-speciesist advocacy still seems a strategy worth pursuing. Again, in light of the notes on receptivity above, one could make a case that, even if we only cared about the wrongs done to beings exploited by humans, we should focus on anti-speciesist advocacy.
Similarly, even if there were a conflict between a focus on “wild animals” versus “farm animals”, and even if suffering in nature indeed were a thousand times as intractable as suffering caused by direct human exploitation, the much greater neglectedness of wild animal suffering would still make a case for doing advocacy that pertains to it, as anti-speciesist advocacy does.
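The scale-versus-tractability trade-off discussed above can be sketched as a crude expected-value comparison. All numbers below are illustrative placeholders; only the roughly thousandfold population ratio is taken from the text:

```python
# Crude model: expected impact = number of beings x tractability.
# The populations are normalized; the tractability figures are hypothetical.
farmed_population = 1.0
wild_population = 1_000.0  # wild vertebrates outnumber farmed ones ~1,000 to 1

def expected_impact(population: float, tractability: float) -> float:
    return population * tractability

farmed_tractability = 1.0                      # normalized baseline
wild_tractability = farmed_tractability / 100  # suppose helping wild animals is 100x harder

print("wild:", expected_impact(wild_population, wild_tractability))
print("farmed:", expected_impact(farmed_population, farmed_tractability))
```

In this toy model, a focus on farmed animals only comes out ahead once wild-animal tractability drops below one-thousandth of farmed-animal tractability, which is the threshold referred to above.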
I Don’t Think Non-Human Individuals in Nature Have Net Negative Lives
Opposing discrimination against individuals in nature in general, and defending the claim that we should help them to the extent we can in particular, does not rest on the claim that such beings live net negative lives, any more than the claim that we should not discriminate against other human individuals, and help them when we can — for instance under circumstances of famine or other catastrophes — rests on the claim that such humans have net negative lives.
(That being said, I have made a theoretical case for wildlife anti-natalism here, in which I argue that merely applying a non-speciesist position on procreative ethics implies that we should, in theory/if we can keep other things equal, prevent the births of the vast majority of non-human individuals in nature. More than that, I think we do tend to significantly underestimate how bad [at least many of] the lives of non-human beings in nature in fact are.)
Another point I would make in response to this claim is the following. Even on the conservative assumption that only one in ten non-human individuals in nature has a life as bad as that of the average non-human individual cursed to live out their life on a factory farm, the big difference in the number of beings in nature versus on factory farms still implies that there are more than a hundred times as many non-human beings living very bad lives in nature as there are on factory farms. Even given such a relatively small “concentration of suffering” in nature, then, the greatest opportunity for reducing total suffering still lies there.
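The arithmetic behind this point can be made explicit; the only inputs are the roughly thousandfold population ratio and the assumed one-in-ten “concentration of suffering”:

```python
# Both inputs are taken from the paragraph above.
population_ratio = 1_000  # wild animals per factory-farmed animal (vertebrates only)
bad_life_fraction = 0.1   # conservative assumption: 1 in 10 wild lives is comparably bad

# Very bad wild lives per very bad factory-farm life:
ratio_of_bad_lives = population_ratio * bad_life_fraction
print(ratio_of_bad_lives)
```

Even with this deliberately conservative fraction, nature contains on the order of a hundred times as many very bad lives as factory farms do, and the figure only grows if invertebrates are included.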
Isn’t Anti-Speciesism too Abstract?
More specifically: don’t we risk turning people off by seeming to claim that, say, a mosquito has the same moral value as an elephant?
I would make a few distinct points in response to this objection. First, to the extent this is a problem, we can say that anti-speciesism does not imply that all beings should be prioritized equally, just as total opposition to discrimination within the human species does not imply that, say, a human fetus has the same moral value as an adult human individual. The specific traits of a being do matter, and anti-speciesism does not demand that we overlook these differences, but rather that we prioritize equal interests equally.
Second, I would argue that, to the extent anti-speciesism promotes more concern for smaller beings compared to other forms of advocacy, this is actually one of its main strengths rather than a weakness, as we generally underestimate the moral value of small beings. One way to see this is to consider the numbers. If we take fish, for instance, it is estimated that there are 10,000 times as many fish on the planet as there are humans, yet fish do not tend to weigh correspondingly heavily on our moral scale, even among animal advocates.
And if we consider invertebrates, our focus seems even more misaligned still, as it is estimated that there are about ten quintillion insects on the planet, ten to the power of nineteen, and yet we fail to take them seriously in moral terms for the most part. One might then object that the number of beings is not a good measure of moral value. Rather, one may argue, we should look at the total number of neurons for a better measure. Yet even if we adopt this as a proxy for moral value, the moral weight of the insect realm still appears staggering, as there are, on a rough estimate at least, a hundred times more insect neurons on the planet than there are human neurons.
(I am not claiming that the number of neurons is a perfect proxy for moral value by any means, but merely that no matter which of these simple, and probably not entirely meritless, measures we use, we appear to underestimate small beings a lot; Brian Tomasik’s Is Brain Size Morally Relevant? is quite apropos here, although I should note that I strongly disagree with his view of consciousness.)
Why we underestimate smaller beings is a question worth pondering, I think, and I believe we can readily identify at least three reasons. First, small beings, such as fish and insects, tend to be more numerous, which makes greater moral concern for them inconvenient, and we are generally biased against inconvenient updates. Second, smaller beings are generally very different from us in terms of what their bodies look like, which makes it more difficult to empathize with them, even disregarding the size difference. For instance, feeling empathy for a chimpanzee-sized insect or fish seems more challenging than feeling it for a chimpanzee. Third, the size difference itself seems likely to make us more biased against smaller beings as well. Compare the difficulty of feeling compassion for a normal-sized chimpanzee versus feeling it for an ant-sized chimpanzee. Or for a lobster versus an ant; the latter actually has more than twice as many neurons as the former.
Another distinct point I would make in relation to this objection is that the case against speciesism is very similar, in terms of its form, to the case against racism, and most people seem to accept the latter today, implying that there may be much ready potential we can tap into here. The argument against racism does not seem too intellectually advanced for most people, which provides an additional reason to question the intuitive assumption that the case against speciesism necessarily must be too advanced or abstract for most people to follow it (along with the non-peer-reviewed studies cited above that tentatively hint the same). More than that, the philosophical case against speciesism also happens to be exceptionally strong, much stronger than we animal advocates tend to realize — the literature that argues in favor of speciesism is surprisingly thin and weak — and I think we ignore this strength at our peril. We have a powerful tool at our disposal that we refuse to employ.
Anti-Speciesism Is Often Better than (Naive) Consequentialist Calculations
If one is a wannabe consequentialist rationalist, it is easy to be misguided about where much of our moral wisdom comes from, by imagining that we have gained it via clever deductive consequentialist analyses. Yet for the most part, this is not the case. Our rejection of racism today, for instance, is mostly due to cultural evolution, including lessons from history, that has accumulated gradually; it has not primarily been due to consequentialist arguments (to the extent arguments have played a crucial role, it seems to me that they have rather rested on consistency). As a result, we have now arrived at a moral wisdom that is deeper, I believe, than what a simple chain of consequentialist reasoning could have readily produced prior to this cultural change (after all, how would you make a solid consequentialist case that human slavery is wrong? It is not easy. And if you can, would it apply equally to the property status of non-human individuals? If not, why?).
And I think the same applies to anti-speciesism: it tends to be wiser than naive consequentialist analyses. It provides us with a free download of the full package of the moral progress we have made over the last few centuries with respect to human individuals, ready for us to apply to non-human individuals by simply using the heuristic “what would we do if they were human?” With this package installed, we can quickly gain wise views on many ethical issues pertaining to non-human beings, including “happy meat” and veganism — it provides a clear case against and for them respectively. One could be forced to spend a long time arguing for these conclusions otherwise, if one were to insist on employing directly consequentialist arguments, even though these conclusions arguably are what a complete consequentialist analysis would recommend (I believe Brian Tomasik would mostly disagree, although he would do so for complicated reasons).
New Information: Have We Updated Sufficiently?
Something I think we should be wary of is the following pattern: we build up our views on a matter over a long period of time, then encounter a new piece of crucial information that changes our outlook completely, yet we fail to properly update the views we built and consolidated in the absence of this crucial information.
To be more concrete: I think many animal advocates have spent a lot of time thinking hard about how to best advocate for non-human animals so as to reduce their suffering as much as possible. Unfortunately, what they have been thinking hard about has “merely” been what we should do in order to reduce the suffering of non-human beings exploited by humans, and they have then built up their preferred strategy for advocating for non-human individuals based on this outlook. A positive thing that has then happened for many of these advocates is that they have become convinced of the importance of wild animal suffering; this would be the “crucial piece of information” in the more general statement of this “updating problem” above.
This has changed the outlook of these advocates completely in some ways, yet it seems to me that their preferred strategy in terms of advocacy has remained suspiciously unchanged, which should give them pause. We have had our minds opened to a piece of information that changes everything: the vast majority of beings are not found in the realm we have been focusing on for all these years. Yet the ideal form of advocacy somehow remains exactly the same as before we came upon this information: advocacy that pertains exclusively to the beings we used to think were the only beings to whom we owed any obligations.
This makes little sense, although one can say that, in one sense, it makes perfect sense. A view on a matter that one has built over many years is unlikely to be changed overnight, especially if one has thought a lot about it. Yet this is a psychological explanation of the phenomenon in question; it is not an explanation that defends it as reasonable in any way.
In conclusion, I would encourage all animal advocates to reflect upon whether they have factored in the obligations we owe to non-human individuals in nature in their current view of the best advocacy strategies and tactics. As far as I can tell, virtually none of us have.
Are there different possible outcomes of the future? Or phrased more broadly: do ontological possibilities exist? I think this is a profound question, and much may depend upon our answer to it. For instance, if ‘ought’ truly implies ‘can’, and the only real ‘can’ there is is whatever in fact happens, it would seem to follow that the only thing that ought to happen is what in fact happens. That whatever happens is what ought to happen, if anything.
Ontological possibilities here stand in opposition to what we may call hypothetical possibilities. That is, we are clearly able to think in terms of different outcomes being possible, and to then plan and take action based on such thinking, but that does not imply that those outcomes were ever actual possibilities, as opposed to purely thought up ones that just serve as a thinking tool.
It is worth noting that there seems to be a contradiction between two widely shared views that pertain to the existence of ontological possibilities. For on the one hand, we have what appears to be a widely accepted distinction between necessary and contingent truths: necessary truths being ones that must be true because negating them would imply a contradiction with reality (commonly cited examples are 2+2=4 and syllogisms), while a contingent truth is one that could have been false, as its negation (supposedly) does not imply a contradiction with reality (e.g. “Life evolved on Earth”, “Hillary Clinton lost the 2016 election”).
Yet this does not seem consistent with another prevailing belief, namely that the entire world unfolds according to deterministic mathematical equations. If the latter is true, then any truth about the world would appear a necessary one, as its negation indeed would imply a contradiction with the fundamental equation(s) just as much as 2+2=5 does.
So what are we to make of this? What could possibly be true about ontological possibilities?
Why Ask This Question?
My reason for examining this question here has to do with ethics – more specifically, it has to do with an objection one might be tempted to level against the sensibility and meaningfulness of ethics. For, cf. the ought-implies-can note above, one might claim that ontological possibilities are necessary in order for ethics proper to get off the ground, and indeed for engagement in ethical reasoning, decision-making, and action of any kind to even make sense.1 A combustible and controversial claim for sure, yet I shall entertain no discussion of it here. Instead, my aim here is to argue that if one thinks ontological possibilities are required for “the meaningfulness of ethics” – i.e. required in order for engagement in ethical reasoning, decision-making, and action to make sense – then one cannot reasonably reject such “meaningfulness” with the claim that ontological possibilities are not real. The reason being, in short, that we simply do not know whether such possibilities are real or not, and, as far as I can tell, we all but surely never will.
The Nature of the World: Does Quantum Mechanics Preclude – or Describe – Ontological Possibilities?
In order to say whether ontological possibilities exist, it seems apt to look toward what is arguably our most fundamental and well-tested theory of the world, namely quantum mechanics, and see what it has to say on the matter.
The answer is that it depends on which interpretation of quantum mechanics is correct, a matter concerning which there is much disagreement among experts. And in light of such disagreement, it only seems reasonable to maintain a substantial degree of uncertainty concerning which interpretation is correct.
As for how one should break this uncertainty down more specifically, there seems to be plenty of room for reasonable disagreement. For instance, assuming the formalism of quantum mechanics indeed does describe the world in the first place, it seems defensible to place 50 percent probability on the claim that none of the well-known interpretations are correct, and to distribute the remaining 50 percent of one’s credence among the (more or less) mainstream interpretations. This could then be done based on how many subscribers these respective interpretations have among experts, in which case it seems one should assign roughly equal credence to the so-called Many Worlds Interpretation and the Copenhagen Interpretation – perhaps 20 percent to each – while distributing the remaining 10 percent among the remaining ~10 interpretations, resulting in about one percent credence in each of them.
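As a quick arithmetic check, the illustrative breakdown just described can be sketched in a few lines of Python. The figures are the example numbers from the text, not established estimates, and the labels for the lesser-known interpretations are placeholders:

```python
# Illustrative credence breakdown over interpretations of quantum mechanics.
# All numbers are the essay's own example figures, not established estimates.
credences = {
    "no currently known interpretation is correct": 0.50,
    "Many Worlds Interpretation": 0.20,
    "Copenhagen Interpretation": 0.20,
}

# Spread the remaining 10 percent evenly over ~10 lesser-known interpretations.
remaining = 1.0 - sum(credences.values())
n_other = 10
for i in range(n_other):
    credences[f"other interpretation {i + 1}"] = remaining / n_other

assert abs(sum(credences.values()) - 1.0) < 1e-9  # credences sum to one
print(round(credences["other interpretation 1"], 3))  # 0.01, i.e. about one percent
```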
That would be one way one could do it. I don’t necessarily agree with the exact numbers in this distribution; it is just an example that seems within the bounds of defensibility. What does not seem defensible, however, is to maintain complete certainty in the veracity of any one particular interpretation. And this is true for various reasons. For not only are there many competing interpretations that all seem to have at least some strengths and weaknesses, and not only can we not rule out the possibility that the correct interpretation has yet to be formulated; the possibility that no interpretation of the formalism is true also seems very much an open one.
After all, we have seen it happen before that a prominent physicist considered the, admittedly impressive, canon of physics of his time complete but for a couple of anomalies, only for those anomalies to then revolutionize physics completely within a few years. How can we maintain near-complete confidence that the same could not happen with the physics of our time? Indeed, who is to say that quantum mechanics isn’t still just one of the outer layers of the onion, no less a parochial approximation of the underlying dynamics of reality than classical mechanics ultimately proved to be?
Where does all this leave us with respect to ontological possibilities? It leaves us with substantial uncertainty. In particular, it leaves us uncertain about whether the world unfolds according to deterministic mathematical equations or not. Out of the 14 (at the time of writing) established interpretations of quantum mechanics listed on this Wikipedia page, only four are deterministic, seven are indeterministic, while three are agnostic. Consequently, although the deterministic interpretations include the relatively popular Many Worlds Interpretation in which all the mathematically possible outcomes of the formalism are realized, it seems overconfident to have anything near complete certainty that the right interpretation, to the extent there is one, is deterministic.
Indeed, among the 14 interpretations cited above, one of them, the transactional interpretation, is, at least as its proponent Ruth Kastner lays it out and defends it, explicitly realist about ontological possibilities – a “Many Possibilities Interpretation”, if you will.2 Thus, according to this interpretation, the mathematical formalism of quantum mechanics is in fact the mathematics of ontological possibilities (apropos, perhaps also see possibility theory).
Now, my purpose here is not to settle which interpretation is most plausible, much less which one is correct, but simply to argue that near-total confidence in the truth of deterministic interpretations is not defensible. On the rough sketch of a credence breakdown above, for instance, the transactional interpretation was given about one percent probability, and I don’t think one can defend giving it several orders of magnitude less than that. And even if one could, there are still other indeterministic interpretations that also deserve at least some weight, as well as a big space of “yet unknown interpretations that could be true” – the space that was assigned a 50 percent probability above – of which a substantial chunk should be expected to consist of indeterministic interpretations.
Combining these considerations, it seems overconfident to maintain a 99 percent credence in the proposition that the right interpretation of quantum mechanics is deterministic, and the same can reasonably be said, I believe, of any such credence above 95 percent as well.3
One could of course argue for uncertainty about this question through other routes as well, such as by invoking the idea that we always should maintain at least some degree of doubt, however minute, about any claim (an idea I have defended here), or the related claim that our entire conception of reality – not merely our physics – might be wrong, or perhaps not even that. Yet, as the argument above should make clear, one need not resort to these claims in order to reach the conclusion that it is reasonable to have some degree of uncertainty regarding the matter of determinism and ontological possibilities, and, what is more, that this uncertainty is in fact relatively substantial, i.e. not near-zero.
Implications for (Objections Against) the Meaningfulness of Ethics
The conclusion above implies that, if ontological possibilities are required for “the meaningfulness of ethics” – the conditional assumption that was our point of departure – then, given the uncertainty we should maintain concerning the reality of such possibilities, a rejection of ethics on the basis that ontological possibilities do not exist is unwarranted.
To be sure, one may reasonably argue that, if ontological possibilities are required for “the meaningfulness of ethics”, then it seems likely that no such meaningfulness exists. Yet that is a far cry from a refutation of such meaningfulness. For consider by analogy the claim that risks of terrible future outcomes are low, and hence that such outcomes most likely will not be realized. Even if such a claim were true, it by no means follows that such risks can reasonably be dismissed.
When the stakes are sufficiently high, it is not reasonable to dismiss low probabilities. And when we are discussing the meaningfulness of ethics, it seems that the stakes could not possibly be greater, as the subject in question comes down to the difference between whether there indeed are any stakes – and who knows how big they might be – or no stakes at all. In light of such stakes, even extremely low probabilities should be taken seriously; and yet the level of uncertainty we found reasonable to maintain concerning ontological possibilities was not extremely low, much less Pascalian by any stretch.4 Thus, if ontological possibilities are indeed required for the meaningfulness of ethics, the epistemic possibility of the reality of such ontological possibilities should be taken very seriously indeed.
Moreover, when considering the outcomes of the options before us, an emerging asymmetry appears to make the choice clear. For if ontological possibilities are real, and ethical action indeed amounts to trying to realize the best of these possibilities – to create the best ontologically possible world, if you will, or at the very least avoid the worst ones – it would seem that we have good reason to try to act accordingly, and no compelling reason not to. If, on the other hand, ontological possibilities are not real, trying to realize the best possible world appears to have no cost. We thus seem to have a strong reason in favor of trying to create a better world and no reason against it.
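The asymmetry just described is, in decision-theoretic terms, a weak-dominance argument, which can be made explicit with a toy payoff table. The numeric payoffs below are arbitrary placeholders of my own; only their ordering is assumed:

```python
# Toy payoff table for the asymmetry argument: trying to create a better
# world is at least as good in every state, and strictly better in one.
# The numeric payoffs are arbitrary placeholders; only their ordering matters.
payoffs = {
    "ontological possibilities real":     {"try": 1, "do not try": 0},
    "ontological possibilities not real": {"try": 0, "do not try": 0},
}

# "Try" weakly dominates: never worse, and strictly better in some state.
weakly_better = all(row["try"] >= row["do not try"] for row in payoffs.values())
strictly_better_somewhere = any(row["try"] > row["do not try"] for row in payoffs.values())

print(weakly_better and strictly_better_somewhere)  # True
```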
Lastly, if we entertain the negation of the assumption that served as our starting point, namely that ontological possibilities are not required for the meaningfulness of ethics – again without saying whether this claim is true or not – we appear to arrive at the same conclusion: we have no reason to consider ethics meaningless or to not try our best. In conclusion, no matter our starting point, the meaningfulness of ethics seems on firm ground.
1. Alternatively, I could also say “the meaningfulness of trying to improve the world” or of “trying to improve one’s own situation.” These things ultimately all mean the same in my view, cf. You Are Them. Yet even if one considers these statements different, the argument I make here still applies equally to them all. That is, one can readily swap, say, “the meaningfulness of ethics” with “the meaningfulness of trying to improve one’s own situation”, and the argument would run just the same.
2. Although it should be noted that other indeterministic interpretations also hold ontological possibilities to be real, at least implicitly.
3. I have not discussed eternalism here, which is a deterministic view, but suffice it to say that, at the very least, one ought to maintain substantial doubt on this matter as well, one reason being that the mere fact that the physical equations do not appear to require an ontologically real present does not, contrary to what seems widely believed, imply that there is no ontologically real present. Much confusion about this issue seems to emerge from the belief that “ontological present” must mean “all clocks show the same”; presentism in that sense is surely as dead as can be, but presentism in general is not – there is no contradiction whatsoever about an ontological present in which there are (initially synchronized) clocks that show different times.
Beyond that, to turn the tables a bit, one might also ask why someone who holds an eternalist view would act to influence the future rather than the past, given that past and future both already exist on this view. It seems to me that eternalists are aligned with common sense rather than their own view of time in this respect. Also, to what extent does it make sense to say that all moments exist “always”? After all, doesn’t “always” refer to something occurring over time? The meaning of claims of the sort that “every moment exists always” is, I believe, less obvious than proponents of eternalism appear to think, and seems in need of unpacking.
Yet, once again, the main point I wish to drive home here is not that people should consider presentism most plausible, but merely that we should maintain substantial uncertainty concerning this question as well.
4. What might perhaps be considered Pascalian, or at least more so, is the proposition that we are living in the multiverse described by the Many Worlds Interpretation and that ontological possibilities exist within this multiverse. Yet in this case the stakes appear to become more than great enough to justify even Pascalian probabilities. Hence, this possibility appears worth taking seriously as well.
In this piece I shall defend what may appear an unusual thesis, namely that all reasoning is ultimately based on induction, and hence that induction is the only way in which we ever know anything. By induction, I here mean inference based on what seems right in light of the doubtable data/experience we have accumulated so far. In everything from logic and mathematics to philosophy and psychology, this is invariably how we evaluate what is true. Or so I shall argue.
How can we be sure that the patterns we have reliably observed in the world so far will also exist in other times or places? How can we justify the assumed uniformity of the world that induction seems to rest upon? How can we trust induction when it cannot be deductively justified? This is the problem of induction in a nutshell.
What is interesting, however, and seemingly universally missed, is that exactly the same problem is staring us in the face when it comes to deduction. Logical deductions are also part of the world, and hence to assume that they will be valid in all times and in all realms is also to assume that the world is uniform in certain ways. It is the exact same assumption, so why is it considered problematic in the case of induction but not in the case of deduction? What is the source of this discrimination?
The answer, I think, is that it just seems true that deduction is universal, and that the opposite claim — that logic is not universal — seems to make no sense. I certainly share this impression, but this does not render deduction wholly undoubtable. We may reasonably have confidence in the statement that logical deductions are universal, but we should be clear that the basis of this belief is itself merely that it seems reasonable to suppose this given that our minds apparently cannot make sense of anything else. More than that, we should also be clear that we then in fact do accept the uniformity of the world (or perhaps assign a high probability to this claim being true), and that we do it on the basis that it just seems reasonable.
Another aspect of the problem of induction is that induction merely is assumed to be valid, and that attempts at justifying it always seem circular. Yet again, how does deduction compare? How do we justify deduction? With deductive arguments? That would be circular as well. With brute assumptions? If so, why is it more problematic to assume the validity of induction?
There really is no fundamental distinction. We accept both induction and deduction because they seem right. Deductions seem obviously reasonable and valid while inductive inferences seem fairly reasonable and probably valid. The only difference, it seems, is the degree of obviousness, a difference I shall try to explain below.
One way to realize the conclusion sketched out above is by recalling the fact that all our beliefs reside in memory. And we know that 1) our memory consists of information we have gathered over time, and 2) our memories can be unreliable. There is nothing logically problematic about this; indeed, this is common knowledge. Yet it implies something rather significant, namely that all our beliefs, including those about logic, are doubtable, and that all our beliefs are a matter of what seems right in light of the doubtable data/experience we have accumulated so far.
This applies to all knowledge, whether inductively or deductively inferred (as we shall see, the latter is a subset of the former). Mathematical proofs, for instance, are often claimed to be certain knowledge, yet our knowledge of mathematical proofs is also contained in memory. And since all mathematical proofs we know of are stored in memory, and since memory is fallible, it follows that our belief in any mathematical proof we hold to be valid is, in fact, fallible.
The idea that mathematical knowledge is certain and rests only on deduction is indeed ridiculous. Take for instance the proof of Fermat’s Last Theorem: only a small fraction of professional mathematicians fully understand this proof, yet in my experience, virtually all mathematicians will say that we know that Fermat’s Last Theorem is true. And this is probably a highly reasonable belief, but let us be clear about how we know it: by trusting the expertise of other mathematicians. And such trust is transparently based on induction; it is not based on deduction. More than that, we know, inductively, that this inductively based trust is fallible.
A famous example would be Alfred Kempe’s proof of the four-color theorem, presented in 1879, which was widely accepted until it was shown to be incorrect in 1890. Another example is Gauss’ proof of the fundamental theorem of algebra, a proof Gauss himself obviously held to be valid, as did many other mathematicians, yet it was not completed until more than a hundred years after Gauss first published it.
So our mathematical knowledge clearly relies strongly on induction, in that we trust others. Indeed, I would argue that, in practice, most of the mathematical knowledge any given mathematician possesses rests on such trust in others rather than on their own deductions. Yet to think that we rely on induction merely when it comes to trusting others in the pursuit of what we call deductive knowledge is to miss the point. For the point is that this applies to all mathematical knowledge, including when we have made all the deductions ourselves. There is no fundamental distinction between when others have made the deductions and when we have made them ourselves. In both cases, we trust conclusions made by fallible minds, stored in a fallible memory.
This of course isn’t to say that such trust is unreasonable, yet the nature of this trust should not be missed: it rests on induction. There is no deductive argument that proves our memory to be reliable. Rather, we merely assume the reliability of memory, and 1) this is an assumption that we cannot not make, 2) it is an assumption that all deduction, indeed all knowledge in general, rests upon, and 3), to repeat the point made above, this assumption rests on induction.
Let me explain and justify all these claims in turn. To start with 3), to assume that our memories in this present moment are valid rests on the assumption that the information we have stored in memory earlier still applies. This projected extension of the limited information we know is the core of induction. As for 2), it is trivial that all knowledge, including that derived from deduction, rests on the reliability of memory, since that is where all our knowledge is stored. So to say that we know anything about anything is to assume the validity of our memory — or at least the validity of some aspects of it; more on this below. Lastly, 1), the assumption that we can trust our memory is an assumption we cannot not make because our memory is the position from which we see the world. To even doubt this assumption requires trusting it, since one must then at least trust that one doubts.
“Yet we know our memory to be profoundly unreliable, don’t we?”
Yes, but it is not entirely so, and that is the point. For in order to even discover that our memory is not (entirely) reliable, we must assume that at least some aspects of our memory are — at the very least those aspects of it that hint that our memory is not entirely reliable. In other words, the discovery of the imperfect reliability of memory rests on its partial reliability.
So believing that we cannot trust any aspect of our own memory is nothing less than logically impossible, since such a belief — indeed any belief — itself resides in memory, and thus rests on its (at least partial) reliability. And given this status of logical impossibility, the belief that we cannot trust any aspect of our memory must be considered false with at least the same certainty that we place in other logical conclusions. Indeed, if possible, it should be granted even higher status, since all other beliefs, including purely logical ones, rest upon its falsity, namely that we can trust (at least some aspects of) our memory. That’s right: all deductive knowledge rests on the reliability of memory, and this reliability rests on the validity of induction [again, this was 3) above]. Conclusion: Deductive knowledge rests on the validity of induction.
Indeed, the reason we trust deduction is ultimately inductive. For deductions are also, I would argue, experiments that we run in our heads, albeit experiments that reliably produce the same result. We therefore inductively conclude that they will keep on doing the same. What we usually consider matters of induction — for instance, we have observed a thousand white swans; should we expect the next swan to be white given all that we know about the world, including the fact that there are other birds who are not white? — is just different in that we are in a realm where our information seems a lot more incomplete. It is ultimately of the same form.
This also explains the difference in the status of certainty we ascribe to deduction and induction mentioned above: deduction seems obviously reasonable and valid because the experiment goes right every time, as far as we can tell, while (what we usually call) induction seems fairly reasonable and probably valid because it works well most of the time.
So the reason, I believe, that Hume found deduction more valid than induction, and found induction so much more problematic, was, ironically, because induction recommends the former more strongly. Hume’s objection to induction is really an adventure in self-contradiction — in many ways. For instance, the great man claimed, based on his own brain’s reasoning, that a universal rule cannot be derived from particular instances, yet what is this if not itself a universal rule derived from particular instances (of reasoning in his brain)? What is this if not a glaring self-contradiction?
Try as you might, in the realm of belief, there simply is no denying the validity of induction. Again, in order to even express doubts about the validity of induction, one must inescapably rest on what one is trying to doubt, as one then inductively assumes that doubt is a meaningful concept in this moment (it has been so far), that the others whom one expresses one’s doubts to will understand a word of what one says (they have so far), that there still is a problem of induction (it seems there has been so far), etc. Indeed, all beliefs rest on induction, as they rest on the assumption that the justification we have acquired for them in the past still applies in the present, including belief in notions of past, present, and future in the first place, not to mention (tacit) belief in there being such a thing as logic, truth, and falsehood — the ideas that constitute the entire framework in which discussions about induction occur.
“So what justifies induction, then?”
Nothing. In order to even enter the realm of trying to justify something, we have already accepted induction. In asking for a justification for induction, we ask from a position of unacknowledged acceptance of it. Indeed, what justifies the belief that there is a need to justify induction — a belief that itself rests on induction? Nothing. If we believe anything at all, we are already way past the point of accepting induction, knowingly or not. So to the extent we admit of having any beliefs at all, we admit of the validity of induction. We are fundamentally confused about where in our hierarchy of beliefs induction enters the picture. The answer is: underneath it all.
To say that reliance on induction is inevitable is obviously not to say that all inductive inferences are valid. So how do we know valid inductive inferences from invalid ones? Via induction, of course.
In a nutshell, we (ideally) assess the truth of a statement in light of all the information we have in our memory — the totality of what we know. This is all we have, and hence all we can ever evaluate truth claims against. The more the doubtable data points we have accumulated point our beliefs in a certain direction, the stronger those beliefs are, or at least should be.
For example, the claim that the sun will rise tomorrow is a claim that we believe because it fits with, indeed is predicted by, everything we know, from the totality of humanity’s knowledge of physics and astronomy to our everyday experience.
In the same way, we can deem inductive inferences false. For instance, the claim that the sun will always keep rising because it has done so thus far is obviously not true, and the way we know this is again via induction: we know of underlying physical principles that “govern” the physical macro patterns that are the dynamics of stars and planets, and these principles, along with astronomical observations of stars elsewhere, imply that the lifetime of our solar system will indeed be finite. That is what all the data points to.
The commonly cited examples of “hard problems” for our (inevitably) inductive reasoning are all problems that arise from paying attention to too narrow a channel of information. For instance, when we say that every swan we have ever seen is white, and therefore all swans must be white, this is simply a bad inference that fails to keep other relevant facts in view, such as the size of our sample, the size of the Earth, and the fact that there are other birds that have different colors, a fact that is relevant when we keep in mind the additional fact that there is a high degree of similarity in patterns across species.
“But what if we did not know about these additional facts? Then the inference seems reasonable.”
First, it should be noted that if we were in that position, we would be ignorant to a degree that is hard for us to imagine as creatures who know a lot. Second, if we were in such a position of knowing virtually nothing, we should indeed be very careful about drawing confident general conclusions about the world. If you have seen a thousand swans, and they have all been white, it seems reasonable to expect that the next one you see will be white as well, but it by no means implies that all swans are.
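One classical way to make this last point quantitative is Laplace’s rule of succession: under a uniform prior on the proportion of white swans, a long run of all-white observations makes the next swan very likely white while leaving “all swans are white” far from established. A small sketch follows; the figure of one million future swans is an arbitrary illustration of my own:

```python
# Laplace's rule of succession with a uniform prior:
#   P(next observation is white | n white in a row)        = (n + 1) / (n + 2)
#   P(next k observations are all white | n white in a row) = (n + 1) / (n + k + 1)

def prob_next_white(n: int) -> float:
    """Posterior probability that the next swan is white after n white swans."""
    return (n + 1) / (n + 2)

def prob_next_k_white(n: int, k: int) -> float:
    """Posterior probability that the next k swans are all white."""
    return (n + 1) / (n + k + 1)

print(round(prob_next_white(1000), 3))           # 0.999: next swan very likely white
print(round(prob_next_k_white(1000, 10**6), 3))  # 0.001: "all swans white" unsupported
```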
“But couldn’t our inductive reasoning be wrong, even when we know a lot and we consider the totality of what we know?”
This is possible, yet, as we know inductively, e.g. from statistics, the more we know, the less likely such mistakes are. It is also worth noting how we know of the possibility of the fallibility of inductive inferences in the first place, namely via induction. We know that apparently solid patterns can break because we have witnessed it before. Nations that seemed strong suddenly fell, people who were right about many things were suddenly wrong, proofs that seemed valid were shown not to be, etc. We have observed this meta pattern of patterns sometimes breaking when we don’t expect it, which has taught us, inductively, to be more open-minded about the possibility of the breaking of even apparently solid patterns. It is always induction that teaches us epistemic modesty.
So it is due to inductive reasoning, not in spite of it, that we seem to have some reason to be agnostic concerning the generality of patterns we consider general, such as whether the cosmos looks the same everywhere across time and space — a question that is currently debated among physicists and cosmologists. What we can say here seems much like what we could say as the ignorant swan observers we imagined ourselves to be above: it seems reasonable that the time and space in the proximity of that which we have observed to unfold in certain law-like ways will also unfold in such ways, but we cannot confidently claim that this applies to all time and space.
The Source of the Problem: A Narrow and Confused View of Knowledge
As mentioned above, a narrow focus on certain data and beliefs about the world, as opposed to a focus on the totality of what we know, is the source of many problems in epistemology, including Goodman’s new riddle of induction and the traditional problem of induction itself. In the case of Goodman’s new riddle of induction, the problem is, in a nutshell, that we have no reason to believe that properties such as grue and bleen exist in light of all that we know about physics, as their existence would essentially require a change in the laws of physics that we have no reason to believe possible. So it is not the case that these two hypothetical properties constitute a deep problem for induction; the suggestion that things could be grue or bleen merely constitutes an extremely unlikely hypothesis about the world.
As for the problem of induction itself, a narrow focus is also to blame. Hume made the following claim: “That the sun will not rise tomorrow is no less intelligible a proposition, and implies no more contradiction, than the affirmation, that it will rise.” Yet that this proposition “implies no more contradiction” is simply wrong, since it contradicts pretty much everything we know in fields such as astronomy and physics. And if you can contradict all this, why not also contradict history and claim that there never was a guy named David Hume, and that nobody has ever raised any so-called problem of induction? After all, this is certainly “no less intelligible” or plausible than the claim that the sun will not rise tomorrow. Or to take a more traditional inductive problem: why believe that there is any problem of induction in this moment or the next one just because it seems that there has been in the past? Indeed, why not contradict logical conclusions themselves?
This is surely what Hume means: the claim that the sun will not rise tomorrow seems to imply no logical contradiction, yet this dichotomy between logical and physical knowledge is, I would argue, ultimately misguided. First, in ontological terms, there is no evidence for the existence of some separate logico-mathematical world apart from the physical one — mathematical truths are found in and by the human mind, and given that the human mind is physical, it follows that mathematical truths are found in and by the physical. Second, as mentioned above, in epistemological terms, both what we consider mathematical and physical forms of knowledge ultimately share the same inductive basis — they are stored in our memory based on what we have experienced — which is yet another reason not to strongly privilege one over another, as Hume does. In sum, there is no justification for Hume’s narrow focus on, and privileging of, deductive reasoning and knowledge — his belief that only (what we categorize as) logical truths are valid. Again, deductively based beliefs, like all other beliefs, also rest on induction in the first place.
How do we know that we are conscious, or that two plus two equals four? The answer, I would argue, is simply that it appears clear from our experience that this is the case. We ultimately have no deeper justification than this.
And this answer actually does not change when we ask more complicated questions, such as how we know that the Earth is round, or what the name of the current president of the United States is. We know because of experiences that have shaped, and in significant ways are now part of, our present experience from which it just seems obvious what the answer is. We may be able to express a long chain of reasons that compel us to hold the belief we hold, yet at the bottom of this elaborate chain, all we ultimately have is a set of conscious impressions of belief. Or doubt, for that matter, if we don’t happen to know the answer, but the basic mechanics are the same: we weigh our experience and read off from it what our state of belief — or doubt — is; itself a fact about the world.
Every chain of explanations must end somewhere, and, when it comes to our knowledge, the rock bottom of this chain is found in our direct conscious sensations. Ultimately, we do not have a deeper justification for what we know than this: it seems that way from our conscious impressions. This form of foundationalism is, I submit, the solution to the so-called Münchhausen trilemma concerning how we justify what we know.
This is not to say that we cannot question and correct our impressions. We clearly can, as the correction of illusions and biases exemplify, yet our knowledge of such corrections is itself a matter of conscious impressions, for instance impressions that inform us about statistics, which help us correct wrong ones. The ultimate justification for our beliefs is still our experience. And this is indeed how we improve our knowledge of the world: new impressions help update and correct old ones, which in turn makes us form better ones, i.e. impressions that represent the world more accurately.
That our knowledge at bottom rests on experience is also not to say that our knowledge rests on a basis of mere assumptions. A good analogy, I believe, is our knowledge of fundamental physical constants, which are also in some sense primitive, in that they are measured rather than derived from something else. We have no deeper justification for believing what the values of these constants are than our measurements, yet this is clearly distinct from merely assuming these values. Similarly, I would argue that we observe — “measure”, if you will — the fact that we are conscious and that two plus two is four; we do not merely assume this (there is clearly a difference: to arbitrarily assume your friend is in the same room as you is quite distinct from seeing that your friend is in the same room as you). And as in the case of the measurement of fundamental physical constants, direct measurements in consciousness can of course be erroneous, yet when we consistently measure the same result time and time again by running the same experiment, we do seem reasonably justified — inductively, as always — in believing the validity of the measurement.
That our conscious impressions are what our beliefs ultimately rest upon may seem somewhat weak and unsatisfying, yet only if we fail to keep in mind that conscious impressions are in fact all we ever deal in when it comes to our knowledge. This includes the sense that conscious impressions constitute a poor foundation for knowledge: this sense is itself just another appearance in consciousness, resting on the exact foundation it purportedly doubts. And if a statement like “I believe this because it seems that way in light of what I experience” sounds like a weak foundation for knowledge, this, I believe, is mainly because we usually only use this kind of language when it comes to matters we are uncertain about, such as immediate unexamined impressions. In truth, however, this “it is what seems true in light of my experience” is in fact what we always do, regardless of our degree of certainty. One’s knowledge of textbook information is also “just” another conscious impression.
What we do when we model the world is to represent its features with the different colors of the palette of consciousness. Indeed, this is all we ever can do: consciousness is all we ever know, and hence its colors are all we ever can model and represent the world with at the level of our knowledge.
One can fairly consider this account of knowledge a positivist one, although one that is of a distinctly phenomenological and commonsensical sort. For given that consciousness is all we ever know, it is obvious that all facts we know are known via a composition of the various states of consciousness available to us, including the set of facts about the “external world” that can be detected and represented with our conscious minds (and things that fall outside of what we can detect with our conscious minds are obviously the things we cannot know).
So although science is often considered beyond unification, and universal features shared by all sciences have seemingly been deemed non-existent by common consensus, it remains trivially true, to me at least, that all forms of knowledge, whether we deem them “scientific” or not, are known in consciousness, and hence that all our knowledge is at least united by this common feature. In a nutshell, our knowledge of the world is a matter of phenomenological models that appear consistent with phenomenologically observed data. And, again, this “appearing consistent with” — or “seeming right” in light of — all the data is, as a matter of justification, ultimately all we have. This, I submit, does not only apply to science in its usual narrow conception, but to reason in general. For instance, this is also how we (ideally) assess the plausibility of different views in, say, ethics and epistemology: by weighing the data, including arguments and counter-arguments, and assessing what seems reasonable in light of it all (and here it is worth being mindful of the fact that genes seem to play a significant role when it comes to what “seems reasonable”, also in the realm of ethics and politics, and hence to be intensely skeptical of the “immediate seemings” of one’s crude intuitions, and to probe them deeper).
In this way, this account of knowledge and reason actually breaks down the usual empiricism-rationalism dichotomy: all processes of thought and reasoning are also phenomenally observed sensations, and hence not something different from “observations.” They are indeed themselves impressions — more doubtable data — that influence our view and assessment of the world. Rationalism, as in logical reasoning, is just another mode of empiricism and experiment, one that has strengths and weaknesses like all other “experimental devices”.
It is worth noting that this account of our knowledge, and reason more generally, does not amount to mere Bayesianism in any usual sense. For while Bayesian updating surely shares this general feature of being a matter of updating and estimating degrees of certainty based on all available evidence, and while much of our own updating is overtly Bayesian — for instance, many of us have made updates in our views based on formal Bayesian calculations — there is much more to our knowledge and our updating of our beliefs than mere formal calculations with numerical probabilities. Not all available evidence is represented, or even representable, as numerical probabilities; for a person who does not know what it is like to experience, say, sounds and sights, no amount of formal Bayesian calculations is going to shed light on the matter. One must experience these things to know what they are like. Bayesian updating is merely the formal special case of the more general inductive method of estimating what seems right in light of the doubtable data/experience we have accumulated so far.
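For the curious reader, the “formal special case” referred to above can be illustrated with a toy calculation. The following is purely my own illustrative sketch, not part of the original argument: a single Bayesian update of one’s credence in a hypothesis after observing a piece of evidence.

```python
# Toy illustration of a single formal Bayesian update:
#   P(H | E) = P(E | H) * P(H) / P(E)
# where P(E) = P(E | H) * P(H) + P(E | not-H) * P(not-H).

def bayes_update(prior, likelihood, likelihood_given_not_h):
    """Return the posterior probability of hypothesis H after observing evidence E."""
    evidence = likelihood * prior + likelihood_given_not_h * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical example: a 0.5 prior that a coin is biased toward heads
# (P(heads) = 0.9 if biased, 0.5 if fair); we then observe one heads.
posterior = bayes_update(prior=0.5, likelihood=0.9, likelihood_given_not_h=0.5)
print(round(posterior, 3))  # 0.643
```

The point of the contrast drawn above is that an update of this kind presupposes that the evidence can be stated as a numerical probability in the first place, which, for instance, knowledge of what an experience is like cannot.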
A notion one often hears from religious scholars is that faith in religious claims is no less reasonable than belief in the facts we know from the sciences, since the latter ultimately rest on faith as well: they rest on faith in reason. Yet is this true? In a nutshell, no.
Science is the process of learning about the world by observing it. Therefore, one could argue that science rests on the assumption that we can learn about the world by observing it, which is in fact functionally equivalent to the assumption that induction is valid, since learning about the world by observing it requires that patterns that existed in the past still exist today and in the future — the core of induction.
Yet one need not even make this assumption explicitly, since the assumption that we can learn about the world by observing it is one that we cannot not make. In order to even express the belief that one cannot learn from experience of the world, one has already learned from such experience, the experience of one’s own belief. (This inevitability makes it just like the assumption that at least some aspects of memory can be trusted, which is in fact also an equivalent proposition: that we can learn about the world by observing it requires that at least some aspects of our memory are reliable, and for our memory to contain reliable information about the world, it must be possible to learn about the world by observing it.)
Thus, we all implicitly “assume” that we can learn about the world by observing it, whether we are religious or not, and hence making this inescapable “assumption” cannot meaningfully be called a leap of faith. Rather, it is an inescapable fact (one that all other facts rest upon), as there is no intelligible alternative (indeed, the very possibility of intelligibility of any kind rests on learning from observation in the first place, as claims cannot be deemed (un)intelligible if they cannot be learned in the first place). This makes it wholly unlike actual leaps of faith, i.e. believing in things, such as supernatural events, without supporting evidence. The latter is by no means inescapable.
Indeed, claims about some things being a matter of faith only make sense in a context where we have already made “the leap of faith” of accepting that we can learn about the world by observing it, since whether a claim rests on faith is a matter of whether there exists evidence to support it. And all evaluations of evidence must take place in a realm where we have already assumed the relevance of evidence for propositions about the world — i.e. already made the inevitable “assumption” whose status was in question. In other words, in order to assess whether or not something is a matter of faith, we must “assume” the relevance of evidence in the first place; we must accept that we should go with what seems right in light of the doubtable data/experience we have accumulated so far.
One may object that science rests on much more specific assumptions than merely the possibility of learning about the world by observing it, yet, ideally, this should not be the case. For while it is true that specific methodologies have emerged in the sciences over time, the process of science most generally — that is, learning about the world by observing it — is not committed to any specific methodology in principle, which makes all specific methodologies open to revision. If certain methods are shown to be seriously flawed, as has happened before, these methods should be discarded or updated. And this is indeed how the methods we see employed in the sciences today have developed. Placebo-controlled studies and double-blind experiments were not assumed by faith to be the way to “do science” from the outset. Rather, these and other sensible methods of discovery were themselves discovered over years of trial-and-error.
Thus, what works best, both when it comes to theories and methods, is itself to be settled with observation and examination, not faith. Based on the fundamental principle of learning from observation, science continually refines its own method. In this way, the process of observing and learning about the world is a self-correcting and self-optimizing one.
As noted earlier, inductive reasoning has shown us that we have good reason to maintain humility about our beliefs. We know that our memory is fallible. As mentioned above, even mathematical proofs held to be valid by many have turned out to be wrong, and this risk of fallibility not only pertains to the logical deductions made by others, but also to those made by ourselves — the appearance that a logical deduction is valid can turn out to be wrong upon closer examination. It has happened before.
So it seems that we should maintain at least some degree of doubt even when it comes to logical deductions that we seem to have reason to be completely certain of, which is not to say that it is reasonable to have more than a negligible degree of such doubt in most cases.
Yet the above-mentioned doubts merely amount to epistemological doubts, doubts about whether our faculties of reasoning accurately track the deeper patterns of the world. We could also have doubts of a deeper ontological nature, namely about the stability of those patterns themselves. For instance, will the laws of physics as we know them apply tomorrow? What about logico-mathematical truths?
Do such questions even make sense? After all, don’t questions concerning what happens tomorrow, questions that rest on the concept of time, already presuppose some basic laws of physics, or at least some elements from the physical framework as we know it? And doesn’t the meaningfulness of doubts concerning whether our logical framework will at all apply tomorrow also itself rest on the validity of that very framework, e.g. that things are either the case or not the case? After all, all talk of whether something applies or not — is true or not — already takes place in the realm of, and therefore presupposes the sensibility of, logical thought. So what does it even mean to say that this framework might no longer apply when the very coherence of “applying” rests on this framework? It seems self-refuting.
It does. Yet even so, we do seem to have reason to maintain at least some degree of humility about these propositions, one reason being the aforementioned “epistemological doubt” — we know our memory is not entirely reliable, and hence we should admit of the possibility that deductions of the sort made above have a small risk of being wrong. Indeed, this argument for the sensibility of (at least a small amount of) doubt seems to pertain to all arguments, including itself (and also the most undoubtable of ethical positions we may hold).
Second, certain drastic changes, such as changes in certain otherwise lawful physical patterns, do not seem inconceivable; indeed, some cosmological theories predict such changes. Therefore, the claim that at least some apparently solid facts about the world may suddenly change cannot be ruled out deductively, it seems. Might the very fabric of existence suddenly change in radically unexpected ways, thereby perhaps altering physical and mathematical truths as we know them? (Again, on a physicalist view of the world, physics and mathematics cannot be separated, which means that what we may call the uniformity of mathematics depends on at least some degree of uniformity of [what we consider] physics). It seems extremely unlikely, but we cannot exclude it with total certainty.
Lastly, it also seems conceivable that we could have new experiences — on a sufficiently exotic drug, for instance — that would suddenly make the so far inconceivable seem conceivable, and thereby make apparently valid deductions and brute facts appear invalid and untrue. Again, the only justification we have for believing what we believe is, ultimately, that “it just seems true.” And while it may be inconceivable to imagine, say, that mathematical truths could suddenly change, it does not, strangely enough, seem inconceivable that such an apparently inconceivable claim could seem conceivable in a radically different state of mind. And if it can seem right in another state of mind, how can we maintain absolute certainty that that state of mind is more wrong than our own present one is? It seems we can’t.
In sum, it seems that even when it comes to the most outrageous of claims, claims we cannot even make any sense of, some small degree of uncertainty about their status still seems in place, although the appropriate degree may be very small indeed. Everything can reasonably be doubted to some degree. Or so it seems.
[A small side note: In terms of practical implications, this small window of doubt might help one soften up painful certainties, such as certainty in fatalism. For while it might be tempting to some to think about the world as being an unalterable multi-dimensional structure that we cannot change in any strong sense, one must admit that this view could in fact be wrong, and hence that trying to change the world for the better indeed might have some chance of making a difference even in a very strong sense. Either way, it seems like one does not lose anything by trying one’s best.]
Our conscious experience seems to represent a world “out there” that is independent of our own minds. But how do we know this representation is at all accurate? How do we know the truth is not rather some well-known skeptical conjecture — for instance, that our experience is all a dream or a computer simulation?
I think there is a lot to be said against skepticism of this sort, the most important one being that it is inconsistent. Knowledge of dreams and simulations is itself found in our experience, and hence to consistently doubt the validity of our experience requires us to doubt the validity — i.e. the meaningfulness and sensibility — of these notions themselves. Yet in our entertainment of skepticism of this sort, these notions themselves are somehow exempt from skepticism. They stand beyond scrutiny, while virtually all other appearances we know of, and all other beliefs we hold, do not.
What can justify such inconsistent skepticism? Nothing, as far as I can see, especially given that claims of the sort that all we experience could be a dream or a computer simulation seem extremely dubious to say the least. Take the claim that our entire experience is a dream. Does anything we know of actually suggest this in the slightest? Not to my knowledge. The state of our consciousness in our dreams is radically different from our waking state. Indeed, within a dream it is even possible to realize that one is dreaming, and to explore one’s consciousness in that state, as many of us have experienced; something similar never happens in our waking state. The only thing that remotely hints that our experience could be a dream is an argument from analogy: Given that our experiences in dreams can seem to convincingly represent the world, yet still turn out to be mere dreams, could our waking state that seems to convincingly represent the world not be a mere dream too?
If dreams were anything like our waking state, this would indeed seem reasonable. Yet the truth is that they are not.
This “the appearance is different” fact may seem to say precious little, yet only if we miss the significance of differences in appearances. By analogy, imagine that you are on holiday in Istanbul. You remember planning the journey, traveling there, being there for the past five days, and presently you are watching the Sultan Ahmed Mosque while feeling the unbearable summer heat. Now, how do you know that you are not, in fact, in Oslo? Well, just about every single appearance in your consciousness suggests that you are not, and hence you are not in much doubt. And reasonably so.
Yet is this really analogous to the difference in appearance between our dreaming and waking state? Not quite, as I would argue that this analogy actually fails to do justice to the actual difference between our waking and dreaming state, a difference that is far greater than the difference between a waking experience of Istanbul and Oslo respectively. Hence, I would argue that we have no more reason to suspect that our present experience is a dream than we have reason to suspect that we, say, live in a completely different city than we thought. Yes, the world, including the basis of our experience, may well turn out to be very different from what we expect in many ways. Yet the specific claim that our experience of the world is a dream — something that takes place in the brain of a sleeping person — is, I would argue, extraordinarily implausible in light of all that we know, especially the enormous difference between the character of our waking and dreaming state.
Even stronger skepticism seems justified in the case of the claim that all we experience is a computer simulation, one reason being that we simply have no evidence that computer simulations can mediate conscious minds like our own in the first place — at least no more evidence than we have for believing that, say, tomatoes can (indeed, tomatoes are in many ways far more similar to human brains in physical terms than computers are). Another good reason to be intensely skeptical is that so-called ancestor simulations are in fact impossible.
A similar degree of skepticism seems apt in the case of the claim that all we experience is the result of a brain in a vat. According to what we know from fields such as physics, chemistry and biology, there is, as Daniel Dennett argues in Consciousness Explained, no way to produce an experience like ours by stimulating a brain in a vat. And if we dismiss such knowledge, we might as well dismiss our belief in the existence of brains in the first place — itself a belief about physics and biology that we do not seem justified in granting a more privileged status than we do other solid facts found in the canons of physics and biology.
And since we are dealing with various skeptical hypotheses, it seems worth pointing out that skepticism about the existence of other minds is on no firmer ground, as it indeed has the exact same epistemic status as doubting the existence of brains does. The existence of brains is only known through our own conscious experience, an experience that, according to what is known in that experience itself, is mediated by a physical brain. Based on this, we draw an inferential arrow that connects our experience to physical brains. We go from experience to physical brain. Therefore, drawing an arrow from brain to experience — whether one’s own or that of others — which is really just to draw the exact same arrow in the opposite direction, is no more problematic. In conclusion, doubting the existence of other minds is really no more reasonable than doubting the existence of one’s own brain.
One may argue that there is a difference when we are talking about brains different from our own, yet one could say the same about one’s future or past brain, which is also different from one’s present one. If one believes that one’s own future brain will be conscious — a brain that is similar yet still different from one’s present one — then how can one maintain that the brains of other beings, which are also similar yet different from one’s present brain, are not conscious as well? Similarly, if one believes that one’s ever-changing brain has mediated conscious states in the past, why should the different brain states of others not mediate consciousness as well? To believe they do not is simply inconsistent.
The problem with skeptical conjectures such as the dream and the simulation hypothesis is, again, that they hold that virtually all the appearances we know from our experience are false, yet the appearance of the possibility that the basis of our experience is something radically different from what we thought — yet still something that we know of from our experience, such as the notion of a dream or a simulation — is not subjected to such doubt at all (in spite of an absence of good reasons for believing in such possibilities in the first place). In other words, these conjectures rest on arbitrarily constrained skepticism.
More than that, these skeptical hypotheses also seem to undermine themselves. For if we accept the premise that our experience indeed is a simulation or a dream, what reason do we have for believing that the worldview we are able to draw from it, including any conclusion about dreams and simulations, has any validity beyond our own simulation or dream? If we are living in a dream or a simulation, it seems that what we think we can say with any certainty about the world, including about dreams and simulations, is likely to be wrong to an unimaginable degree, since it is all based on pure dream or simulation itself. In conclusion, accepting any of these conjectures seems to force us to doubt them strongly, even to make it difficult to make sense of them. And being self-undermining is not a virtue of a conjecture.
Again, what we do when we assess the truth of a proposition is, ideally, to judge its plausibility in light of the totality of what we know. And this is exactly what we fail to do when we deem skeptical conjectures of this sort likely. We go with peculiar arguments, propositions, and concepts, and then doubt everything else, thereby ignoring that the meaning, even the coherence, of these arguments and concepts rest, in subtle and not so subtle ways, on all this other knowledge that they supposedly imply we should doubt, thereby inadvertently destroying their own foundations.
Keeping the totality of our knowledge in view and applying our skepticism consistently leads us, I maintain, to a relatively common sense view of the world, at least when it comes to the basic nature of the basis of our experience. What we know about the world hints that our experience is mediated by a biological brain just as strongly as our experience hints that the Earth is round; nothing really suggests it is not. In my view, we have no good reason to believe that what we experience is, or even could be, a dream or a simulation, while a very great deal — including consistent thinking based on what we know — strongly suggests it is not.
This post was originally published at my old blog: http://magnusvinding.blogspot.dk/2016/11/induction-is-all-we-got.html
I think much confusion is caused by a lack of clarity about the meaning of the word “intelligence”, and not least a lack of clarity about the nature of the thing(s) we refer to by this word. This is especially true when it comes to discussions of artificial intelligence (AI) and the risks it may pose. A recently published conversation between Tobias Baumann (blue text) and Lukas Gloor (orange text) contains a lot of relevant considerations on this issue, along with some discussion of my views on it, which makes me feel compelled to respond.
The statement that gave rise to the conversation was apparently this:
> Intelligence is the only advantage we have over lions.
My main thought on this is that it is a simplistic claim. First, I take it that “intelligence” here means cognitive abilities. But cognitive abilities alone — a competent head on a body without legs or arms — will not allow one to escape from lions; it will only enable one to think of and regret all the many useful “non-cognitive” tools one would have liked to have. The sense in which humans have an advantage over other animals, in terms of what has enabled us to take over the world for better or worse, is that we have a unique set of tools — upright walk, vocal cords, hands with fine motor skills, and a brain that can acquire culture. This has enabled us, over time, to build culture, with which we have been able to develop tools that have enabled us to gain an advantage over lions, mostly in the sense of not needing to get near them, as that could easily prove fatal, even given our current level of cultural sophistication and “intelligence”.
I could hardly disagree more with the statement that “the reason we humans rule the earth is our big brain”. To the extent we do rule the Earth, there are many reasons, and the brain is just part of the story, and quite a modest one relative to what it gets credit for (which is often all of it). I think Jacob Bronowski’s The Ascent of Man is worth reading for a more nuanced and accurate picture of humanity’s ascent to power than the “it’s all due to the big brain” one.
> There is a pretty big threshold effect here between lions (and chimpanzees) and humans, where with a given threshold of intelligence, you’re also able to reap all the benefits from culture. (There might be an analogous threshold for self-improvement FOOM benefits.)
The question is what “threshold of intelligence” means in this context. All humans do not reap all the same benefits from culture — some have traits and abilities that enable them to reap far more benefits than others. And many of these traits have nothing to do with “intelligence” in any usual sense. Good looks, for instance. Or a sexy voice.
And the same holds true for cognitive abilities in particular: it is more nuanced than what measurement along a single axis can capture. For instance, some people are mathematical geniuses, yet socially inept. There are many axes along which we can measure abilities, and what allows us to build culture is all these many abilities put together. Again, it is not, I maintain, a single “special intelligence thing”, although we often talk as though it were.
For this reason, I do not believe such a FOOM threshold along a single axis makes much sense. Rather, we see progress along many axes that, when certain thresholds are crossed, allow us to expand our abilities in new ways. For example, at the cultural level we may see progress beyond a certain threshold in the production of good materials, which then leads to progress in our ability to harvest energy, which then leads to better knowledge and materials, etc. A more complicated story with countless little specialized steps and cogs. As far as I can tell, this is the recurrent story of how progress happens, on every level: from biological cells to human civilization.
> Magnus Vinding seems to think that because humans do all the cool stuff “only because of tools,” innate intelligence differences are not very consequential.
I would like to see a quote that supports this statement. It is accurate to say that I think we do “all the cool stuff only because of tools”, because I think we do everything because of tools. That is, I do not think of that which we call “intelligence” as anything but the product of a lot of tools. I think it’s tools all the way down, if you will. I suppose I could even be considered an “intelligence eliminativist”, in that I think there is just a bunch of hacks; no “special intelligence thing” to be found anywhere. RNA is a tool, which has built another tool, DNA, which, among other things, has built many different brain structures, which are all tools. And so forth. It seems to me that the opposite position with respect to “intelligence” — what may be called “intelligence reification” — is the core basis of many worries about artificial intelligence take-offs.
It is not correct, however, that I think that “innate differences in intelligence [which I assume refers to IQ, not general goal-achieving ability] are not very consequential”. They are clearly consequential in many contexts. Yet IQ is far from being an exhaustive measure of all cognitive abilities (although it sure does say a lot), and cognitive abilities are far from being all that enables us to achieve the wide variety of goals we are able to achieve. It is merely one integral subset among many others.
> This seems wrong to me [MV: also to me], and among other things we can observe that e.g. von Neumann’s accomplishments were so much greater than the accomplishments that would be possible with an average human brain.
I wrote a section on von Neumann in my Reflections on Intelligence, which I will refer readers to. I will just stress, again, that I believe thinking of “accomplishments” and “intelligence” along a single axis is counterproductive. John von Neumann was no doubt a mathematical genius of the highest rank. Yet with respect to the goal of world domination in particular, which is what we seem especially concerned about in this context, putting von Neumann in charge hardly seems a recipe for success, but rather the opposite. As he reportedly said:
“If you say why not bomb them tomorrow, I say why not today? If you say today at five o’clock, I say why not one o’clock?”
To me, these do not seem to be the words of a brain optimized for taking over the world. If we want to look at such a brain, we should, by all appearances, rather peer into the skull of Putin or Trump (if it is indeed mainly their brain, rather than their looks, or perhaps a combination of many things, that brought them into power).
> One might argue that empirical evidence confirms the existence of a meaningful single measure of intelligence in the human case. I agree with this, but I think it’s a collection of modules that happen to correlate in humans for some reason that I don’t yet understand.
I think a good analogy is a country’s GDP. It’s a single, highly informative measure, yet a nation’s GDP is a function of countless things. This measure predicts a lot, too. Yet it clearly also leaves out a lot of information. More than that, we do not seem to fear that the GDP of a country (or a city, or indeed the whole world) will suddenly explode once it reaches a certain level. But why? (For the record, I think global GDP is a far better measure of a randomly selected human’s ability to achieve a wide variety of goals [of the kind we care about] than said person’s IQ is.)
> The “threshold” between chimps and humans just reflects the fact that all the tools, knowledge, etc. was tailored to humans (or maybe tailored to individuals with superior cognitive ability).
> So there’s a possible world full of lion-tailored tools where the lions are beating our asses all day?
Depending on the meaning of “lion-tailored tool”, it seems to me the answer could well be “yes”. In terms of the history of our evolution, for instance, it could well be that a lion tool in the form of, say, powerful armor would have meant that lions killed humans in high numbers rather than the other way around.
> Further down you acknowledge that the difference is “or maybe tailored to individuals with superior cognitive ability” – but what would it mean for a tool to be tailored to inferior cognitive ability? The whole point of cognitive ability is to be good at making the most out of tool-shaped parts of the environment.
I suspect David Pearce might say that that’s a parochially male thing to say. One could also say that the whole point of cognitive abilities is to make others feel good — a drive that has no doubt played a large role both in human survival and in the growth of our cognitive abilities and goal-achieving abilities in general, arguably a role just as great as that of “making the most out of tool-shaped parts of the environment”.
Second, I think the term “inferior cognitive ability” again overlooks that there are many dimensions along which we can measure cognitive abilities. Once again, take the mathematical genius who has bad social skills. How to best make tools — ranging from apps to statements to say to oneself — that improve the capabilities of such an individual seems likely to be different in significant ways from how to best make tools for someone who is, say, socially gifted and mathematically inept.
> Magnus takes the human vs. chimp analogy to mean that intelligence is largely “in the (tool-and-culture-rich) environment”.
I would delete the word “intelligence” and instead say that the ability to achieve goals is a product of a large set of tools, of which, in our case, the human brain is a necessary but, for virtually all of our purposes, insufficient subset.
Also, chimps display superior cognitive abilities to humans in some respects, so saying that humans are more “intelligent” than chimps, period, is, I think, misleading. The same holds true of our usual employment of the word “intelligence” in general, in my view.
> My view implies that quick AI takeover becomes more likely as society advances technologically. Intelligence would not be in the tools, but tools amplify how far you can get by being more intelligent than the competition (this might be mostly semantics, though).
First, it should be noted that “intelligence” here seems to mean “cognitive abilities”, not “the ability to achieve goals”. This distinction must be stressed. Second, as hinted above, I think the dichotomy between “intelligence” (i.e. cognitive ability) on the one hand and “tools” on the other is deeply problematic. I fail to see in what sense cognitive abilities are not themselves tools. (And by “cognitive abilities” I also mean the abilities of computer software.) And I think the causal arrows between the different tools that determine how things unfold are far more mutual than they are according to the story that “intelligence (some subset of cognitive tools) is that which will control all other tools”.
Third, for reasons alluded to above, I think the meaning of “being more intelligent than the competition” stands in need of clarification. It is far from obvious to me what it means. More cognitively able, presumably, but in what ways? What kinds of cognitive abilities are most relevant with respect to the task of taking over the world? And how might they be likely to be created? Relevant questions to clarify, it seems to me.
Some reasons not to think that “quick AI takeover becomes more likely as society advances technologically” include: that other agents would be more capable (closer to notional limits of “capabilities”) the more technologically advanced society is; that there would be more technology, already learned and mastered by others, for any would-be AI takeover to learn about and master; and finally that society will presumably know more about the limits and risks of technology, including AI, the more technologically advanced it is, and hence know more about what to expect and how to counter it. [Possibly common lament of AI2100: “Damn, if only I’d been around in the 1990s when people knew nothing while listening to Aqua.”]
This post was originally published at my old blog: http://magnusvinding.blogspot.dk/2017/07/response-to-conversation-on-intelligence.html