Narrative Self-Deception: The Ultimate Elephant in the Brain?

“the elephant in the brain, n. An important but unacknowledged feature of how our minds work; an introspective taboo.”

The Elephant in the Brain is an informative and well-written book, co-authored by Kevin Simler and Robin Hanson. It explains why much of our behavior is driven by unflattering, hidden motives, as well as why our minds are built to be unaware of these motives. In short: because a mind that is ignorant about what drives it and how it works is often more capable of achieving the aims it was built to achieve.

Beyond that, the book also seeks to apply this knowledge to shed light on many of our social institutions, showing that they are often not mostly about what we think they are. Rather than being about the high-minded ideals and other pretty things that we like to say they are about, our institutions often serve much less pretty, more status-driven purposes, such as showing off in various ways, as well as helping us get by in a tough world (for instance, the authors argue that religion in large part serves to bind communities together, and in this way can help bring about better life outcomes for believers).

All in all, I think The Elephant in the Brain provides a strong case for supplementing one’s mental toolkit with a new, important tool, namely to continuously ask: how might my mind skillfully be avoiding confrontation with ugly truths about myself that I would prefer not to face? And how might such unflattering truths explain aspects of our public institutions and public life in general?

This is an important lesson, I think, and it makes the book more than worth reading. At the same time, I cannot help but feel that the book ultimately falls short when it comes to putting this tool to proper use. For the main critique that came to my mind while reading the book was that it seemed to ignore the biggest elephant in the brain by far — the elephant I suspect we would all prefer to ignore the most — and hence it failed, in my view, to take a truly deep and courageous look at the human condition. In fact, the book even seemed to be a mouthpiece for this great elephant.

The great elephant I have in mind here is a tacitly embraced sentiment that goes something like: life is great, and we are accomplishing something worthwhile. As the authors write: “[…] life, for most of us, is pretty good.” (p. 11). And they end the book on a similar note:

In the end, our motives were less important than what we managed to achieve by them. We may be competitive social animals, self-interested and self-deceived, but we cooperated our way to the god-damned moon.

This seems to implicitly assume that what humans have managed to achieve, such as cooperating their way to the moon (i.e. two superpowers with nuclear weapons pointed at each other, competing), has been worthwhile all things considered. Might this, however, be a flippant elephant talking — rather than, say, a conclusion derived via a serious, scholarly analysis of our condition?

As a meta-observation, I would note that the fact that people often get offended and become defensive when one even just questions the value of our condition — and sometimes also accuse the one raising the question of having a mental illness — suggests that we may indeed be disturbing a great elephant here: something we would strongly prefer not to think too deeply about. (For the record, with respect to mental health, I think one can be among the happiest, most mentally healthy people on the planet and still think that a sober examination of the value of our condition yields a negative answer, although it may require some disciplined resistance against the pulls of a strong elephant.)

It is important to note here that one should not confuse the cynicism required for honest exploration of the human condition with misanthropy, as Simler and Hanson themselves are careful to point out:

The line between cynicism and misanthropy—between thinking ill of human motives and thinking ill of humans—is often blurry. So we want readers to understand that although we may often be skeptical of human motives, we love human beings. (Indeed, many of our best friends are human!) […] All in all, we doubt an honest exploration will detract much from our affection for [humans]. (p. 13)

Similarly, an honest and hard-nosed effort to assess the value of human life and the human endeavor need not lead us to have any less affection and compassion for humans. Indeed, it might lead us to have much more of both in many ways.

Is Life “Pretty Good”?

With respect to Simler and Hanson’s claim that “[…] life, for most of us, is pretty good”, one can dispute that this is indeed the case. According to the 2017 World Happiness Report, a significant plurality of people rated their life satisfaction at five on a scale from zero to ten, which arguably does not translate to being “pretty good”. Indeed, one can argue that the scale employed in this report is biased, in that it does not allow for a negative evaluation of life. And one may further argue that if this scale instead ranged from minus five to plus five (i.e. if one transposed this zero-to-ten scale so as to make it symmetrical around zero), it may be that a plurality would rate their lives at zero. That is, after all, where the plurality would lie if one were to make this transposition on the existing data measured along the zero-to-ten scale (although it seems likely that people would have rated their life satisfaction differently if the scale had been constructed in this symmetrical way).
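To make the transposition explicit, here is a minimal sketch in Python; the ratings used are purely illustrative placeholders, not the actual World Happiness Report data. The point is simply that shifting every rating down by five preserves the shape of the distribution, so a plurality at five on the zero-to-ten scale becomes a plurality at zero on the symmetrical scale.

```python
from collections import Counter

# Purely illustrative ratings on the report's zero-to-ten scale;
# NOT the actual World Happiness Report data.
ratings_0_to_10 = [3, 5, 5, 6, 5, 7, 4, 5, 8, 2, 5, 6]

# Transposing the scale to run from -5 to +5 simply shifts every
# rating down by five; the distribution's shape is unchanged.
ratings_symmetric = [r - 5 for r in ratings_0_to_10]

print(Counter(ratings_0_to_10).most_common(1))   # plurality at 5
print(Counter(ratings_symmetric).most_common(1)) # plurality at 0
```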

But even if we were to concede that most people say that their lives are pretty good, one can still reasonably question whether most people’s lives indeed are pretty good, and not least reasonably question whether such reports imply that the human condition is worthwhile in a broader sense.

Narrative Self-Deception: Is Life As Good As We Think?

Just as it is possible for us to be wrong about our own motives, as Simler and Hanson convincingly argue, could it be that we can also be wrong about how good our lives are? And, furthermore, could it be that we not only can be wrong but that most of us in fact are wrong about it most of the time? This is indeed what some philosophers argue, seemingly supported by psychological evidence.

One philosopher who has argued along these lines is Thomas Metzinger. In his essay “Suffering”, Metzinger reports on a pilot study he conducted in which students were asked at random times via their cell phones whether they would relive the experience they had just before their phone vibrated. On average, students reported that their experience was not worth reliving 72 percent of the time. Metzinger uses this data, which he admits does not count as significant, as a starting point for a discussion of how our overarching narrative about the quality of our lives might be out of touch with the reality of our felt, moment-to-moment experience:

If, on the finest introspective level of phenomenological granularity that is functionally available to it, a self-conscious system would discover too many negatively valenced moments, then this discovery might paralyse it and prevent it from procreating. If the human organism would not repeat most individual conscious moments if it had any choice, then the logic of psychological evolution mandates concealment of the fact from the self-modelling system caught on the hedonic treadmill. It would be an advantage if insights into the deep structure of its own mind – insights of the type just sketched – were not reflected in its conscious self-model too strongly, and if it suffered from a robust version of optimism bias. Perhaps it is exactly the main function of the human self-model’s higher levels to drive the organism continuously forward, to generate a functionally adequate form of self-deception glossing over everyday life’s ugly details by developing a grandiose and unrealistically optimistic inner story – a “narrative self-model” with which we can identify? (pp. 6-7)

Metzinger continues to conjecture that we might be subject to what he calls “narrative self-deception” — a self-distracting strategy that keeps us from getting a realistic view of the quality and prospects of our lives:

[…] a strategy of flexible, dynamic self-representation across a hierarchy of timescales could have a causal effect in continuously remotivating the self-conscious organism, systematically distracting it from the potential insight that the life of an anti-entropic system is one big uphill battle, a strenuous affair with minimal prospect of enduring success. Let us call this speculative hypothesis “narrative self-deception”. (p. 7)

If this holds true, such self-deception would seem to more than satisfy the definition of an elephant in the brain in Simler and Hanson’s sense: “an important but unacknowledged feature of how our minds work; an introspective taboo.”

To paraphrase Metzinger: the mere fact that we find life to be “pretty good” when we evaluate it all from the vantage point of a single moment does not mean that we in fact find most of our experiences “pretty good”, or indeed even worth (re)living most of the time, moment-to-moment. Our single-moment evaluations of the quality of the whole thing may well tend to be gross, self-deceived overestimates.

Another philosopher who makes a similar case is David Benatar, who in his book Better Never to Have Been argues that we tend to overestimate the quality of our lives due to well-documented psychological biases:

The first, most general and most influential of these psychological phenomena is what some have called the Pollyanna Principle, a tendency towards optimism. This manifests in many ways. First, there is an inclination to recall positive rather than negative experiences. For example, when asked to recall events from throughout their lives, subjects in a number of studies listed a much greater number of positive than negative experiences. This selective recall distorts our judgement of how well our lives have gone so far. It is not only assessments of our past that are biased, but also our projections or expectations about the future. We tend to have an exaggerated view of how good things will be. The Pollyannaism typical of recall and projection is also characteristic of subjective judgements about current and overall well-being. Many studies have consistently shown that self-assessments of well-being are markedly skewed toward the positive end of the spectrum. […] Indeed, most people believe that they are better off than most others or than the average person. (pp. 64-66)

Is “Pretty Good” Good Enough?

Beyond doubting whether most people would indeed say that their lives are “pretty good”, and beyond doubting that a single moment’s assessment of one’s quality of life actually reflects this quality particularly well, one can also question whether a life that is rated as “pretty good”, even in the vast majority of moments, is indeed good enough.

This is, for example, not necessarily the case on the so-called tranquilist view of value, according to which our experiences are valuable to the extent that they are free of suffering, and hence happiness and pleasure are valuable only to the extent that they chase suffering away.

Similar to Metzinger’s point about narrative self-deception, one can argue that, if the tranquilist view holds true of how we feel the value of our experience moment-to-moment (upon closer, introspective inspection), we should probably expect to be quite blind to this fact. And it is interesting to note in this context that many of the traditions that have placed the greatest emphasis on paying attention to the nature of subjective experience moment-to-moment, such as Buddhism, have converged on a view very similar to tranquilism.

Can the Good Lives Outweigh the Bad?

One can also question the value of our condition on a more collective level, by focusing not only on a single (self-reportedly) “pretty good” life but on all individual lives. In particular, we can question whether the good lives of some, indeed even a large majority, can justify the miserable lives of others.

A story that gives many people pause on this question is Ursula K. Le Guin’s The Ones Who Walk Away from Omelas. The story is about a near-paradisiacal city in which everyone lives deeply meaningful and fulfilling lives — that is, everyone except a single child who is locked in a basement room, forced to live a life of squalor:

The child used to scream for help at night, and cry a good deal, but now it only makes a kind of whining, “eh-haa, eh-haa,” and it speaks less and less often. It is so thin there are no calves to its legs; its belly protrudes; it lives on a half-bowl of corn meal and grease a day. It is naked. Its buttocks and thighs are a mass of festered sores, as it sits in its own excrement continually.

The story’s premise is that this child must exist in this condition for the happy people of Omelas to enjoy their wonderful lives, which then raises the question of whether these wonderful lives can in any sense outweigh and justify the miserable life of this single child. Some citizens of Omelas seem to decide that this is not the case: the ones who walk away from Omelas. And many people in the real world seem to agree with this decision.

Sadly, our world is much worse than the city of Omelas on every measure. For example, in the World Happiness Report cited above, around 200 million people reported their quality of life to be in the absolute worst category. If the story of Omelas gives us pause, we should also think twice before claiming that the “pretty good” lives of some people can outweigh the self-reportedly very bad lives of these hundreds of millions of people, many of whom end up committing suicide (and again, it should be remembered that a great plurality of humanity rated their life satisfaction to be exactly in the middle of the scale, while a significant majority rated it in the middle or lower).

Rating of general life satisfaction aside, one can also reasonably question whether anything can outweigh the many instances of extreme suffering that occur every single day, something that can indeed befall anyone, regardless of one’s past self-reported life satisfaction.

Beyond that, one can also question whether the “pretty good” lives of some humans can in any sense outweigh and justify the enormous amount of suffering humanity imposes on non-human animals, including the torturous suffering we subject more than a trillion fish to each year, as well as the suffering we impose upon the tens of billions of chickens and turkeys who live out their lives under the horrific conditions of factory farming, many of whom end their lives by being boiled alive. Indeed, there is no justification for not taking humanity’s impact on non-human animals — the vast majority of sentient beings on the planet — into consideration as well when assessing the value of our condition.

 

My main purpose in this essay has not been to draw any conclusions about the value of our condition. Rather, my aim has merely been to argue that we likely have an enormous elephant in our brain that causes us to evaluate our lives, individually as well as collectively, in overoptimistic terms (though some of us perhaps do not), and to ignore the many considerations that might suggest a negative conclusion. An elephant that leads us to eagerly assume that “it’s all pretty good and worthwhile”, and to flinch away from serious, sober-minded engagement with questions concerning the value of our condition, including whether it would be better if there had been no sentient beings at all.

Moral Circle Expansion Might Increase Future Suffering

Expanding humanity’s moral circle so that it includes all sentient beings seems among the most urgent and important missions before us. And yet there is a significant risk that such greater moral inclusion might in fact end up increasing future suffering. As Brian Tomasik notes:

One might ask, “Why not just promote broader circles of compassion, without a focus on suffering?” The answer is that more compassion by itself could increase suffering. For example, most people who care about wild animals in a general sense conclude that wildlife habitats should be preserved, in part because these people aren’t focused enough on the suffering that wild animals endure. Likewise, generically caring about future digital sentience might encourage people to create as many happy digital minds as possible, even if this means also increasing the risk of digital suffering due to colonizing space. Placing special emphasis on reducing suffering is crucial for taking the right stance on many of these issues.

Indeed, many classical utilitarians do include non-human animals in their moral circle, yet they still consider it permissible, indeed in some sense morally required of us, that we bring individuals into existence so that they can live “net positive lives” and we can eat them (I have argued that this view is mistaken, almost regardless of what kind of utilitarian view one assumes). And some even seem to think that most lives on factory farms might plausibly be such “net positive lives”. A wide circle of moral consideration clearly does not guarantee an unwillingness to allow large amounts of suffering to be brought into the world.

More generally, there is a considerable number of widely subscribed ethical positions that favor bringing about larger rather than smaller populations of the beings who belong to our moral circle, at least provided that certain conditions are met in the lives of these beings. And many of these ethical positions have quite loose such conditions, which implies that they can easily permit, and even demand, the creation of a lot of suffering for the sake of some (supposedly) greater good.

Indeed, the truth is that even if we require an enormous amount of happiness (or an enormous amount of other intrinsically good things) to outweigh a given amount of suffering, this can still easily permit the creation of large amounts of suffering, as illustrated by the following consideration (quoted from the penultimate chapter of my book on effective altruism):

[…] consider the practical implications of the following two moral principles: 1) we will not allow the creation of a single instance of the worst forms of suffering […] for any amount of happiness, and 2) we will allow one day of such suffering for ten years of the most sublime happiness. What kind of future would we accept with these respective principles? Imagine a future in which we colonize space and maximize the number of sentient beings that the accessible universe can sustain over the entire course of the future, which is probably more than 10^30. Given this number of beings, and assuming these beings each live a hundred years, principle 2) above would appear to permit a space colonization that all in all creates more than 10^28 years of [extreme suffering], provided that the other states of experience are sublimely happy. This is how extreme the difference can be between principles like 1) and 2); between whether we consider suffering irredeemable or not. And notice that even if we altered the exchange rate by orders of magnitude — say, by requiring 10^15 times more sublime happiness per unit of extreme suffering than we did in principle 2) above — we would still allow an enormous amount of extreme suffering to be created; in the concrete case of requiring 10^15 times more happiness, we would allow more than 10,000 billion years of [the worst forms of suffering].
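For readers who wish to check the quoted figures, the following is a minimal sketch of the arithmetic in Python, under the assumptions stated in the quote (10^30 beings, each living a hundred years, with one day of extreme suffering exchanged for ten years of sublime happiness in the first case, or 10^16 years in the second). The precise bookkeeping is merely one illustrative reconstruction.

```python
# A minimal sketch of the arithmetic behind the quoted figures,
# under the stated assumptions; not a prediction about the future.

DAYS_PER_YEAR = 365

beings = 10**30                    # assumed number of sentient beings
years_each = 100                   # assumed lifespan per being
total_years = beings * years_each  # 10^32 being-years in total

def allowed_suffering_years(happy_years_per_suffering_day):
    """Years of extreme suffering permitted when one day of such
    suffering is exchanged for the given number of years of sublime
    happiness, spread over the total being-years above."""
    days_per_block = happy_years_per_suffering_day * DAYS_PER_YEAR + 1
    suffering_days = total_years * DAYS_PER_YEAR / days_per_block
    return suffering_days / DAYS_PER_YEAR

print(f"{allowed_suffering_years(10):.1e} years")      # ~2.7e28, i.e. more than 10^28
print(f"{allowed_suffering_years(10**16):.1e} years")  # ~2.7e13, i.e. more than 10,000 billion
```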

This highlights the importance of thinking deeply about which trade-offs, if any, we find acceptable with respect to the creation of suffering, including extreme suffering.

The considerations above concerning popular ethical positions that support larger future populations imply that there is a risk — a seemingly low yet still significant risk — that a more narrow moral circle may in fact lead to less future suffering for the morally excluded beings (e.g. by making efforts to bring these beings into existence, on Earth and beyond, less likely).

Implications

In spite of this risk, I still consider generic moral circle expansion quite positive in expectation. Yet it seems less positive, and arguably significantly less robust (with respect to the goal of reducing extreme suffering), than the promotion of suffering-focused values. And it seems less robust and less positive still than the twin-track strategy of focusing on both expanding our moral circle and deepening our concern for suffering. Both seem necessary yet insufficient on their own. If we deepen concern for suffering without broadening the moral circle, our deepened concern risks failing to pertain to the vast majority of sentient beings. On the other hand, if we broaden our moral circle without deepening our concern for suffering, we may end up allowing the beings within our moral circle to endure enormous amounts of suffering, including extreme suffering.

Those who seek to minimize extreme suffering should seek to avoid both these pitfalls by pursuing the twin-track approach.

The Principle of Sympathy for Intense Suffering

This essay was first published as a chapter in my book Effective Altruism: How Can We Best Help Others? which is available for free download here. The chapter that precedes it makes a general case for suffering-focused ethics, whereas this chapter argues for a particular suffering-focused view.


The ethical view I would advocate most strongly is a suffering-focused view that centers on a core principle of Sympathy for Intense Suffering, or SIS for short, which roughly holds that we should prioritize the interests of those who are, or will be, in a state of extreme suffering. In particular: that we should prioritize their interest in avoiding such suffering higher than anything else.[1]

One can say that this view takes its point of departure in classical utilitarianism, the theory that we should maximize the net sum of happiness minus suffering. Yet it questions a tacit assumption, a particular existence claim, often held in conjunction with the classical utilitarian framework, namely that for every instance of suffering, there exists some amount of happiness that can outweigh it.

This is a deeply problematic assumption, in my view. More than that, it is peculiar that classical utilitarianism seems widely believed to entail this assumption, given that (to my knowledge) none of the seminal classical utilitarians — Jeremy Bentham, John Stuart Mill, and Henry Sidgwick — ever argued for this existence claim, or even discussed it.[2] Thus, it seems that the acceptance of this assumption is no more entailed by classical utilitarianism, defined as the ethical view, or views, expressed by these utilitarian philosophers, than is its rejection.

The question of whether this assumption is reasonable ties into a deeper discussion about how to measure and weigh happiness and suffering against each other, and I think this is much less well-defined than is commonly supposed (even though the trickiness of the task is often acknowledged).[3] The problem is that we have a common sense view that goes something like the following: if a conscious subject deems some state of suffering worth experiencing in order to attain some given pleasure, then this pleasure is worth the suffering. And this common sense view may work for most of us most of the time.[4] Yet it runs into problems in cases where the subject deems their suffering so unbearable that no amount of happiness could ever outweigh it.

For what would the common sense view say in such a situation? That the suffering indeed cannot be outweighed by any pleasure? That would seem an intuitive suggestion, yet the problem is that we can also imagine the case of an experience of some pleasure that the subject, in that experience-moment, deems so great that it can outweigh even the worst forms of suffering, which leaves us with mutually incompatible value claims (although it is worth noting that one can reasonably doubt the existence of such positive states, whereas, as we shall see below, the existence of correspondingly negative experiences is a certainty).[5] How are we to evaluate these claims?

The aforementioned common sense method of evaluation has clearly broken down at this point, and is entirely silent on the matter. We are forced to appeal to another principle of evaluation. And the principle I would argue we should employ is, as hinted above, to choose to sympathize with those who are worst off — those who are experiencing intense suffering. Hence the principle of sympathy for intense suffering: we should sympathize with, and prioritize, the evaluations of those subjects who deem their suffering unoutweighable, even if only for a brief experience-moment, and thus give total priority to helping these subjects. More precisely, we should minimize the amount of such experience-moments of extreme suffering.[6] That, on this account of value, is the greatest help we can do for others.

This principle actually seems to have a lot of support from common sense and “common wisdom”. For example, imagine that two children are offered a ride on a roller coaster, one of whom would find the ride very pleasant, while the other would find it very unpleasant, and imagine, furthermore, that the only two options available are that either both ride or neither rides (and if neither rides, they are both perfectly fine).[7] Whose interests should we sympathize with and favor? Common sense would appear to favor the child who would not want to take the ride. The mere pleasure of the “ride-positive” child does not justify a violation of the interest of the other child not to suffer a very unpleasant experience. The interest in not enduring such suffering seems far more fundamental, and hence to have ethical primacy, compared to the relatively trivial and frivolous interest of having a very pleasant experience.[8]

Arguably, common sense even suggests the same in the case where there are many more children who would find the ride very pleasant, while still only one child who would find it very unpleasant (provided, again, that the children will all be perfectly fine if they do not ride). Indeed, I believe a significant fraction of people would say the same no matter how many such “ride-positive” children we put on the scale: it would still be wrong to give them the ride at the cost of forcing the “ride-negative” child to undergo the very unpleasant experience.[9]

And yet the suffering in this example — a very unpleasant experience on a roller coaster — can hardly be said to count as remotely extreme, much less an instance of the worst forms of suffering; the forms of suffering that constitute the strongest, and in my view overwhelming, case for the principle of sympathy for intense suffering. Such intense suffering, even if balanced against the most intense forms of pleasure imaginable, only demands even stronger relative sympathy and priority. However bad we may consider the imposition of a very unpleasant experience for the sake of a very pleasant one, the imposition of extreme suffering for the sake of extreme pleasure must be deemed far worse.

The Horrendous Support for SIS

The worst forms of suffering are so terrible that merely thinking about them for a brief moment can leave the average sympathetic person in a state of horror and darkness for a good while, and therefore, quite naturally, we strongly prefer not to contemplate these things. Yet if we are to make sure that we have our priorities right, and that our views about what matters most in this world are as well-considered as possible, then we cannot shy away from the task of contemplating and trying to appreciate the disvalue of these worst of horrors. This is no easy task, and not just because we are reluctant to think about the issue in the first place, but also because it is difficult to gain anything close to a true appreciation of the reality in question. As David Pearce put it:

It’s easy to convince oneself that things can’t really be that bad, that the horror invoked is being overblown, that what is going on elsewhere in space-time is somehow less real than this here-and-now, or that the good in the world somehow offsets the bad. Yet however vividly one thinks one can imagine what agony, torture or suicidal despair must be like, the reality is inconceivably worse. Hazy images of Orwell’s ‘Room 101’ barely hint at what I’m talking about. The force of ‘inconceivably’ is itself largely inconceivable here.[10]

Nonetheless, we can still gain at least some, admittedly rather limited, appreciation by considering some real-world examples of extreme suffering (what follows are examples of an extremely unpleasant character that may be triggering and traumatizing).

One such example is the tragic fate of the Japanese girl Junko Furuta who was kidnapped in 1988, at the age of 16, by four teenage boys. According to their own trial statements, the boys raped her hundreds of times; “inserted foreign objects, such as iron bars, scissors and skewers into her vagina and anus, rendering her unable to defecate and urinate properly”; “beat her several times with golf clubs, bamboo sticks and iron rods”; “used her as a punching bag by hanging her body from the ceiling”; “dropped barbells onto her stomach several times”; “set fireworks into her anus, vagina, mouth and ear”; “burnt her vagina and clitoris with cigarettes and lighters”; “tore off her left nipple with pliers”; and more. Eventually, she was no longer able to move from the ground, and she repeatedly begged the boys to kill her, which they eventually did, after 44 days.[11]

An example of extreme suffering that is much more common, indeed something that happens countless times every single day, is being eaten alive, a process that can sometimes last several hours with the victim still fully conscious of being devoured, muscle by muscle, organ by organ. A harrowing example of such a death that was caught on camera (see the following note) involved a baboon tearing apart the hind legs of a baby gazelle and eating this poor individual who remained conscious for longer than one would have thought and hoped possible.[12] A few minutes of a much more protracted such painful and horrifying death can be seen via the link in the following note (lions eating a baby elephant alive).[13] And a similar, yet quicker death of a man can be seen via the link in the following note.[14] Tragically, the man’s wife and two children were sitting in a car next to him while it happened, yet they were unable to help him, and knowing this probably made the man’s experience even more horrible, which ties into a point made by Simon Knutsson:

Sometimes when the badness or moral importance of torture is discussed, it is described in terms of different stimuli that cause tissue damage, such as burning, cutting or stretching. But one should also remember different ways to make someone feel bad, and different kinds of bad feelings, which can be combined to make one’s overall experience even more terrible. It is arguably the overall unpleasantness of one’s experience that matters most in this context.[15]

After giving a real-world example with several layers of extreme cruelty and suffering combined, Knutsson goes on to write:

Although this example is terrible, one can imagine how it could be worse if more types of violence and bad feelings were added to the mix. To take another example: [Brian] Tomasik often talks about the Brazen bull as a particularly bad form of torture. The victim is locked inside a metal bull, a fire is lit underneath the bull and the victim is fried to death. It is easy to imagine how this can be made worse. For example, inject the victim with chemicals that amplify pain and block the body’s natural pain inhibitors, and put her loved ones in the bull so that when she is being fried, she also sees her loved ones being fried. One can imagine further combinations that make it even worse. Talking only of stimuli such as burning almost trivializes how bad experiences can be.[16]

Another example of extreme suffering is what happened to Dax Cowart. In 1973, at the age of 25, Dax went on a trip with his father to visit land that he considered buying. Unfortunately, due to a pipeline leak, the air over the land was filled with propane gas, which is highly flammable when combined with oxygen. As they started their car, the propane ignited, and the two men found themselves in a burning inferno. Dax’s father died, and Dax himself had much of his hands, eyes, and ears burned away; two thirds of his skin was severely burned.[17]

The case of Dax has since become quite famous, not only, or even mainly, because of the extreme horror he experienced during this explosion, but because of the ethical issues raised by his treatment, which turned out to be about as torturous as the explosion itself. For Dax himself repeatedly said, immediately after the explosion as well as for months later, that he wanted to die more than anything else, and that he did not want to be subjected to any treatment that would keep him alive. Nonetheless, he was forcibly treated for a period of ten months, during which he tried to take his life several times.
Since then, Dax has managed to recover and live what he considers a happy life — he successfully sued the oil company responsible for the pipeline leak, which left him financially secure; he earned a law degree; and got married. Yet even so, he still wishes that he had been killed rather than treated. In Dax’s own view, no happiness could ever compensate for what he went through.[18]

This kind of evaluation is exactly what the ethical principle advocated here centers on, and what the principle amounts to is simply a refusal to claim that Dax’s evaluation, or any other like it, is wrong. It maintains that we should not allow the occurrence of such extreme horrors for the sake of any intrinsic good, and hence that we should prioritize alleviating and preventing them over anything else.[19]

One may object that the examples above do not all comprise clear cases where the suffering subject deems their suffering so bad that nothing could ever outweigh it. And more generally, one may object that there can exist intense suffering that is not necessarily deemed so bad that nothing could outweigh it, either because the subject is not able to make such an evaluation, or because the subject just chooses not to evaluate it that way. What would the principle of sympathy for intense suffering say about such cases? It would say the following: in cases where the suffering is intense, yet the sufferers choose not to deem it so bad that nothing could outweigh it (we may call this “red suffering”), we should prioritize reducing suffering of the kind that would be deemed unoutweighable (what we may call “black suffering”). And in cases where the sufferers cannot make such evaluations, we may say that suffering at a level of intensity comparable to the suffering deemed unoutweighable by subjects who can make such evaluations should also be considered unoutweighable, and its prevention should be prioritized over all less intense forms of suffering.

Yet this is, of course, all rather theoretical. In practice, even when subjects do have the ability to evaluate their experience, we will, as outside observers, usually not be able to know what their evaluation is — for instance, how someone who is burning alive might evaluate their experience. In practice, all we can do is make informed assessments of what counts as suffering so intense that such an evaluation of unoutweighability would likely be made by the sufferer, assuming an idealized situation where the sufferer is able to evaluate the disvalue of the experience.[20]

 

I shall spare the reader from further examples of extreme suffering here in the text, and instead refer to sources, found in the following note, that contain additional cases that are worth considering in order to gain a greater appreciation of extreme suffering and its disvalue.[21] And the crucial question we must ask ourselves in relation to these examples — which, as hinted by the quote above by Knutsson, are probably far from the worst possible manifestations of suffering — is whether the creation of happiness or any other intrinsic good could ever justify the creation, or the failure to prevent, suffering this bad and worse. If not, this implies that our priority should not be to create happiness or other intrinsic goods, but instead to prevent extreme suffering of this kind above anything else, regardless of where in time and space it may risk emerging.

Objections to SIS

Among the objections to this view that I can think of, the strongest, at least at first sight, is the sentiment: but what about that which is most precious in your life? What about the person who is most dear to you? If anything stands a chance of outweighing the disvalue of extreme suffering, surely this is it. In more specific terms: does it not seem plausible to claim that, say, saving the most precious person in one’s life could be worth an instance of the very worst form of suffering?

Yet one has to be careful about how this question is construed. If what we mean by “saving” is that we save them from extreme suffering, then we are measuring extreme suffering against extreme suffering, and hence we have not pointed to a rival candidate for outweighing the superlative disvalue of extreme suffering. Therefore, if we are to point to such a candidate, “saving” must here mean something that does not itself involve extreme suffering, and, if we wish to claim that there is something wholly different from the reduction of suffering that can be put on the scale, it should preferably involve no suffering at all. So the choice we should consider is rather one between 1) the mixed bargain of an instance of the very worst form of suffering, i.e. black suffering, and the continued existence of the most precious person one knows, or 2) the painless discontinuation of the existence of this person, yet without any ensuing suffering for others or oneself.

Now, when phrased in this way, choosing 1) may not sound all that bad to us, especially if we do not know the one who will suffer. Yet this would be cheating — nothing but an appeal to our faulty and all too partial moral intuitions. It clearly betrays the principle of impartiality,[22] according to which it should not matter whom the suffering in question is imposed upon; it should be considered equally disvaluable regardless.[23] Thus, we may equivalently phrase the choice above as being between 1) the continued existence of the most precious person one knows of, yet at the price that this being has to experience a state of extreme suffering, a state this person deems so bad that, according to them, it could never be outweighed by any intrinsic good, or 2) the discontinuation of the existence of this being without any ensuing suffering. When phrased in this way, it actually seems clearer to me than ever that 2) is the superior choice, and that we should adopt the principle of sympathy for intense suffering as our highest ethical principle. For how could one possibly justify imposing such extreme, and in the mind of the subject unoutweighable, suffering upon the most precious person one knows, suffering that this person would, at least in that moment, rather die than continue to experience? In this way, for me at least, it is no overstatement to say that this objection against the principle of sympathy for intense suffering, when considered more carefully, actually ends up being one of the strongest cases for it.

Another seemingly compelling objection would be to question whether an arbitrarily long duration of intense, yet, according to the subject, not unoutweighable suffering, i.e. red suffering, is really less bad than even just a split second of suffering that is deemed unoutweighable, i.e. black suffering. Counter-intuitively, my response, at least in this theoretical case, would be to bite the bullet and say “yes”. After all, if we take the subject’s own reports as the highest arbiter of the (dis)value of experiential states, then the black suffering cannot be outweighed by anything, whereas the red suffering can. Also, it should be noted that this thought experiment likely conflicts with quite a few sensible, real-world intuitions we have. For instance, in the real world, it seems highly likely that a subject who experiences extreme suffering for a long time will eventually find it unbearable, and say that nothing can outweigh it, contrary to the hypothetical case we are considering. Another such confounding real-world intuition might be one that reminds us that most things in the real world tend to fluctuate in some way, and hence, intuitively, it seems like there is a significant risk that a person who endures red suffering for a long time will also experience black suffering (again contrary to the actual conditions of the thought experiment), and perhaps even experience a lot of it, in which case this indeed is worse than a mere split second of black suffering on any account.

Partly for this latter reason, my response would also be different in practice. For again, in the real world, we are never able to determine the full consequences of our actions, nor are we usually able to determine from the outside whether someone is experiencing red or black suffering, which implies that we have to take uncertainty and risks into account. This is also because, even if we knew that a subject deemed some state of suffering “merely” red at one point, this would not imply that their suffering at other moments where they appear to be in a similar state will also be deemed red as opposed to black. For in the real world it is indeed to be expected that significant fluctuations will occur, and that “the same suffering”, in one sense at least, will come to feel worse over time. Indeed, if the suffering is extreme, it will all but surely be deemed unbearable eventually.

Thus, in the real world, any large amount of extreme suffering is likely to include black suffering too, and therefore, regardless of whether we think some black suffering is worse than any amount of red suffering, the only reasonable thing to do in practice is to avoid getting near the abyss altogether.

Bias Alert: We Prefer to Not Think About Extreme Suffering

As noted above, merely thinking about extreme suffering can evoke unpleasant feelings that we naturally prefer to avoid. And this is significant for at least two reasons. First, it suggests that thinking deeply about extreme suffering might put our mental health at risk, and hence that we have good reason, and a strong personal incentive, to avoid engaging in such deeper thinking. Second, in part for this first reason, it suggests that we are biased against thinking deeply about extreme suffering, and hence biased against properly appreciating the true horror and disvalue of such suffering. Somewhat paradoxically, (the mere thought of) the horror of extreme suffering keeps us from fully appreciating the true scope of this horror. And this latter consideration is significant in the context of trying to fairly evaluate the plausibility of views that say we should give special priority to such suffering, including the view presented above.

Indeed, one can readily tell a rather plausible story about how many of the well-documented biases we reviewed previously might conspire to produce such a bias against appreciating the horror of suffering.[24] For one, we have wishful thinking, our tendency to believe as true what we wish were true, which in this case likely pulls us toward the belief that it can’t be that bad, and that, surely, there must be something of greater value, some grander quest worth pursuing in this world than the mere negative, rather anti-climactic “journey” of alleviating and preventing extreme suffering. Like most inhabitants of Omelas, we wishfully avoid giving much thought to the bad parts, and instead focus on all the good — although our sin is, of course, much greater than theirs, as the bad parts in the real world are indescribably worse on every metric, including total amount, relative proportions, and intensity.

To defend this wishfully established view, we then have our confirmation bias. We comfortably believe that it cannot really be that bad, and so in perfect confirmation bias textbook-style, we shy away from and ignore data that might suggest otherwise. We choose not to look at the horrible real-world examples that might change our minds, and to not think too deeply about the arguments that challenge our merry conceptions of value and ethics. All of this for extremely good reasons, of course. Or at least so we tell ourselves.[25]

Next, we have groupthink and, more generally, our tendency to conform to our peers. Others do not seem to believe that extreme suffering is that horrible, or that reducing it should be our supreme goal, and thus our bias to conform smoothly points us in the same direction as our wishful thinking and confirmation bias. That direction being: “Come on, lighten up! Extreme suffering is probably not that bad, and it probably can be outweighed somehow. This is what I want to believe, it is what my own established and comfortable belief says, and it is what virtually all my peers seem to believe. Why in the world, then, would I believe anything else?”

Telling such a story of bias might be considered an unfair move, a crude exercise in pointing fingers at others and exclaiming “You’re just biased!”, and admittedly it is to some extent. Nonetheless, I think two things are worth noting in response to such a sentiment. First, rather than having its origin in finger pointing at others, the source of this story is really autobiographical: it is a fair characterization of how my own mind managed to repudiate the immense horror and primacy of extreme suffering for a long time. And merely combining this with the belief that I am not a special case then tentatively suggests that a similar story might well apply to the minds of others too.

Second, it should be noted that a similar story cannot readily be told in the opposite direction — about the values defended here. In terms of wishful thinking, it is not particularly wishful or feel-good to say that extreme suffering is immensely bad, and that there is nothing of greater value in the world than to prevent it. That is not a pretty or satisfying story for anyone. The view also seems difficult to explain via an appeal to confirmation bias, since many of those who hold this view of extreme suffering, including myself, did not hold it from the outset, but instead changed their minds toward it upon considering arguments and real-world examples that support it. The same holds true of our tendency to conform to our peers. For although virtually nobody appears to seriously doubt that suffering has disvalue, the view that nothing could be more important than preventing extreme suffering does not seem widely held, much less widely expressed. It lies far from the narrative about the ultimate mission and future purpose of humanity that prevails in most circles, which runs more along the lines of “Surely it must all be worth it somehow, right?”

This last consideration about how we stand in relation to our peers is perhaps especially significant. For the truth is that we are a signalling species: we like to appear cool and impressive.[26] And to express the view that nothing matters more than the prevention of extreme suffering seems a most unpromising way of doing so. It has a strong air of darkness and depression about it, and, worst of all, it is not a signal of strength and success, which is perhaps what we are driven the most to signal to others, prospective friends and mates alike. Such success signalling is not best done with darkness, but with light: by exuding happiness, joy, and positivity. This is the image of ourselves, including our worldview, that we are naturally inclined to project, which then ties into the remark made above — that this view does not seem widely held, “much less widely expressed”. For even if we are inclined to hold this view, we appear motivated to not express it, lest we appear like a sad loser.

 

In sum, by my lights, effective altruism proper is equivalent to effectively reducing extreme suffering. This, I would argue, is the highest meaning of “improving the world” and “benefiting others”, and hence what should be considered the ultimate goal of effective altruism. The principle of sympathy for intense suffering argued for here stems neither from depression, nor resentment, nor hatred. Rather, it simply stems, as the name implies, from a deep sympathy for intense suffering.[27] It stems from a firm choice to side with the evaluations of those who are superlatively worst off, and from this choice follows a principled unwillingness to allow the creation of such suffering for the sake of any amount of happiness or any other intrinsic good. And while it is true that this principle has the implication that it would have been better if the world had never existed, I think the fault here is to be found in the world, not the principle.

Most tragically, some pockets of the universe are in a state of insufferable darkness — a state of black suffering. In my view, such suffering is like a black hole that sucks all light out of the world. Or rather: the intrinsic value of all the light of the world pales in comparison to the disvalue of this darkness. Yet, by extension, this also implies that there is a form of light whose value does compare to this darkness, and that is the kind of light we should aspire to become, namely the light that brightens and prevents this darkness.[28] We shall delve into how this can best be done shortly, but first we shall delve into another issue: our indefensibly anthropocentric take on altruism and “philanthropy”.


 

(For the full bibliography, see the end of my book.)

[1] This view is similar to what Brian Tomasik calls consent-based negative utilitarianism: http://reducing-suffering.org/happiness-suffering-symmetric/#Consent-based_negative_utilitarianism
And the Organisation for the Prevention of Intense Suffering (OPIS) appears founded upon a virtually identical principle: http://www.preventsuffering.org/
I do not claim that this view is original; merely that it is important.

[2] And I have read them all, though admittedly not their complete works. Bentham can seem to come close in chapter 4 of his Principles of Morals and Legislation, where he outlines a method for measuring pain and pleasure. One of the steps of this method consists in summing up the values of “[…] all the pleasures on one side and of all the pains on the other.” And later he writes of this process that it is “[…] applicable to pleasure and pain in whatever form they appear […]”. Yet he does not write that the sum will necessarily be finite, nor, more specifically, whether every instance of suffering necessarily can be outweighed by some pleasure. I suspect Bentham, as well as Mill and Sidgwick, never contemplated this question in the first place.

[3] A recommendable essay on the issue is Simon Knutsson’s “Measuring Happiness and Suffering”: https://foundational-research.org/measuring-happiness-and-suffering/

[4] However, a defender of tranquilism would, of course, question whether we are indeed talking about a pleasure outweighing some suffering rather than it, upon closer examination, really being a case of a reduction of some form of suffering outweighing some other form of suffering.

[5] And therefore, if one assumes a framework of so-called moral uncertainty, it seems that one should assign much greater plausibility to negative value lexicality than to positive value lexicality (cf. https://foundational-research.org/value-lexicality/), also in light of the point made in the previous chapter that many have doubted the positive value of happiness (as being due to anything but its absence of suffering), whereas virtually nobody has seriously doubted the disvalue of suffering.

[6] But what if there are several levels of extreme suffering, where an experience on each level is deemed so bad that no amount of experiences on a lower level could outweigh it? This is a tricky issue, yet to the extent that these levels of badness are ordered such that, say, no amount of level I suffering can outweigh a single instance of level II suffering (according to a subject who has experienced both), then I would argue that we should give priority to reducing level II suffering. Yet what if level I suffering is found to be worse than level II suffering in the moment of experiencing it, while level II suffering is found to be worse than level I suffering when it is experienced? One may then say that the evaluation should be up to some third experience-moment with memory of both states, and that we should trust such an evaluation, or, if this is not possible, we may view both forms of suffering as equally bad. Whether such dilemmas arise in the real world, and how to best resolve them in case they do, stands to me as an open question.
Thus, echoing the point about the lack of clarity and specification of values we saw two chapters ago, the framework I present here is not only not perfectly specific, as it surely cannot be; it is admittedly quite far from it. Nonetheless, it still comprises a significant step in the direction of carving out a clearer set of values, much clearer than the core value of, say, “reducing suffering”.

[7] A similar example is often used by the suffering-focused advocate Inmendham.

[8] This is, of course, essentially the same claim we saw a case for in the previous chapter: that creating happiness at the cost of suffering is wrong. The principle advocated here may be considered a special case of this claim, namely the special case where the suffering in question is deemed irredeemably bad by the subject.

[9] Cf. the gut feeling many people seem to have that the scenario described in The Ones Who Walk Away from Omelas should not be brought into the world regardless of how big the city of Omelas would be. Weak support for this claim is also found in the following survey, in which a plurality of people said that they think future civilization should strive to minimize suffering (over, for instance, maximizing positive experiences): https://futureoflife.org/superintelligence-survey/

[10] https://www.hedweb.com/negutil.htm
A personal anecdote of mine in support of Pearce’s quote is that I tend to write and talk a lot about reducing suffering, and yet I am always unpleasantly surprised by how bad it is when I experience even just borderline intense suffering. I then always get the sense that I have absolutely no idea what I am talking about when I am talking about suffering in my usual happy state, although the words I use in that state are quite accurate: that it is really bad. In those bad states I realize that it is far worse than we tend to think, even when we think it is really, really bad. It truly is inconceivable, as Pearce writes, since we simply cannot simulate that badness in a remotely faithful way when we are feeling good, quite analogously to the phenomenon of binocular rivalry, where we can only perceive one of two visual images at a time.

[11] https://ripeace.wordpress.com/2012/09/14/the-murder-of-junko-furuta-44-days-of-hell/
https://en.wikipedia.org/wiki/Murder_of_Junko_Furuta

[12] https://www.youtube.com/watch?v=PcnH_TOqi3I

[13] https://www.youtube.com/watch?v=Lc63Rp-UN10

[14] https://www.abolitionist.com/reprogramming/maneaters.html

[15] http://www.simonknutsson.com/the-seriousness-of-suffering-supplement

[16] http://www.simonknutsson.com/the-seriousness-of-suffering-supplement

[17] Dax describes the accident himself in the following video:
https://www.youtube.com/watch?v=M3ZnFJGmoq8

[18] Brülde, 2010, p. 576; Benatar, 2006, p. 63.

[19] And if one thinks such extreme suffering can be outweighed, an important question to ask oneself is: what exactly does it mean to say that it can be outweighed? More specifically, according to whom, and measured by what criteria, can such suffering be outweighed? The only promising option open, it seems, is to choose to prioritize the assessments of beings who say that their happiness, or other good things about their lives, can outweigh the existence of such extreme suffering — i.e. to actively prioritize the evaluations of such notional beings over the evaluations of those enduring, by their own accounts, unoutweighable suffering. What I would consider a profoundly unsympathetic choice.

[20] This once again hints at the point made earlier that we in practice are unable to specify in precise terms 1) what we value in the world, and 2) how to act in accordance with any set of plausible values. Rough, qualified approximations are all we can hope for.

[21] http://reducing-suffering.org/the-horror-of-suffering/
http://reducing-suffering.org/on-the-seriousness-of-suffering/
http://www.simonknutsson.com/the-seriousness-of-suffering-supplement
https://www.youtube.com/watch?v=RyA_eF7W02s&

[22] Or one could equivalently say that it betrays the core virtue of being consistent, as it amounts to treating/valuing similar beings differently.

[23] I make a more elaborate case for this conclusion in my book You Are Them.

[24] One might object that it makes little sense to call a failure to appreciate the value of something a bias, as this is a moral rather than an empirical disagreement, to which I would respond: 1) the two are not as easy to separate as is commonly supposed (cf. Putnam, 2002), 2) one clearly can be biased against fairly considering an argument for a moral position — for instance, we can imagine an example where someone encounters a moral position and then, due to being brought up in a culture that dislikes that moral position, fails to properly engage with and understand this position, although this person would in fact agree with it upon reflection; such a failure can fairly be said to be due to bias — and 3) at any rate, the question concerning what it is like to experience certain states of consciousness is a factual matter, including how horrible they are deemed from the inside, and this is something we can be factually wrong about as outside observers.

[25] Not that sparing our own mental health is not a good reason to refrain from doing something potentially traumatizing, but the question is just whether it is really worth letting our view of our personal and collective purpose in life be handicapped and biased, or at the very least less well-informed than it otherwise could be, for that reason, and whether such self-imposed ignorance can really be justified, both to ourselves and to the world at large.

[26] Again, Robin Hanson and Kevin Simler’s book The Elephant in the Brain makes an excellent case for this claim.

[27] And hence being animated by this principle is perfectly compatible with living a happy, joyous, and meaningful life. Indeed, I would argue that it provides the deepest meaning one could possibly find.

[28] I suspect both the content and phrasing of the last couple of sentences are inspired by the following quote I saw written on Facebook by Robert Daoust: “What is at the center of the universe of ethics, I suggest, is not the sun of the good and its play of bad shadows, but the black hole of suffering.”

Suffering-Focused Ethics

This essay was first published as a chapter in my book Effective Altruism: How Can We Best Help Others? which is available for free download here.


The view of values I would favor falls within a broader class of ethical views one may call suffering-focused ethics, which encompasses all views that give special priority to the alleviation and prevention of suffering. I will review some general arguments and considerations in favor of such views in this chapter, arguments that individually and collectively can support granting moral priority to suffering.[1] This general case will then be followed by a more specific case for a particular suffering-focused view — what I consider to be the strongest and most convincing one — in the next chapter.

It should be noted, however, that not all effective altruists agree with this view of values. Many appear to view the creation of happiness — for example, via the creation of new happy beings, or by raising the level of happiness of the already happy — as having the same importance as the reduction of “equal” suffering. I used to hold this view as well. Yet I have changed my mind in light of considerations of the kind presented below.[2]

The Asymmetries

We have already briefly visited one asymmetry that seems to exist, at least in the eyes of many people, between suffering and happiness, namely the so-called Asymmetry in population ethics, which roughly says that we have an obligation to avoid bringing miserable lives into the world, but no obligation to bring about happy lives. To the extent we agree with this view, it appears that we agree that we should assign greater moral value and priority to the alleviation and prevention of suffering over the creation of happiness, at least in the context of the creation of new lives.

A similar view has been expressed by philosopher Jan Narveson, who has argued that there is value in making people happy, but not in making happy people.[3] Another philosopher who holds a similar view is Christoph Fehige, who defends a position he calls antifrustrationism, according to which we have obligations to make preferrers satisfied, but no obligations to make satisfied preferrers.[4] Peter Singer, too, has expressed a similar view in the past:

The creation of preferences which we then satisfy gains us nothing. We can think of the creation of the unsatisfied preferences as putting a debit in the moral ledger which satisfying them merely cancels out. […] Preference Utilitarians have grounds for seeking to satisfy their wishes, but they cannot say that the universe would have been a worse place if we had never come into existence at all.[5]

In terms of how we choose to prioritize our resources, there does indeed, to many of us at least, seem to be something highly unpalatable, not to say immoral and frivolous, about focusing on creating happiness de novo rather than on alleviating and preventing suffering first and foremost. As philosopher Adriano Mannino has expressed it:

What’s beyond my comprehension is why turning rocks into happiness elsewhere should matter at all. That strikes me as okay, but still utterly useless and therefore immoral if it comes at the opportunity cost of not preventing suffering. The non-creation of happiness is not problematic, for it never results in a problem for anyone (i.e. any consciousness-moment), and so there’s never a problem you can point to in the world; the non-prevention of suffering, on the other hand, results in a problem.[6]

And in the case of extreme suffering, one can argue that the word “problem” is a strong contender for most understated euphemism in history. Mannino’s view can be said to derive from what is arguably an intuitive and common-sense “understanding of ethics as being about solving the world’s problems: We confront spacetime, see wherever there is or will be a problem, i.e. a struggling being, and we solve it.”[7]

Simon Knutsson has expressed a similar sentiment to the opportunity cost consideration expressed by Mannino above, and highlighted the crucial juxtaposition we must consider:

When spending resources on increasing the number of beings instead of preventing extreme suffering, one is essentially saying to the victims: “I could have helped you, but I didn’t, because I think it’s more important that individuals are brought into existence. Sorry.”[8]

Philosopher David Benatar defends an asymmetry much stronger than the aforementioned Asymmetry in population ethics, as he argues that we not only should avoid bringing (overtly) miserable lives into existence, but that we ideally should avoid bringing any lives into existence at all, since coming into existence is always a harm on Benatar’s account. Explained simply, Benatar’s main argument rests on the premise that the absence of suffering is good, while the absence of happiness is not bad, and hence the state of non-existence is good (“good” + “not bad” = “good”), whereas the presence of suffering and happiness is bad and good respectively, and hence not a pure good, which renders it worse than the state of non-existence according to Benatar.[9]

Beyond this asymmetry, Benatar further argues that there is an asymmetry in how much suffering and happiness our lives contain — e.g. that the worst forms of suffering are far worse than the best pleasures are good; that we almost always experience some subtle unpleasantness, dissatisfaction, and preference frustration; and that there are such negative things as chronic pain, impairment, and trauma, yet no corresponding positive things, like chronic pleasure.[10] And the reason that we fail to acknowledge this, Benatar argues, is that we have various, well-documented psychological biases which cause us to evaluate our lives in overly optimistic terms.[11]

It seems worth expanding a bit on this more quantitative asymmetry between the respective badness and goodness of suffering and happiness. For even if one rejects the notion that there is a qualitative difference between the moral status of creating happiness and preventing suffering — e.g. that a failure to prevent suffering is problematic, while a failure to create happiness is not — it seems difficult to deny Benatar’s claim that the worst forms of suffering are far worse than the best of pleasures are good. Imagine, for example, that we were offered ten years of the greatest happiness possible on the condition that we must undergo some amount of hellish torture in order to get it. How much torture would we be willing to endure in order to get this prize? Many of us would reject the offer completely and prefer a non-existent, entirely non-problematic state over any mixture of hellish torture and heavenly happiness.

Others, however, will be willing to accept the offer and make a sacrifice. The question is then: how big a sacrifice could one reasonably be willing to make? Seconds of hellish torture? A full hour? Perhaps even an entire day? Some might go that far, yet it seems that no matter how much one values happiness, no one could reasonably push the scale to anywhere near 50/50. That is, no one could reasonably choose to endure ten years of hellish torture in order to attain ten years of sublime happiness.

Those who would be willing to endure a full day of torture in order to enjoy ten years of paradise are, I think, among those who are willing to push it the furthest in order to attain such happiness, and yet notice how far they are from 50/50. We are not talking 80/20, 90/10, or even 99/1 here. No, one day of hell for 3650 days of paradise roughly corresponds to a “days of happiness to days of suffering” ratio of 99.97 to 0.03. And that is for those who are willing to push it.[12]
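To make the arithmetic behind this ratio explicit (a minimal check, counting ten years as 3650 days and taking the full 3651 days as the total):

\[
\frac{3650}{3651} \approx 99.97\%, \qquad \frac{1}{3651} \approx 0.03\%
\]

In other words, at the point of indifference, a single day of such suffering is being weighted roughly as heavily as 3650 days of the greatest happiness.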

So not only is there no symmetry here; the moral weight of the worst of suffering appears to be orders of magnitude greater than that of the greatest happiness, which implies that the prevention of suffering appears to be the main name of the ethical game on any plausible moral calculus. Even on a view according to which we are willing to really push it and endure what is, arguably by most accounts, an unreasonable amount of suffering in order to gain happiness, the vast majority of moral weight is still found in preventing suffering, at least when speaking in terms of durations of the best and worst potential states. And one can reasonably argue that this is also true of the actual state of the world, as Arthur Schopenhauer did when comparing “the feelings of an animal engaged in eating another with those of the animal being eaten.”[13]

A more general and qualitative asymmetry between the moral status of happiness and suffering has been defended by philosopher Karl Popper:

I believe that there is, from the ethical point of view, no symmetry between suffering and happiness, or between pain and pleasure. […] In my opinion human suffering makes a direct moral appeal, namely, the appeal for help, while there is no similar call to increase the happiness of a man who is doing well anyway. A further criticism of the Utilitarian formula “Maximize pleasure” is that it assumes a continuous pleasure-pain scale which allows us to treat degrees of pain as negative degrees of pleasure. But, from the moral point of view, pain cannot be outweighed by pleasure, and especially not one man’s pain by another man’s pleasure. Instead of the greatest happiness for the greatest number, one should demand, more modestly, the least amount of avoidable suffering for all; […][14]

David Pearce, who identifies as a negative utilitarian, describes his view in a similar way:

Ethical negative-utilitarianism is a value-system which challenges the moral symmetry of pleasure and pain. It doesn’t question the value of enhancing the happiness of the already happy. Yet it attaches value in a distinctively moral sense of the term only to actions which tend to minimise or eliminate suffering. This is what matters above all else.[15]

Neither Popper nor Pearce appear to deny that there is value in happiness. Instead, what they deny is that the value there may be in creating happiness is comparable to the value of reducing suffering. In Pearce’s words, increasing the happiness of the already happy does not carry value in the distinctively moral sense that reducing suffering does; in Popper’s words, suffering makes a direct moral appeal for help, while the state of those who are doing well does not.

Expressed in other words, one may say that the difference is that suffering, by its very nature, carries urgency, whereas the creation of happiness does not, at least not in a similar way. (Popper put it similarly: “[…] the promotion of happiness is in any case much less urgent than the rendering of help to those who suffer […]”[16]) We would rightly rush to send an ambulance to help someone who is enduring extreme suffering, yet not to boost the happiness of someone who is already happy, no matter how much we may be able to boost it. Similarly, if we had pills that could raise the happiness of those who are already doing well to the greatest heights possible, there would be no urgency in distributing these pills (to those already doing well), whereas if a single person fell to the ground in unbearable agony right before us, there would indeed be an urgency to help. Increasing the happiness of the already happy is, unlike the alleviation of extreme suffering, not an emergency.

A similar consideration about David Pearce’s abolitionist project described in the previous chapter — the abolition of suffering throughout the living world via biotechnology — appears to lend credence to this asymmetrical view of the moral status of the creation of happiness versus the prevention of suffering. For imagine we had completed the abolitionist project and made suffering non-existent for good. The question is then whether it can reasonably be maintained that our moral obligations would be exactly the same after this completion. Would we have an equally strong duty or obligation to move sentience to new heights after we had abolished suffering? Or would we instead have discharged our prime moral obligation, and thus have reason to lower our shoulders and breathe a deep and justified sigh of moral relief? I think the latter.

Another reason in favor of an asymmetrical view is that, echoing Benatar somewhat, it seems that the absence of extreme happiness cannot be considered bad in remotely the same way that the absence of extreme suffering can be considered good. For example, if a person is in a state of dreamless sleep rather than having the experience of a lifetime, this cannot reasonably be characterized as a disaster or a catastrophe; the difference between these two states does not seem to carry great moral weight. Yet when it comes to the difference between sleeping and being tortured, we are indeed talking about a difference that does carry immense moral weight, and the realization of the worse rather than the better outcome would indeed amount to a catastrophe.

The final asymmetry I shall review in this section is one that is found more on a meta-level, namely in the distribution of views concerning the moral value of the creation of happiness and the prevention of suffering. For in our broader human conversation about what has value, very few seem to have seriously disputed the disvalue of suffering and the importance of preventing it. Indeed, to the extent that we can find a value that almost everyone agrees on, it is this: suffering matters. In contrast, there are many who have disputed the value and importance of creating more happiness, including many of the philosophers mentioned in this section; many thinkers in Eastern philosophy for whom moksha, liberation from suffering, is the highest good; as well as many thinkers in Western philosophy, with roots all the way back to Epicurus, for whom ataraxia, an untroubled state free from distress, was the highest aim. Further elaboration on a version of this view of happiness follows in the next section.

This asymmetry in consensus about the value and moral status of creating happiness versus preventing suffering also counts as a weak reason for giving greater priority to the latter.

Tranquilism: Happiness as the Absence of Suffering

Author Lukas Gloor defends a view he calls tranquilism, which — following Epicurus and his notion of ataraxia, as well as the goal of moksha proposed as the highest good by many Eastern philosophers[17] — holds that the value of happiness lies in its absence of suffering.[18] Thus, according to tranquilism, states of euphoric bliss are not of greater value than, say, states of peaceful contentment free of any negative components. Or, for that matter, than a similarly undisturbed state of dreamless sleep or insentience. In other words, states of happiness are of equal value to nothing, provided that they are shorn of suffering.

In this way, tranquilism is well in line with the asymmetry in moral status between happiness and suffering defended by Karl Popper and David Pearce: that increasing the happiness of the already happy does not have the moral value that reducing suffering does. And one may even argue that it explains this asymmetry: if the value of happiness lies in its absence of suffering, then it follows that creating happiness (for those not suffering) cannot take precedence over reducing suffering. Moving someone from zero to (another kind of) zero can never constitute a greater move on the value scale than moving someone from a negative state to a (however marginally) less negative one.[19]

To many of us, this is a highly counter-intuitive view, at least at first sight. After all, do we not seek pleasure almost all the time, often at the seemingly justified cost of suffering? Yet one can frame this seeking in another way that is consistent with tranquilism, by viewing our search for pleasure as really being an attempt to escape suffering and dissatisfaction. On this framing, what appears to be going from neutral to positive is really going from a state of negativity, however subtle, to a state that is relieved, at least to some extent, from this negativity. So, on this view, when we visit a friend we have desired to see for some time, we do not go from a neutral to a positive state, but instead just remove our craving for their company and the dissatisfaction caused by their absence. So too with the pleasure of physical exercise: it is liberating in that it gives us temporary freedom from the bad feelings and moods that follow from not exercising. Or even the pleasure of falling in love, which provides refreshing relief from the boredom and desire we are otherwise plagued by.

Psychologist William James seemed to agree with this view of happiness:

Happiness, I have lately discovered, is no positive feeling, but a negative condition of freedom from a number of restrictive sensations of which our organism usually seems the seat. When they are wiped out, the clearness and cleanness of the contrast is happiness. This is why anaesthetics make us so happy.[20]

As did Arthur Schopenhauer:

[…] evil is precisely that which is positive,[21] that which makes itself palpable, and good, on the other hand, i.e. all happiness and gratification, is that which is negative, the mere abolition of a desire and extinction of a pain.[22]

And here is how Lukas Gloor explains it:

In the context of everyday life, there are almost always things that ever so slightly bother us. Uncomfortable pressure in the shoes, thirst, hunger, headaches, boredom, itches, non-effortless work, worries, longing for better times. When our brain is flooded with pleasure, we temporarily become unaware of all the negative ingredients of our stream of consciousness, and they thus cease to exist. Pleasure is the typical way in which our minds experience temporary freedom from suffering, which may contribute to the view that happiness is the symmetrical counterpart to suffering, and that pleasure, at the expense of all other possible states, is intrinsically important and worth bringing about.[23]

One may object that the implication that mere contentment has the same value as the greatest euphoric bliss seems implausible, and thus counts against tranquilism. Yet whether this is indeed implausible depends on the eyes that look. For consider it this way: if someone experiences “mere contentment” without any negative cravings[24] whatsoever, and thus does not find the experience insufficient in any way, who are we to say that they are wrong about their state, and that they should actually want something better? Tranquilism denies that such a “merely content” person is wrong to claim that their state is perfect. Indeed, tranquilism is here in perfect agreement with this person, and hence this implication of tranquilism is at least not implausible from this person’s perspective, which one may argue is the most relevant perspective to consider in this context of discussing whether said person is in a suboptimal state. The perspective from which this implication appears implausible, a proponent of tranquilism may argue, is only that of someone who is not in perfect contentment — one who desires euphoric bliss, for oneself and others, and who in some sense feels a lack, i.e. a negative craving, about its absence.

Another apparent, and perhaps intuitive, reason to reject tranquilism is that it appears to imply that happiness is not really that wonderful — that the best experience one has ever had was not really that great. Yet it is important to make clear that tranquilism implies no such thing. On the contrary, according to tranquilism, experiences of happiness without any suffering are indeed (together with other experiential states that are absent of suffering) experiences of the most wonderful kind, and they are by no means less wonderful than they are felt. What tranquilism does say, however, is that the value of such states is due to their absence of suffering, and that the creation of such happy states cannot justify the creation of suffering.

Yet even so, even while allowing us to maintain the view that happiness is wonderful, tranquilism is still, at least for many of us, really not a nice way to think about the world, and about the nature of value in particular, as we would probably all like to think that there exists something of truly positive value in the realm of conscious experience beyond merely the absence of negative experiences or cravings. Yet this want of ours — this negative craving, one could say — should only make us that much more skeptical of any reluctance we may have to give tranquilism a fair hearing. And even if, upon doing so, one does not find tranquilism an entirely convincing or exhaustive account of the respective (dis)value of happiness and suffering, it seems difficult to deny that there is a significant grain of truth to it.

The implications of tranquilism are clear: creating more happiness (for the currently non-existent or otherwise not suffering) has neutral value, while there is value in the alleviation and prevention of suffering, a value that, as noted above, nobody seriously questions.

Creating Happiness at the Cost of Suffering Is Wrong

In this section I shall not argue for a novel, separate point, but instead invoke some concrete examples that help make the case for a particular claim that follows directly from many of the views we have seen above, the claim being that it is wrong to create happiness at the cost of suffering.

One obvious example of such gratuitous suffering would be that of torturing a single person for the enjoyment of a large crowd.[25] If we think happiness can always outweigh suffering, we seem forced to say that, yes, provided that the resulting enjoyment of the crowd is great enough, and if other things are equal, then such happiness can indeed outweigh and justify torturing a single person. Yet that seems misguided. A similar example to consider is that of a gang rape: if we think happiness can always outweigh suffering, then such a rape can in principle be justified, provided that the pleasure of the rapists is sufficiently great. Yet most people would find this proposition utterly wrong.

One may object that these thought experiments bring other issues into play than merely that of happiness versus suffering, which is a fair point. Yet we can in a sense control for these by reversing the purpose of these acts so that they are about reducing suffering rather than increasing happiness for a given group of individuals. So rather than the torture of a single person being done for the enjoyment of a crowd, it is now done in order to prevent a crowd from being tortured; rather than the rape being done for the pleasure of, say, five people, it is done to prevent five people from being raped. While we may still find it most unpalatable to give the go signal for such preventive actions, it nonetheless seems clear that torturing a single person in order to prevent the torture of many people would be the right thing to do, and that having less rape occur is better than having more.

A similar example, which however does not involve any extreme suffering, is the situation described in Ursula K. Le Guin’s short story The Ones Who Walk Away from Omelas. The story is about an almost utopian city, Omelas, in which everyone lives an extraordinarily happy and meaningful life, except for a single child who is locked in a basement room, fated to live a life of squalor:

The child used to scream for help at night, and cry a good deal, but now it only makes a kind of whining, “eh-haa, eh-haa,” and it speaks less and less often. It is so thin there are no calves to its legs; its belly protrudes; it lives on a half-bowl of corn meal and grease a day. It is naked. Its buttocks and thighs are a mass of festered sores, as it sits in its own excrement continually.[26]

The story ends by describing some people in the city who appear to find the situation unacceptable and who choose not to take part in it any more — the ones who walk away from Omelas.

The relevant question for us to consider here is whether we would walk away from Omelas, or perhaps rather whether we would choose to bring a condition like Omelas into existence in the first place. Can the happy and meaningful lives of the other people in Omelas justify the existence of this single, miserable child? Different people have different intuitions about it; some will say that it depends on how many people live in Omelas. Yet to many of us, the answer is “no” — the creation of happiness is comparatively frivolous and unnecessary, and it cannot justify the creation of such a victim, of such misery and suffering.[27] A sentiment to the same effect was expressed in the novel The Plague, by Albert Camus: “For who would dare to assert that eternal happiness can compensate for a single moment’s human suffering?”[28]

A “no” to the creation of Omelas would also be supported by the Asymmetry in population ethics, according to which it has neutral value to add a happy life to Omelas, while adding this one miserable child has negative value, and hence the net value of the creation of Omelas is negative.

The examples visited above all argue for the claim that it is wrong to impose certain forms of suffering on someone for the sake of creating happiness, where the forms of suffering have gradually been decreasing in severity. And one may argue that the plausibility of the claims these respective examples have been used to support has been decreasing gradually too, and for this very reason: the less extreme the suffering, the less clear it is that happiness could never outweigh it. And yet even in the case of the imposition of the mildest of suffering — a pinprick, say — for the sake of the creation of happiness, it is far from clear, upon closer examination, that this should be deemed permissible, much less an ethical obligation. Echoing the passage by Camus above, would it really be right to impose a pinprick on someone in order to create pleasure for ourselves or others, or indeed for the very person we do it on, provided that whoever would gain the happiness is doing perfectly fine already, and thus that the resulting happiness would not in fact amount to a reduction of suffering? Looking only at, or rather from, the perspective of that moment’s suffering itself, the act would indeed be bad, and the question is then what could justify such badness, given that the alternative was an entirely trouble-free state. If one holds that being ethical means to promote happiness over suffering, not to create happiness at the cost of suffering, the answer is “nothing”.

Two Objections

Finally, it is worth briefly addressing two common objections against suffering-focused ethics, the first one being that not many people have held such a view, which makes it appear implausible. The first thing to say in response to this claim is that, even if it were true, the fact that a position is not widely held is not a strong reason to consider it implausible, especially if one thinks one has strong, object-level reasons to consider it plausible, and, furthermore, if one believes there are human biases[29] that can readily explain its (purportedly) widespread rejection. The second thing to say is that the claim is simply not true, as there are many thinkers, historical as well as contemporary ones, who have defended views similar to those outlined here (see the following note for examples).[30]

Another objection is that suffering-focused views have unappealing consequences, including that, according to such views, it would be right to kill everyone (or “destroy the world”). One reply to this claim is that at least some suffering-focused views do not have this implication. For example, in his book The Battle for Compassion: Ethics in an Apathetic Universe, Jonathan Leighton argues for a pragmatic position he calls “negative utilitarianism plus”, according to which we should aim to do our best to reduce preventable suffering, yet where we can still “categorically refuse to intentionally destroy the planet and eliminate ourselves and everything we care about in the process […]”.[31]

Another reply is that, as Simon Knutsson has argued at greater length,[32] other ethical views that have a consequentialist component seem about as vulnerable to similar objections. For instance, if maximizing the sum of happiness minus suffering were our core objective, it could be said that we ought to kill people in order to replace them with happier beings. One may then object, quite reasonably, that this is unlikely to be optimal in practice, yet one can argue — as reasonably, I believe — that the same holds true of trying to destroy the world in order to reduce suffering: it does not seem the best we can do in practice. I shall say a bit more about this last point in the penultimate chapter on future directions.

 

Having visited this general case for suffering-focused ethics, we shall now turn to what is arguably the strongest case for such a view — the appeal to sympathy for intense suffering.


 

(For the full bibliography, see the end of my book.)

[1] This chapter is inspired by other resources that also advocate for suffering-focused ethics, such as the following:
https://foundational-research.org/the-case-for-suffering-focused-ethics/
https://www.utilitarianism.com/nu/nufaq.html
https://www.youtube.com/watch?v=4OWl5nTctYI
https://www.hedweb.com/negutil.htm
Pearce, 2017, part II
A more elaborate case for focusing on suffering can be found in Jamie Mayerfeld’s Suffering and Moral Responsibility.

[2] Not least have I changed my mind about whether a term like “equal suffering” is at all meaningful in general.

[3] Narveson, 1973.

[4] Fehige, 1998.

[5] Singer, 1980b. However, Singer goes on to say about this view of coming into existence that it “perhaps, is a reason to combine [preference and hedonistic utilitarianism]”. Furthermore, Singer seems to have moved much closer toward, and to now defend, hedonistic utilitarianism, whereas he was arguably primarily a preference utilitarian when he made the quoted statement.

[6] Quoted from a Facebook conversation.

[7] https://foundational-research.org/the-case-for-suffering-focused-ethics/

[8] http://www.simonknutsson.com/the-one-paragraph-case-for-suffering-focused-ethics

[9] Benatar, 2006, chapter 2.

[10] Benatar, 2006, chapter 3.

[11] Benatar, 2006, chapter 3.

[12] One may object that our choosing such a skewed trade-off is merely a reflection of our contingent biology, and that it may be possible to create happiness so great that most people would consider a single day of it worth ten years of the worst kinds of suffering our biology can support. To this I would respond that such a possibility remains hypothetical, indeed speculative, and that we should base our views mainly on the actualities we know rather than such hypothetical (and wishful) possibilities. After all, it may also be, indeed it seems about equally likely, that suffering can be far worse than the worst suffering our contingent biology can support, and, furthermore, it may be that the pattern familiar from our contingent biology only repeats itself in this realm of theoretical maxima; i.e. that such maximal suffering can only be deemed far more disvaluable than the greatest bliss possible can be deemed valuable.

[13] Schopenhauer, 1851/1970, p. 42.

[14] Popper, 1945/2011, note 2 to chapter 9.

[15] https://www.hedweb.com/negutil.htm

[16] Popper, 1945/2011, note 6 to chapter 5.

[17] Some version of the concept of moksha is central to most of the well-known Eastern traditions, such as Buddhism (nirvana), Hinduism, Jainism, and Sikhism (mukti).

[18] https://foundational-research.org/tranquilism/
Thus, the view is not that happiness is literally the absence of suffering, which is, of course, patently false — insentient rocks are obviously not happy — but rather that the value of happiness lies in its absence of suffering.

[19] It should be noted, however, that one need not hold this tranquilist view of value in order to agree with Popper’s and Pearce’s position. For example, one can also view happiness as being strictly more valuable than nothing, while still maintaining that the value of raising the happiness of the already happy is always less than the value of reducing suffering. An intuitive way of formalizing this view would be by representing the value of states of suffering with negative real numbers, while representing the value of states of pure happiness with hyperreal numbers greater than 0, yet smaller than any positive real number, allowing us to assign some states of pure happiness greater value than others. On tranquilism, by contrast, all states of (pure) happiness would be assigned exactly the value 0.
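To spell out this formalization a little further (a purely illustrative sketch; the value function v, the infinitesimal ε, and the notation *ℝ for the hyperreal numbers are introduced here only for this purpose):

\[
v(\text{state of suffering}) \in \mathbb{R}_{<0}, \qquad
v(\text{state of pure happiness}) \in \{\, \varepsilon \in {}^{*}\mathbb{R} : 0 < \varepsilon < r \ \text{for every real } r > 0 \,\}
\]

On such a representation, any finite sum of these infinitesimal values is still smaller than every positive real number, so no amount of raising the happiness of the already happy can outweigh reducing suffering by even the smallest real amount, while some states of pure happiness can nonetheless be ranked above others, in contrast to the tranquilist assignment of exactly 0 to all of them.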

[20] James, 1901.

[21] The terms “positive” and “negative” here respectively refer to the presence and absence of something.

[22] Schopenhauer, 1851/1970, p. 41.

[23] https://foundational-research.org/tranquilism/

[24] I happen to disagree with Gloor’s particular formulation of tranquilism when he writes: “According to tranquilism, a state of consciousness is negative or disvaluable if and only if it contains a craving for change.” For it seems to me that even intense cravings for change (for a different sex position, say) can feel perfectly fine and non-negative; that euphoric desire, say, is not an oxymoron. The term “negative cravings” avoids this complication.

[25] There are various versions of this example. A common one is whether it can be right to make gladiators fight for the enjoyment of a full colosseum, which is often raised as a problematic question for (certain versions of) utilitarianism.

[26] Guin, 1973/1992.

[27] And even though many will probably insist that the child’s suffering is a worthy sacrifice, the fact that it only takes a single life of misery to bring the value of a whole paradisiacal city into serious question, as it seems to do for most people, is yet another strong hint that there is an asymmetry between the (dis)value of happiness and suffering.

[28] Camus, 1947/1991, p. 224.

[29] Cf. Benatar, 2006, chapter 3.

[30] See section 2.2.14 here https://www.utilitarianism.com/nu/nufaq.html as well as http://www.simonknutsson.com/thoughts-on-ords-why-im-not-a-negative-utilitarian

[31] Leighton, 2011, p. 96.

[32] http://www.simonknutsson.com/the-world-destruction-argument/

Why I Used to Consider the Absence of Sentience Tragic

Whether one considers the absence of sentience bad or neutral — or indeed as good as can be — can matter a lot for one’s ethical and altruistic priorities. Specifically, it can have significant implications for whether one should push for smaller or larger future populations.

I used to be a classical utilitarian. Which is to say, I used to agree with the statement “we ought to maximize the net amount of happiness minus suffering in the world”. And given this view, I found it a direct, yet counter-intuitive implication that the absence of sentience is tragic, and something we ought to minimize by bringing about a maximally large, maximally happy population. My aim in this essay is to briefly present what I consider the main reason why I used to believe this, and also to explain why I no longer hold this view. I am not claiming the reasons I had for believing this are shared by other classical utilitarians, yet I suspect they could be, at least by some.

The Reason: Striving for Consistency

My view that the absence of sentience is tragic and something we ought to prevent mostly derived, I believe, from a wish to be consistent. Given the ostensibly reasonable assumption that death is bad, it would seem to follow, I reasoned, that since death merely amounts to a discontinuation of life — or, seen in a larger perspective, a reduction of the net amount of sentience — the reduction of sentience caused by not giving birth to a new (happy) life should be considered just as bad as the end of a (happy) life. This was counter-intuitive, of course, yet I did not, and still do not, consider immediate intuitions to be the highest arbiters of moral wisdom, and so it did not seem that weird to accept this conclusion. The alternative, if I were to be consistent, would be to bring my view of death in line with my intuition that the absence of sentience is not bad. Yet this was too implausible, since death surely is bad.

This, I believe, was the reasoning behind my considering it a moral obligation to produce a large, happy population. To not do it would, in some ways, be the moral equivalent of committing genocide. My view is quite different now, however.

My Current View of My Past View

I now view this past reasoning of mine as akin to a deceptive trick, like a math riddle where one has to find where the error was made in a series of seemingly valid deductions. You accept that death is tragic. Death means less sentient life than continued life, other things being equal. But a failure to bring a new individual into the world also means less sentient life, other things being equal. So why would you not consider a failure to bring an individual into the world tragic as well?

My current response to this line of reasoning is that death indeed is bad, yet that it is not intrinsically so. What is bad about death, I would argue, is the suffering it causes; not the discontinuation of sentience per se (after all, a discontinuation of sentience occurs every night we go to sleep, which we rarely consider bad, much less tragic). This view is perfectly consistent with the view that it is not tragic to fail to create a new individual.

As I have argued elsewhere, it is somewhat to be expected that we humans consider the death of a close relative or group member to be tragic and highly worth avoiding, given that such a death would tend, evolutionarily speaking, to have been costly to our own biological success in the past. In other words, our view that death is tragic may in large part stem from a penalizing mechanism instilled in us by evolution to prevent us from losing fellow assets who served our hidden biological imperative — assets who had invested a lot into us and whom we had invested a lot into in return. And I believe that my considering the absence of sentience tragic was, crudely speaking, a matter of extending this penalizing mechanism so that it pertained to all insentient parts of the universe. An extension I now consider misguided. I now see nothing tragic whatsoever about the fact that there is no sentient life on Mars.

Other Reasons

There may, of course, be other reasons why a classical utilitarian, including my past self, would consider the absence of sentience tragic. For instance, it seems reasonable to suspect that we, or at least many of us, have an inbuilt drive to maximize the number of our own descendants, or to maximize the future success of our own tribe (the latter goal would probably have aligned pretty well with the former throughout our evolutionary history). It is not clear what would count as “our own tribe” in modern times, yet it seems that many people, including many classical utilitarians, now view humanity as their notional tribe.

A way to control for such a hidden drive, then, would be to ask whether we would accept it if the universe were filled up with happy beings who do not belong to our own tribe. For example, would we accept it if our future light cone were filled up by happy aliens who, in their quest to maximize net happiness, replaced human civilization with happier beings (i.e. a utilitronium shockwave of sorts)? An impartial classical utilitarian would happily accept this. The question is whether a human classical utilitarian would, too.

Darwinian Intuitions and the Moral Status of Death

“Nothing in biology makes sense except in the light of evolution”, wrote evolutionary biologist Theodosius Dobzhansky. And given that our moral psychology is, at least in large part, the product of our biology, one can reasonably make a similar claim about our moral intuitions: that we should seek to understand these intuitions in light of the evolutionary history of our species. This also seems important for our thinking about normative ethics, since such an understanding is likely to help inform our ethical judgments by helping us better understand the origin of our intuitive moral judgments, and how they might be biased in various ways.

An Example: “Julie and Mark”

A commonly cited example that seems to demonstrate how evolution has firmly instilled certain moral intuitions into us is the following thought experiment, first appearing in a paper by Jonathan Haidt:

Julie and Mark are brother and sister. They are traveling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide not to do it again. They keep that night as a special secret, which makes them feel even closer to each other. What do you think about that? Was it OK for them to make love?

According to Haidt: “Most people who hear the above story immediately say that it was wrong for the siblings to make love […]”. Yet most people also have a hard time explaining this wrongness, given that the risks of inbreeding are rendered moot in the thought experiment. But they still insist it is wrong. An obvious interpretation to make, then, is that evolution has hammered the lesson “sex between close relatives is wrong” into the core of our moral judgments. And given the maladaptive outcomes of human inbreeding, such an intuition would indeed make a lot of evolutionary sense. Indeed, in that context, given the high risk of harm, it even makes ethical sense. Yet in a modern context in which birth control has been invented and is employed, the intuition suddenly seems on less firm ground, at least ethically.

(It should be noted that the deeper point of Haidt’s paper cited above is to argue that “[…] moral reasoning is usually a post hoc construction, generated after a judgment has been reached.” And while it seems difficult to deny that there is a significant grain of truth to this, Haidt’s thesis has also been met with criticism.)

Moral Intuitions About Death: Biologically Contingent

With this idea in the back of our heads — that evolution has most likely shaped our moral intuitions significantly, and that we should perhaps not be that surprised if these intuitions are often difficult to defend within the realm of normative ethics — let us now proceed to look at the concrete issue of death. Yet before we look at the notional “human view of death”, it is perhaps worth first surveying some other species whose members are unlikely to view death in remotely the same way as we do, to see just how biologically contingent our view of death probably is.

For example, individuals belonging to species that practice sexual cannibalism — i.e. where the female eats the male prior to, during, or after copulation — seem most unlikely to view dying in this manner in remotely the same way as we humans would. Indeed, they might even find pleasure in it, both male and female (although in many cases, the male probably does not, especially when he is eaten prior to copulation, since it is not in his reproductive interest, which likely renders it yet another instance of the horrors of nature).

The same can likely be said of species that practice so-called matriphagy, i.e. where the offspring eat their own mother, sometimes while she is still alive. This behavior is also, at least in many cases, evolutionarily adaptive, and hence seems unlikely to be viewed as harmful by the mother (or at least the analogue of “viewed as harmful” found in the minds of these creatures). There may, of course, be many exceptions — cases in which the mother does indeed find herself harmed by, and disapproving of, the act. Yet it nonetheless seems clear that the beings who have evolved to practice this behavior do not view such a death in remotely the same way as a human mother would if her children suddenly started eating her alive.

The final example I wish to consider here is the practice of so-called filial cannibalism: when parents eat their own offspring. This practice is much more common, in terms of the number of species that practice it, than the other forms of cannibalism mentioned above, and it is also a clearer case of convergent evolution, as the species that practice it range from insects to mammals, including some cats, primates, birds, amphibians, fish (where it is especially prevalent), snails, and spiders. Again, we should expect individuals belonging to these species to view deaths of this kind very differently from how we humans would view such deaths, which are bizarre by any human standard. This is not to say that the younglings who are eaten do not suffer a great deal in these cases. They likely often do, as being eaten is often not in their reproductive interests (in terms of propagating their genes), although it may be in the case of some species: if it increases the reproductive success of their parents and/or siblings to a sufficient degree.

The deeper point, again, is that beings who belong to these species are unlikely to feel remotely the same way about these deaths as we humans would if such deaths were to occur within the human realm — i.e. if human parents ate their own children. And more generally: that the evolutionary history of a species greatly influences how it feels about deaths of various kinds, as well as how it views death in general.

Naturally, Most Beings Care Little About Most Deaths

It seems plausible to say that, in most animal species, individuals do not care in the least about the death of unrelated individuals within their own species. And we should not be too starry-eyed about humans in this regard either, as it is not clear that we humans, historically, have cared much for people whom we did not view as belonging to our in-group, as the cruelties of history, as well as modern-day tribalism, testify. Only in recent times, it seems, have we in some parts of the world made all of humanity our in-group. Not all sentient beings, sadly, but not merely our own family or ethnic group either, fortunately.

So, both looking at other species, as well as across human history, we see that there appears to be a wide variety of views and intuitions about different kinds of deaths, and how “problematic” or harmful they are. Yet one regard in which there is much less disagreement is when it comes to “the human view of death”. Or more precisely: the natural moral intuitions humans have with respect to the death of someone in the in-group. And I would suspect this particular view to strongly influence — and indeed be the main template for — any human attempt to carve out a well-reasoned and general view of the moral status of death (of any morally relevant being). If this is true, it would seem relevant to zoom in on how we humans naturally view such an in-group death, and why.

The Human View of an In-group Death

So what is the human view of the death of someone belonging to our own group? In short: that it is tragic and something worth avoiding at great cost. And if we put on our evolutionary glasses, it seems easy to make sense of why we would be naturally inclined to think this: for most of our evolutionary history, we humans have lived in groups in which individuals collaborated in ways that benefitted the entire group.

In other words, the ability of any given human individual to survive and reproduce has depended significantly on the efforts of fellow group members, which means that the death of such a fellow group member would be very costly, in biological terms, to other individuals in that group. Something that is worth investing a lot to prevent for these other individuals. Something evolution would not allow them to be indifferent about in the least, much less happy about.

This may help resolve some puzzles. For example, many of us claim to hold a purely sentiocentric ethical view according to which consciousness is the sole currency of moral value: the presence and absence of consciousness, as well as its character, is what matters. Yet most people who claim to hold such a view, including myself, nonetheless tend to view dreamless sleep and death very differently, although both ultimately amount to an absence of conscious experience just the same. If the duration of the conscious experience of someone we care about is reduced by an early death, we consider this tragic. Yet if the duration of their conscious experience is instead reduced by dreamless sleep, we do not, for the most part, consider this tragic at all. On the contrary, we might even be quite pleased about it. We wish sound, deep sleep for our friends and family, and often view such sleep as something that is well-deserved and of great value.

On the view that the presence and absence of consciousness, as well as the quality of this consciousness, is all that matters, this evaluation makes little sense (provided we keep other things equal in our thought experiment: the quality of the conscious life is, when it is present, the same whether its duration is reduced by sleep or early death). Yet from an evolutionary perspective, it makes perfect sense why we would not only evaluate these two things differently, but indeed in completely opposite ways. For if a fellow group member is sleeping, then this is good for the rest of the group, as sleep is generally an investment that improves a person’s contribution to the group. Yet if the person is dead, they will no longer be able to contribute to the group. And if they are family, they will no longer be able to propagate the genes of the family. From a biological perspective, this is very sad.

(The hypothesis sketched out above — that our finding the death of an in-group member sad and worth avoiding at great costs is in large part due to their contribution to the success of our group, and ultimately our genes — would seem to yield a prediction: we should find the death of a young person who is able to contribute a lot to the group significantly more sad and worth avoiding compared to the death of an old person who is not able to contribute. And this is even more true if the person is also a relative, since the young person would have the potential to spread family genes, whereas a sufficiently old person would not.)

Implications

So what follows in light of these considerations about our “natural” view of the death of an in-group member? I would be hesitant to draw strong conclusions from such considerations alone. Yet it seems to me that they do, at the very least, give us reason to be skeptical with respect to our immediate moral intuitions about death (indeed, I would argue that we should be skeptical of our immediate moral intuitions in general). With respect to the great asymmetry in our evaluation of the ethical status of dreamless sleep versus death, two main responses seem available if one is seeking to make a pure sentiocentric position consistent (to take that fairly popular ethical view as an example).

Either one can view conscious life reduced by sleep as being significantly worse, intrinsically, than what we intuitively evaluate it to be (classical utilitarians may choose to adopt this view, which could, in practice, imply that one should work on a cure for sleep, or at least on reducing sleep in a way that keeps quality of life intact). Or one can view conscious life reduced by an early death as being significantly less bad, again intrinsically, than our moral intuitions hold. (One can, of course, also opt for a middle road that maintains that we intuitively both underestimate the intrinsic badness of sleep and overestimate the intrinsic badness of death, and that we should bring our respective evaluations of these two together to meet somewhere in the middle.)

I favor the latter view: that we strongly overestimate the intrinsic badness of death, which is, of course, an extremely unpalatable view to our natural intuitions, including my own. Yet it must also be emphasized that the word “intrinsically” is extremely important here. For I would indeed argue that death is bad, and that we should generally view it as such. But I believe this badness is extrinsic rather than intrinsic: death generally has bad consequences for sentient beings, including that the process of dying itself tends to involve a lot of suffering (I would view this suffering as intrinsically bad, yet not the ending of the life per se). Furthermore, I would argue that we should consider death a bad and harmful thing (as I indeed do) not just because this belief is accurate, but also because not doing so has bad consequences.

An Ethic of Survival

With respect to ethics and death, I recently encountered an interesting perspective in an exchange with Robert Daoust. He suggested, as I understood him, that the fundamental debate in ethics is ultimately one between an ethic of survival on the one hand, and an ethic of concern for sentience on the other. And he further noted that, even when we sincerely believe that we subscribe to the latter, we often in fact support the survivalist ethic, for strong evolutionary reasons: a view according to which, even if life is significantly dominated by suffering, survival should still be our highest goal.

I find this view of Daoust’s interesting, and I certainly recognize strong survivalist intuitions in myself, even as I claim to hold, and publicly defend, values focused primarily on the reduction of suffering. And one can reasonably wonder what the considerations surveyed above, as well as similar considerations about the priorities and motives that evolution has naturally instilled in us, imply for our evaluation of such a (perhaps tacitly shared) survivalist ethic.

I would tentatively suggest that they imply we should view this survivalist ethic with skepticism. We should expect evolution to have given us a strong urge for survival at virtually any cost, and to view survival — if not of our own individual bodies, then at least of our own group and bloodline — as being intrinsically important; arguably even the most important thing of all. Yet accepting continued survival at virtually any cost, including the cost of increasing the net amount of extreme suffering in the world, seems to me a highly implausible ethical view. Beyond that, one can argue that we, for evolutionary reasons, also wildly overestimate the ethical badness of an empty world, and grossly misjudge the value of the absence of sentience. Indeed, on a pure sentiocentric view, such an absence is just as good as deep, dreamless sleep. And what is so bad about that?

The Endeavor of Reason

“[…] some hope a divine leader with prophetic voice
Will rise amid the gazing silent ranks.
An idle thought! There’s none to lead but reason,
To point the morning and the evening ways.”

— Abu al-ʿAlaʾ al-Maʿarri

 

What is reason?

One could perhaps say that answering this question itself falls within the purview of reason. But I would simply define reason as the capacity of our minds to decide or assess what makes the most sense, or seems most reasonable, all things considered.

This seems well in line with other definitions of reason. For instance, Google defines reason as “the power of the mind to think, understand, and form judgements logically”, and Merriam-Webster gives the following definitions:

(1) the power of comprehending, inferring, or thinking[,] especially in orderly rational ways […] (2) proper exercise of the mind […]

These definitions all seem to raise the further question of what terms like “logically”, “orderly rational ways”, and “proper” then mean in this context.

Indeed, one may accuse all these definitions of being circular, as they merely seem to deflect the burden of defining reason by referring to some other notion that ultimately just appears synonymous with, and hence does not reductively define, reason. This would also seem to apply to the definition I gave above: “the ability to decide or assess what seems most reasonable all things considered”. For what does it mean for something to “seem most reasonable”?

Yet the open-endedness of this definition does not, I submit, render it useless or empty by any means, any more than defining science in open-ended terms such as “the attempt to discover what is true about the world” renders this definition useless or empty.

Reason: The Core Value of Universities and the Enlightenment

At the level of ideals, working out what seems most reasonable all things considered is arguably the core goal of both the Enlightenment and of universities. For instance, ideally, universities are not committed to a particular ethical view (say, utilitarianism or deontology), nor to a particular view of what is true about the world (say, string theory or loop quantum gravity, or indeed physicalism in general).

Rather, universities seem to have a more fundamental and less preconceived commitment, at least in the ideal, which is to find out which particular views, if any, seem the most plausible in the first place. This means that all views can be questioned, and that one has to provide reasons if one wants one’s view to be considered plausible.

And it is important to note in this context that “plausible” is a broader term than “probable”, in that the latter pertains only to matters of truth, whereas the former covers this and more. That is, plausibility can also be assigned to views, for instance ethical views, that we do not view as strictly true, yet which we find plausible nonetheless (as in: they seem agreeable or reasonable to us).

For this very reason, it would also be problematic to view the fundamental role of universities as (only) being to uncover what is true, as such a commitment may assume too much in many important and disputed academic discussions, such as those about ethics and epistemology, where the question of whether there indeed are truths in the first place, and in what sense, is among the central questions to be examined by reason. Yet in this case too, the core commitment remains: a commitment to being reasonable. To try to assess and follow what seems most reasonable all things considered.

This is arguably also the core value of the Enlightenment. At least that seems to be what Immanuel Kant argued for in his essay “What Is Enlightenment?”, in which he further argued that free inquiry — i.e. the freedom to publicly exercise our capacity for reason — is the only prerequisite for enlightenment:

This enlightenment requires nothing but freedom—and the most innocent of all that may be called “freedom”: freedom to make public use of one’s reason in all matters.

And the view that reason should be our core commitment and guide of course dates much further back historically than the Enlightenment. Among the earliest and most prominent advocates of this view was Aristotle, who viewed a life lived in accordance with reason as the highest good.

Yet who is to say that what we find most plausible or reasonable is something we will necessarily be able to converge upon? This question itself can be considered an open one for reasoned inquiry to examine and settle. Kant, for instance, believed that we would all be able to agree if we reasoned correctly, and hence that reason is universal and accessible to all of us.

And interestingly, if one wants to make a universally compelling case against this view of Kant’s, it seems that one has to assume at least some degree of the universality that Kant claimed to exist. And hence it seems difficult, not to say impossible, to make such a case, and to deny that at least some aspects of reason are universal.

Being Reasonable: The Only Reasonable Starting Point?

One can even argue that it is impossible to make a case against reason in general. For as Steven Pinker notes:

As soon as we are having this conversation, as long as we are trying to persuade one another of why you should do something or should believe something, you are already committed to reason. We are not engaged in a fist fight, we are not bribing each other to believe something. We are trying to provide reasons. We are trying to persuade, to convince. As long as you are doing that in the first place — you are not hitting someone with a chair, or putting a gun to their head, or bribing them to believe something — you have lost any argument you have against reason; you have already signed on to reason, whether you like it or not. So the fact that we are having this conversation shows that we are committed to reason. That is the starting point.

Indeed, it seems that any effort to make a reasonable case against reason would have to rest on the very thing it attempts to question, namely our capacity to decide or assess what seems most reasonable all things considered. Thus, almost by definition, it seems impossible to identify a reasonable alternative to the endeavor of reason.

Some might argue that reason itself is unjustified, and that we have to have faith in reason, which then supposedly implies that a dedication to reason is ultimately no more reasonable or solid than is faith in anything whatsoever. Yet this is not the case.

For to say that reason needs justification is not to question reason, but rather to presuppose it, since the arena in which we are expected to provide reasons for what we believe is the arena of reason itself. Thus, if we accept that justification for any given belief is required, then we have already signed on to reason, whereby we have also rejected faith — the idea that justification for a given belief is not required. Again, in trying to provide a justification for reason, or, for that matter, in trying to provide a justification for not accepting reason, one is already committed to the endeavor of reason: the endeavor of deciding or assessing what seems most reasonable, i.e. most justified, all things considered.

And what reasonable alternative could there possibly be to this endeavor? Which other endeavor could a reasoning agent reasonably choose to pursue? None, it seems to me. Universally, all reasoning agents seem bound to conclude that they have this imperative of reason: that they ought to do what seems most reasonable all things considered. That reason, in this sense, is the highest calling of such agents. Anything else would be contrary to what their own reasoning tells them, and hence unreasonable — by their own accounts.

It Seems Reasonable: The Bedrock Foundation of Reasonable Beliefs

The idea that reason demands justification for any given belief may seem problematic, as it gives rise to the so-called Münchhausen trilemma: what can ultimately justify our beliefs — a circular chain of justifications, an infinite chain, or a finite chain (or web) with brute facts at bottom? Supposedly, none of these options are appealing. Yet I disagree.

For I see nothing problematic about having a brute observation, or reason, at the bottom of our chain of justification, which I would indeed argue is exactly what constitutes, and all that ever could constitute, the rock-bottom justification for any reasonable belief. Specifically, that it just seems reasonable.

Many discussions go wrong here by conflating 1) ungrounded assumptions and 2) brute observations, which are by no means the same. For there is clearly a difference between believing that a car just drove by you based on a brute observation, i.e. a conscious sensation, that a car just drove by you, and merely assuming, without grounding in any reason or observation, that a car just drove by you.

Or consider another example: the fundamental constants in our physical equations. We ultimately have no deeper justification for the values of these constants than brute observation, and yet this clearly does not render our knowledge of these values merely assumed, much less arbitrarily or unjustifiably chosen. This is not to say that our observations of these values are infallible; future measurements may well yield slightly different or more precise values. Yet they are not arbitrary or unjustified.

The idea that brute observation cannot constitute a reasonable justification for a belief is, along with the idea that brute assumptions and brute observations are the same, a deeply misguided one, in my view. And this is not only true, I contend, of factual matters, but of all matters of reason, including ethics and epistemology, whether we deem these fields strictly factual or not. For instance, my own ethical view (which I have argued is a universal one), according to which suffering is disvaluable and ought to be reduced, does not, on my account, rest on a mere assumption. Rather, it rests on a brute observation of the undeniable intrinsic disvalue of the conscious states we call suffering. I have no deeper justification than this, nor is a deeper one required or even possible.

As I have argued elsewhere, such a foundationalist account is, I submit, the solution to the Münchhausen trilemma.

Deniers of Reason

If reason is the only reasonable starting point, why, then, do so many seem to challenge and reject it? There are a few things to say in response to this. First, those who criticize and argue against reason are not really, as I have argued above, criticizing reason, at least not in the general sense I have defined it here (since to criticize reason is to engage in it). Rather, they are, at most, criticizing a particular conception of reason, and that can of course be perfectly reasonable (I myself would criticize prevalent conceptions of reason as being much too narrow).

Second, there are indeed those who do not criticize reason, yet who nonetheless reject it, at least in some respects. These are people who refuse to join the conversation Steven Pinker referred to above; people who refuse to provide reasons, and who instead engage in forceful methods, such as silencing or extorting others, violently or otherwise. Examples include people who believe in some political ideology or religion, and who choose to suppress, or indeed kill, those who express views that challenge their own. Yet such actions do not pose a reasonable or compelling challenge to reason, nor can they be considered a reasonable alternative to the endeavor of reason.

As for why people choose to engage in such actions and refuse to engage in reason, one can also say a few things. First of all, the ability to engage in reason seems to require a great deal of learning and discipline, and not all of us have been fortunate enough to receive the schooling and discipline required. And even when we do have these things, engaging in reason is still an active choice that we can fail to make.

That is, doing what we find most reasonable is not an automatic, reflexive process, but rather a deliberate volitional one. It is clearly possible, for example, to act against one’s own better judgment. To go with seductive impulse and temptation — e.g. for sex, a cigarette, or social status — rather than what seems most reasonable, even to ourselves in the moment of weakness.

Reason Broadly and Developmentally Construed

The conception of reason I have outlined here is, it should be noted, not a narrow one. It is not committed to any particular ontological position, nor is it purely cerebral, as in restricted to merely weighing verbal or mathematical arguments. Instead, it is open to questioning everything, and takes input from all sources.

Nor would I be tempted to argue that we humans have some single, immutable faculty of reason that is infallible. Quite the contrary. Our assessments of what seems most reasonable in various domains rest on a wide variety of faculties and experiences, virtually none of which are purely innate. Indeed, these faculties, as well as our range of experience, can be continually expanded and developed as we learn more, both individually and collectively.

In this way, reason, as I conceive of it, is not only extremely broad but also extremely open-ended. It is not static, but rather self-regulating and self-updating, as when we realize that our thinking is tendentious and biased in many ways, and that our motives might not be what we (would like to) think they are. In this way, our capacity for reasoning has taught itself that it should be self-skeptical.

Yet this by no means gives way to pure skepticism. After all, our discovery of these tendencies is itself a testament to the power of our capacity to reason. Rather than completely undermine our trust in this capacity, discoveries of this kind simultaneously show both the enormous weakness and the enormous strength of our minds: how wrong we can be when we are not careful to try to be reasonable, and how much better informed we can become if we are. Such facts do not comprise a case against employing our capacity to reason, but rather a case for even more, and even more careful, employment of this capacity of ours.

Conclusion: A Call for Reason

As noted above, the endeavor of reason is not one that we pursue automatically. It takes a deliberate choice. In order to be able to assess and decide what seems most reasonable all things considered, one must first make an active effort to learn as much as one can about the nature of the world, and then consider the implications carefully.

What I have argued here is that there is no reasonable alternative to doing this; not that there is no possible alternative. For one can surely suspend reason and embrace blind faith, as many religious people do, or embrace unreasoned, incoherent, and self-refuting claims about reality, as many postmodernists do. Or one can go with whatever seems most pleasurable in the moment rather than what seems most reasonable all things considered, as we all do all too often. Yet one cannot reasonably choose such a suspension of reason. Indeed, merely not actively denying reason is not enough. The only reasonable choice, it seems, is to consciously choose to pursue the endeavor of reason.

In sum, I would join Aristotle in viewing reason, broadly construed, as our highest calling. That following what seems most reasonable all things considered is the best, most sensible choice before us. And hence that this is a choice we should all actively make.

 

 

Suffering, Infinity, and Universe Anti-Natalism

Questions that concern infinite amounts of value seem worth spending some time contemplating, even if those questions are of a highly speculative nature. For instance, if we assume a general expected value framework in which the expected value of a given outcome is its probability multiplied by its value, then any more than an infinitesimal probability of an outcome that has infinite value would imply that this outcome has infinite expected value, and hence that its expected value would trump that of any outcome with a “mere” finite amount of value.

Therefore, on this framework, even strongly convinced finitists are not exempt from taking seriously the possibility that infinities, of one ethically relevant kind or another, may be real. For however strong a conviction one may hold, maintaining only an infinitesimal probability that infinite value outcomes of some sort could be real seems difficult to defend.
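To make the reasoning explicit, here is a minimal sketch in symbols, assuming the simple probability-times-value form of expected value described above:

\[
\mathrm{EV}(O) \;=\; p(O)\cdot V(O), \qquad p(O) > 0 \ \text{(non-infinitesimal)},\; V(O) = \infty \;\;\Longrightarrow\;\; \mathrm{EV}(O) = \infty .
\]

Hence the expected value of O exceeds that of any rival outcome with a merely finite value, no matter how large that finite value is, and no matter how small (as long as non-infinitesimal) the probability p(O) is.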

Bounding the Influence of Expected Value Thinking

It is worth making clear, as a preliminary note, that we may reasonably put a bound on how much weight we give such an expected value framework in our ethical deliberations, so as to avoid crazy conclusions and actions; or simply to preserve our sanity, which may also be a priority for some.

In fact, it is easy to point to good reasons why we should constrain the influence of such a framework on our decisions. For although it seems implausible to entirely reject such an expected value framework in one’s moral reasoning, it would seem equally implausible to consider such a framework complete and exhaustive in itself. One reason is that thinking in terms of expected value is just one way among many to theorize about the world, and it seems difficult to justify granting it a particularly privileged status among these, especially given a tool-like conception of our thinking: if all our thinking about the world is best thought of as a set of tools that help us navigate the world rather than a set of Platonic ideals that perfectly track truths in a transcendent way, it seems difficult to elevate a single class of these tools, such as thinking in terms of expected value, to a higher status than all the others. Another reason is that we cannot readily put numbers on most things in practice, both for lack of time in most real-world situations and because, even when we do have time, the numbers we assign are often bound to be entirely speculative, if meaningful at all.

Just as we need more than theoretical physics to navigate the physical world, it seems likely that we will do well not to rely solely on an expected value framework to navigate the moral landscape, and this holds true even if all we care about is maximizing or minimizing the realization of a certain class of states. Using only a single style of thinking makes us inherently vulnerable to mistakes in our judgments, and hence resting everything on one style of thinking, without limits, seems risky and unwise.

It therefore seems reasonable to limit the influence of this framework, and indeed any single framework, and one proposed way of doing so is by giving it only a limited number of the seats of one’s notional moral parliament; say, 40 percent of them. In this way, we should be better able to avoid the vulnerabilities of relying on a single framework, while remaining open to be guided by its inputs.

What Can Be the Case?

To get an overview, let us begin by briefly surveying (at least some of) the landscape of conceivable possibilities concerning the size of the universe. Or, more precisely, the conceivable possibilities concerning the axiological size of the universe. For it is indeed possible, at least abstractly, for the universe to be physically finite, yet axiologically infinite; for instance, if some states of suffering are infinitely disvaluable, then a universe containing one or more such states would be axiologically infinite, even if physically finite.

In fact, a finite universe containing such states could be worse, indeed infinitely worse, than even a physically infinite universe containing an infinite amount of suffering, if the states of suffering realized in the finite universe are more disvaluable than the infinitely many states of suffering found in the physically infinite universe. (I myself find the underlying axiological claim here more than plausible: that a single instance of certain states of suffering — torture, say — is more disvaluable than infinitely many instances of milder states of suffering, such as pinpricks.)

It is also conceivable that the universe is physically infinite, yet axiologically finite; if, for instance, our axiology is non-additive, if the universe contains only infinitesimal value throughout, or if only a freak bubble of it contains entities of value. This last option may seem impossibly unlikely, yet it is conceivable. Infinity does not imply infinite repetition; the infinite sequence (1, 0, 0, 0, …) does not logically have to contain 1 again, and indeed doesn’t.

In terms of physical size, there are various ways in which infinity can be realized. For instance, the universe may be both temporally and spatially infinite in terms of its extension. Or it may be temporally bounded while spatially infinite in extension, or vice versa: be spatially finite, yet eternal. It should be noted, though, that these two may be considered equivalent, if we view only points in space and time as having value-bearing potential (arguably the only view consistent with physicalism, ultimately), and view space and time as a four-dimensional structure. Then one of these two universes will have infinite “length” and finite “breadth”, while the opposite is true of the other one, and a similar shape can thus be obtained via “90 degree” rotation.

Similarly, it is also conceivable (and apparently plausible) that the universe has a finite past and an infinite future, in which case it will always have a finite age, or it could have an infinite past and a finite future. Or, equivalently in spatial terms, be bounded in one spatial direction, yet have infinite extension in another.

Yet infinite extension is not the only conceivable way in which physical infinity may conceivably be realized. Indeed, a bounded space can, at least in one sense, contain more elements than an unbounded one, as exemplified by the cardinality of the real numbers in the interval (0, 1) compared to all the natural numbers. So not only might the universe be infinite in terms of extension, but also in terms of its divisibility — i.e. in terms of notional sub-worlds we may encounter as we “zoom down” at smaller scales — which could have far greater significance than infinite extension, at least if we believe we can use cardinality as a meaningful measure of size in concrete reality.

Taking this possibility into consideration as well, we get even more possible combinations — infinitely many, in fact. For example, we can conceive of a universe that is bounded both spatially and temporally, yet which is infinitely divisible. And it can then be infinitely divisible in infinitely many different ways. For instance, it may be divisible in such a way that it has the same cardinality as the natural numbers, i.e. its set of “sub-worlds” is countably infinite, or it could be divisible with the same cardinality as the real numbers, meaning that it consists of uncountably many “sub-worlds”. And given that there is no largest cardinality, we could continue like this ad infinitum.
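In standard set-theoretic notation, and assuming that cardinality can indeed serve as a measure of size in concrete reality, the hierarchy just described looks as follows:

\[
|\mathbb{N}| \;=\; \aleph_0 \;<\; 2^{\aleph_0} \;=\; |\mathbb{R}| \;=\; |(0,1)| \;<\; 2^{2^{\aleph_0}} \;<\; \cdots
\]

Cantor’s theorem guarantees that the power set of any set has strictly greater cardinality than the set itself, which is why this hierarchy never terminates in a largest cardinality.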

One way we could try to imagine the notional place of such small worlds in our physical world is by conceiving of them as in some sense existing “below” the Planck scale, each with their own Planck scale below which even more worlds exist, ad infinitum. Many more interesting examples of different kinds of combinations of the possibilities reviewed so far could be mentioned.

Another conceivable, yet supremely speculative, possibility worth contemplating is that the size of the universe is not set in stone, and that it may be up to us/the universe itself to determine whether it will be infinite, and what “kind” of infinity.

Lastly, it is also conceivable that the size of the universe, both in physical and axiological terms, cannot faithfully be conceived of with any concept available to us. So although the conceivable possibilities are infinite, it remains conceivable that none of them are “right” in any meaningful sense.

What Is the Case? — Infinite Uncertainty?

Unfortunately, we do not know whether the universe is infinite or not; or, more generally, which of the possibilities mentioned above are true of our condition. And there are reasons to think that we will never know with great confidence. For even if we were to somehow encounter a boundary encapsulating our universe, or otherwise find strong reasons for believing in one, how could we possibly exclude the possibility that something lies beyond that boundary? (Not to mention that the universe might still be infinitely divisible even if bounded.) Or, alternatively, even if we thought we had good reasons to believe that our universe is infinite, how can we be sure that the limited data we base that conclusion on can be generalized to locations arbitrarily far away from us? (This is essentially the problem of induction.)

Yet even if we thought we did know whether the universe is infinite with great confidence, the situation would arguably not be much different. For if we accept the proposition that we should have more than infinitesimal credence in any empirical claim about the world, what is known as Cromwell’s rule (I have argued that this applies to all claims, not just [stereotypically] “empirical” claims), then, on our general expected value framework, it would seem that any claim about the reality of infinite value outcomes should always be taken seriously, regardless of our specific credences in specific physical and axiological models of the universe.

In fact, not only should the conceivable realizations of infinity reviewed above be taken seriously (at least to the extent that they imply outcomes with infinite (dis)value), but so should a seemingly even more outrageous notion, namely that infinite (dis)value may rest on any particular action we do. However small a non-zero real-valued probability we assign such a claim — e.g. that the way you prepare your coffee tomorrow morning is going to impact an infinite amount of value — the expected value of getting the, indeed any, given action right remains infinite.

How should we act in light of this outrageous possibility?

Pascalian and Counter-Pascalian Claims

The problem, or perhaps our good fortune, is that, in most cases arguably, we do not seem to have reason to believe that one course of action is more likely to have an infinitely better outcome than another. For example, in the case of the morning coffee, we appear to have no more reason to believe that, say, making a strong cup of coffee will lead to infinitely more disvalue than making a mild one will, rather than it being the other way around. For such hypotheses, we seem able to construct an equal and oppositely directed counter-hypothesis.

Yet even if we concede that this is the case most of the time, what about situations where this is not the case? What about choices where we do have slightly better reasons to believe that one outcome will be infinitely better than another one?

This is difficult to address in the absence of any concrete hypotheses or scenarios, so I shall here consider the two specific cases, or classes of scenarios, where a plausible reason may be given in favor of thinking that one course of action will influence infinitely more value than another. One is the case of an eternal civilization: our actions may impact infinite (dis)value by impacting whether, and in what form, an eternal civilization will exist in our universe.

In relation to the (extremely unlikely) prospect of the existence of such a civilization, it seems that we could well find reasons to believe that we can impact an infinite amount of value. But the crucial question is: how? From the perspective of negative utilitarianism, it is far from clear what outcomes are most likely to be infinitely better than others. This is especially true in light of the other class of ways in which we may plausibly impact infinite value that I shall consider here, namely by impacting the creation of, or the unfolding of events in, parallel universes, which may eventually be infinitely numerous.

For not only could an eternal civilization that is the descendant of ours be better in “our universe” than another eternal civilization that may emerge in our place if we go extinct; it could also be better with respect to its effects on the creation of parallel universes, in which case it may be normative for negative utilitarians to work to preserve our civilization, contrary to what is commonly considered the ultimate corollary of negative utilitarianism (and this could also hold true if the temporal extension of our civilization is bound to be finite). Indeed, this could be the case even if no other civilization were to emerge instead of ours: if the impact our civilization will have on other universes results in less suffering than what would otherwise be created naturally. It is, of course, also likely that the opposite is the case: that the continuation of our civilization would be worse than another civilization or no civilization. And I must admit that I have no idea what is more likely to be the case.

So in these cases where reasons pointing more in one way than another plausibly could be found, it is not clear which direction that would be. Except perhaps in the direction that we should do more research on this question: which actions are more likely to reduce infinitely more suffering than others? Indeed, from the point of view of a suffering-focused expected value framework, it would seem that this should be our highest priority.

Ignoring Small Credences?

One may be skeptical of my claim above: can it really be true that the considerations, or at least my considerations, in the case of the continuation of civilization cancel out exactly? Is there not even the smallest difference? Not even a hunch?

In his paper on infinite ethics, Nick Bostrom argues that such an exact cancellation seems extraordinarily unlikely, and that small tips in balance seem to have counter-intuitive, if not catastrophic, consequences:

This cancellation of probabilities would have to be perfectly accurate, down to the nineteenth decimal place and beyond. […]

It would seem almost miraculous if these motley factors, which could be subjectively correlated with infinite outcomes, always managed to conspire to cancel each other out without remainder. Yet if there is a remainder—if the balance of epistemic probability happens to tip ever so slightly in one direction—then the problem of fanaticism remains with undiminished force. Worse, its force might even be increased in this situation, for if what tilts the balance in favor of a seemingly fanatical course of action is the merest hunch rather than any solid conviction, then it is so much more counterintuitive to claim that we ought to pursue it in spite of any finite sacrifice doing so may entail. The “exact-cancellation” argument threatens to backfire catastrophically.

I do not happen to share Bostrom’s view, however. Apart from the aforementioned bounding of the influence of expected value thinking, there are also ways, from within the expected value framework itself, to avoid the apparent craziness of letting our actions rest on the slightest hunch: disregarding sufficiently low credences.

Bostrom is skeptical of this approach:

As a piece of pragmatic advice, the notion that we should ignore small probabilities is often sensible. Being creatures of limited cognitive capacities, we do well by focusing our attention on the most likely outcomes. Yet even common sense recognizes that whether a possible outcome can be ignored for the sake of simplifying our deliberations depends not only on its probability but also on the magnitude of the values at stake. The ignorable contingencies are those for which the product of likelihood and value is small. If the value in question is infinite, even improbable contingencies become significant according to common sense criteria. The postulation of an exception from these criteria for very low-likelihood events is, at the very least, theoretically ugly.

Yet Bostrom here seems to ignore that “the value in question” is infinite for every action, cf. the point that we should maintain some small credence in every claim, including the claim that any given action may effect an infinite amount of (dis)value.

So in this way, no action we can point to is fundamentally different from any other. The only difference lies in our credence that a particular action may make “an infinite difference”, or that it makes “the greatest infinite difference”, compared to any other action. And when it comes to such credences, I would argue that it is eminently reasonable to ignore sufficiently small ones. In my view, not doing so would be the ugly thing, for the following reasons:

First, one could argue that, just as most models of physics break down beyond a certain range, it is reasonable to expect our ability to discriminate between different credence levels to break down when we reach a sufficiently fine scale. This is also well in line with the fact that it is generally difficult to put precise numbers on our credence levels with respect to specific claims. Thus, one could argue that we are way past the range of error of our intuitive credences when we reach the nineteenth decimal place.

This conclusion can also be reached via a rather different consideration: one can argue that our entire ontological and epistemological framework itself cannot be assumed credible with absolute certainty. Therefore, it would seem that our entire worldview, including this framework of assigning numerical values, or indeed any order at all, to our credences, should itself be assigned some credence of being wrong. And one can then argue, quite reasonably, that once we reach a level of credence in any claim that is lower than our level of credence in, say, the meaningfulness of ascribing credences in this way in the first place, this specific credence should be ignored, as it lies beyond what we consider the range of reliability of this framework in the first place.

In sum, I think it is fair to say that, when we only have a tiny credence that some action may be infinitely better than another, we should do more research and look for better reasons to act on, rather than act on such hunches. We can reasonably ignore exceptionally small credences in practice, as we indeed already do every time we make a decision based on calculations of finite expected values; we then ignore the tiny credence we should have that the value of the outcomes in question is infinite.

Infinitarian Paralyses?

Another thing Bostrom treats in his paper, actually the main subject of it, is whether the existence of infinite value implies, on aggregative consequentialist views, that it makes no difference what we do. As he puts it:

Aggregative consequentialist theories are threatened by infinitarian paralysis: they seem to imply that if the world is canonically infinite then it is always ethically indifferent what we do. In particular, they would imply that it is ethically indifferent whether we cause another holocaust or prevent one from occurring. If any non-contradictory normative implication is a reductio ad absurdum, this one is.

To elaborate a bit: the reason it is supposed to be indifferent whether we cause another holocaust is that the net sum of value in the universe supposedly is the same either way: infinite.

It should be noted, though, that whether this really is a problem depends on how we define and calculate the “sum of value”. And the question is then whether we can define this in a meaningful way that avoids absurdities and provides us with a useful ethical framework we can act on.

In my view, the solution to this conundrum is to give up our attachment to cardinal arithmetic. In a way, this is obvious: if you have an infinite set and add finitely many elements to it, you still have “the same as before”, in terms of the cardinality of the set. Yet, in another sense, we of course do not get “the same as before”, in that the new infinite set is not identical to the one we had before. Therefore, if we insist that adding another holocaust to a universe that already contains infinitely many holocausts should make a difference, we are simply forced to abandon standard cardinal arithmetic. In its stead, we should arguably just take our requirement as an axiom: that adding any amount of value to an infinity of value does make a difference — that it does change the “sum of value”.
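In symbols, the behavior of standard cardinal arithmetic that I am suggesting we abandon as a measure of the “sum of value” is simply:

\[
\aleph_0 + n \;=\; \aleph_0 \quad \text{for every finite } n, \qquad \text{and indeed} \qquad \aleph_0 + \aleph_0 \;=\; \aleph_0 ,
\]

so the cardinal “total” is literally unchanged by any finite (or even countably infinite) addition.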

This may seem simplistic, and one may reasonably ask how this “sum of value” could be defined. A simple answer is that we could add up whatever (presumably) finite difference we make within the larger (hypothetically) infinite world, and then consider that to be the relevant sum of value that should determine our actions, an approach that has been referred to as “the causal approach” to this problem.

This approach has been met with various criticisms, one of them being that it leaves “the total sum of value” unchanged. As Bostrom puts it:

One consequence of the causal approach is that there are cases in which you ought to do something, and ought to not do something else, even though you are certain that neither action would have any effect at all on the total value of the world.

I fail to see the appeal of this criticism, however, not least because it is deceptively phrased. For how is the “total value of the world” defined here? It is not the case that “the total value of the world” is left unchanged on every possible definition of these terms; it is only so on one particular definition, indeed one we have good reason to consider implausible and irrelevant. And the reason is that it implies that adding another holocaust makes no difference to the “total value of the world”. It then seems a strange move to say that it counts against a theory that it holds the prevention of finitely many holocausts to be normative because this has no “effect at all on the total value of the world” — by this very implausible definition. If forced to choose between these two mutually exclusive starting points — adding a holocaust makes a difference to the total value of the world or it does not — I think it is an easy choice. If we can help alleviate the extreme suffering of just a single being, while keeping all else equal, this being will hardly agree that “the total value of the world” was left unchanged by our actions. Not in any sensible sense.

More than that, I also think that an ethical theory’s telling us to ignore whatever lies outside our sphere of influence should not be considered a weakness, but rather a strength. Imagine by analogy a hypothetical Earth identical to ours, with the two exceptions that 1) it has been inhabited by humans for an eternal and unalterable past, over which infinitely many holocausts have taken place, and 2) it has a finite future; the universe it inhabits will end peacefully in a hundred years. Now, if people on this Earth held an ethical theory that does not take this unalterable infinite past into account, and instead focuses on the finite future, including preventing holocausts from happening in that future, would this count against that theory in any way? I fail to see how it could, and yet this is essentially the same as taking the causal approach within an infinite universe, only phrased more “unilaterally”, i.e. more purely in temporal rather than spatio-temporal terms.

Another criticism that has been leveled against the causal approach is that we cannot rule out that our causal impact may in some sense be infinite, and therefore it is problematic to say that we should measure the world’s value by, and take action based on, whatever finite difference we make. Here is Bostrom again:

When a finite positive probability is assigned to scenarios in which it is possible for us to exert a causal effect on an infinite number of value-bearing locations […] then the expectation value of the causal changes that we can make is undefined. Paralysis will thus strike even when the domain of aggregation is restricted to our causal sphere of influence.

Yet these claims actually do not follow. First, it should again be noted that the situation Bostrom refers to here is in fact the situation we are always in: we should always assign a positive probability to the possibility that we may effect infinite (dis)value. Second, we should be clear that the scenario where we can impact an infinite amount of value, and where we aggregate over the realm we can influence, is fundamentally different from the scenario in which we aggregate over an infinite universe that contains an infinite amount of value that we cannot impact. To the extent there are threats of “infinitarian paralysis” in these two scenarios, they are not identical.

For example, Bostrom’s claim that “the expectation value of the causal changes that we can make is undefined” need not be true even on standard cardinal arithmetic, at least in the abstract (i.e. if we ignore Cromwell’s rule), in the scenario where we focus only on our own future light cone. For it could be that the scenarios in which we can “exert a causal effect on an infinite number of value-bearing locations” were all scenarios that nonetheless contained only finite (dis)value, or, on a dipolar axiology, only a finite amount of disvalue and an infinite amount of value. A concrete example of the latter could be a scenario where the abolitionist project outlined by David Pearce is completed in an eternal civilization after a finite amount of time.

Hence, it is not necessarily the case that “paralysis will strike even when the domain of aggregation is restricted to our causal sphere of influence”, apart from in the sense treated earlier, when we factor in Cromwell’s rule: how should we act given that all actions may effect infinite (dis)value? But again, this is a very different kind of “paralysis” than the one that appears to be Bostrom’s primary concern, cf. this excerpt from the abstract of his paper Infinite Ethics:

Modern cosmology teaches that the world might well contain an infinite number of happy and sad people and other candidate value-bearing locations. Aggregative ethics implies that such a world contains an infinite amount of positive value and an infinite amount of negative value. You can affect only a finite amount of good or bad. In standard cardinal arithmetic, an infinite quantity is unchanged by the addition or subtraction of any finite quantity.

Indeed, one can argue that the “Cromwell paralysis” in a sense negates this latter paralysis, as it implies that it may not be true that we can affect only a finite amount of good or bad, and, more generally, that we should assign a non-zero probability to the claim that we can optimize the value of the universe everywhere throughout, including in those corners that seem theoretically inaccessible.

Adding Always Makes a Difference

As for the infinitarian paralysis supposed to threaten the causal approach in the absence of the “Cromwell paralysis” — how to compare the outcomes we can impact that contain infinite amounts of value? — it seems that we can readily identify reasonable consequentialist principles to act by that should at least allow us to compare some actions and outcomes against each other, including, perhaps, the most relevant ones.

One such principle is the one alluded to in the previous section: that adding something of (dis)value always makes a difference, even if the notional set we are adding it to contains infinitely many similar elements already. In terms of an axiology that holds the amount of suffering in the world to be the chief measure of value, this principle would hold that adding/failing to prevent an instance of suffering always makes for a less valuable outcome, provided that other things are equal, which they of course never quite are in the real world, yet they often are in expectation.

The following abstract example makes, I believe, a strong case for favoring such a measure of (dis)value over the cardinal sum of the units of (dis)value. As I formulate this thought experiment, this unit will, in accordance with my own view, be instances of intense suffering in the universe, yet the point applies generally:

Imagine that we have a universe with a countably infinite number of instances of intense suffering. We may visualize this universe as a unit ball. Now imagine that we perform an act in this universe that leaves the original universe unchanged, yet creates a new universe identical to the first one. The result is a new universe full of suffering. Imagine next that we perform this same act in a world where nothing exists. The result is exactly the same: the creation of a new universe full of suffering, in the exact same amount. In both cases, we have added exactly the same ball of infinite suffering. Yet on standard cardinal arithmetic, the difference the act makes in terms of the sum of instances of suffering is not the same in the two cases. In the first case, the total sum is the same, namely countably infinite, while there is an infinite difference in the second case: from zero to infinity. If we only count the difference added, however — the “delta universe”, so to speak — the acts are equally disvaluable in the two cases. The latter method of evaluating the (dis)value of the act seems far more plausible than does evaluation based on the cardinal sum of the units of (dis)value in the universe. It is, after all, the exact same act.
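To spell the thought experiment out in terms of cardinalities, as a sketch using the countable infinity of the example above:

\[
\underbrace{\aleph_0}_{\text{existing universe}} + \underbrace{\aleph_0}_{\text{new universe}} \;=\; \aleph_0 , \qquad \underbrace{0}_{\text{empty world}} + \underbrace{\aleph_0}_{\text{new universe}} \;=\; \aleph_0 ,
\]

so the cardinal sum registers no change in the first case and an infinite change in the second, even though the added delta, a universe containing countably infinitely many instances of suffering, is exactly the same in both.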

This is not an idle thought experiment. As noted above, impacting the creation of new universes is one of the ways in which we may plausibly be able to influence an infinite amount of (dis)value. Arguably even the most plausible one. Admittedly, it does rest on certain debatable assumptions about physics, yet these assumptions seem significantly more likely than does the possibility of the existence of an eternal civilization. For even disregarding specific civilization-hostile facts about the universe (e.g. the end of stars and a rapid expansion of space that is thought to eventually rip ordinary matter apart), we should, for any given year in the future, assign a probability strictly lower than 1 (indeed bounded away from 1) that civilization will survive that year, which means that the probability of extinction within a finite amount of time becomes arbitrarily close to 1.
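To spell out the step from yearly survival probabilities to near-certain extinction, here is the calculation, under the stated assumption that the yearly survival probability is bounded away from 1 by some q < 1:

\[
P(\text{civilization survives } n \text{ consecutive years}) \;\le\; q^{\,n} \;\longrightarrow\; 0 \quad \text{as } n \to \infty ,
\]

so the probability of extinction within some finite horizon can be made arbitrarily close to 1 by taking the horizon long enough. (If the yearly survival probabilities were instead allowed to approach 1 sufficiently fast, the product need not tend to zero, which is why the bounded-away-from-1 assumption is doing real work here.)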

In other words, an eternal civilization seems immensely unlikely, even if the universe were to stay perfectly life-friendly forever. The same does not seem true of the prospect of influencing the generation of new universes. As far as I can tell, the latter is in a ballpark of its own when it comes to plausible ways in which we may be able to effect infinite (dis)value, which is not to say that universe creation is more likely than not to become possible, but merely that it seems significantly more likely than other ways we know of in which we could effect infinite (dis)value (though, again, our knowledge of “such ways” is admittedly limited at this point, and something we should probably do more research on). Not only that, it is also something that could be relevant in the relatively near future, and more disvalue could depend on a single such near-future act of universe creation than what is found, intrinsically at least, in the entire future of our civilization. Infinitely more, in fact. Thus, one could argue that it is not our impact on the quality of life of future generations in our civilization that matters most in expectation, but our impact on the generation of universes by our civilization.

Universe Anti-Natalism: The Most Important Cause?

It is therefore not unthinkable that this should be the main question of concern for consequentialists: how does a given action impact the creation of new universes? Or, similarly, that trying to impact future universe generation should be the main cause for aspiring effective altruists. And I would argue that the form this cause should take is universe anti-natalism: avoiding, or minimizing, the creation of new universes.

There are countless ways to argue for this. As Brian Tomasik notes, creating a new universe that in turn gives rise to infinitely many universes “would cause infinitely many additional instances of the Holocaust, infinitely many acts of torture, and worse. Creating lab universes would be very bad according to several ethical views.”

Such universe creation would obviously be wrong from the stance of negative utilitarianism, as well as from similar suffering-focused views. It would also be wrong according to what is known as The Asymmetry in population ethics: that creating beings with bad lives is wrong, and something we have an obligation to not do, while failing to create happy lives is not wrong, and we have no obligation to bring such lives into being. A much weaker, and even less controversial, stance on procreative ethics could also be used: do not create lives with infinite amounts of torture.

Indeed, how, we must ask ourselves, could a benevolent being justify bringing so much suffering into being? What could possibly justify the Holocaust, let alone infinitely many of them? What would be our answer to the screams of “why” to the heavens from the torture victims?

Universe anti-natalism should also be taken seriously by classical utilitarians, as a case can be made that the universe is likely to end up being net negative in terms of algo-hedonic tone. For instance, it may well be that most sentient life that will ever exist will find itself in a state of natural carnage: civilizations may be rare even on planets where sentient life has emerged, and even where civilizations have emerged, they may be unlikely to be sustainable, perhaps overwhelmingly so. This implies that most sentient life might be expected to exist at the stage it has existed at for the entire history of sentient life on Earth: a stage where sentient beings are born in great numbers only for the vast majority of them to die shortly thereafter, for instance due to starvation or by being eaten alive, which is most likely a net negative condition, even by wishful classical utilitarian standards. Simon Knutsson’s essay How Could an Empty World Be Better than a Populated One? is worth reading in this context, and of course applies to “no world” as well.

And if one takes a so-called meta-normative approach, where one decides by averaging over various ethical theories, one could argue that the case against universe creation becomes significantly stronger; for instance, if one combines an unclear or negative-leaning verdict from a classical utilitarian stance with The Asymmetry and Kantian ethics.

As for those who hold anti-natalism at the core of their values, one could argue that they should make universe anti-natalism their main focus over human anti-natalism (which may not even reduce suffering in expectation), or at the very least expand their focus to also encompass this apparently esoteric position. Not only is the scale potentially unsurpassable in terms of the number of births prevented; it may also be easier, both because wishful thinking along the lines of “those horrors will not befall my creation” could be more difficult to maintain in the face of horrors that we know have occurred in the past, and because we do not seem as attached and adapted, biologically and culturally, to creating new universes as we are to creating new children. And just as anti-natalists argue with respect to human life, being against the creation of new universes need not be incompatible with a responsible sustainment of life in the one that does exist. This might also be a compromise solution that many people would be able to agree on.

Are Other Things Equal?

The discussion above assumes that the generation of a new universe would leave all else equal, or at least leave all else merely “finitely altered”. But how can we be sure that the generation of a new universe would not in fact prevent the emergence of another? Or perhaps even prevent many infinite universes from emerging? We can’t. Yet we do not appear to have any reason for believing that this is the case. As noted above, all else will often be equal in expectation, and that also seems true in this case. We can make counter-Pascalian hypotheses in both directions, and in the absence of evidence for any of them, we appear to have most reason to believe that the creation of a new universe results, in the aggregate, in a net addition of a new universe. But this could of course be wrong.

For instance, artificial universe creation would be dwarfed by the natural universe generation that happens all the time according to inflationary models, so could it not be that the generation of a new universe might prevent some of these natural ones from occurring? I doubt that there are compelling reasons for believing this, but natural universe generation does raise the interesting question of whether we might be able to reduce the rate of this generation. Brian Tomasik has discussed the idea, yet it remains an open, and virtually unexplored, research question. One that could dominate all other considerations.

It may be objected that considerations of identical, or virtually identical, copies of ourselves throughout the universe have been omitted in this discussion, yet as far as I can tell, including such considerations would not change the discussion in a fundamental way. For if universe generation is the main cause and most consequential action to focus on for us, more important even than the intrinsic importance of the entire future of our civilization, then this presumably applies to each copy of ourselves as well. Yet I am curious to hear arguments that suggest otherwise.

A final miscellaneous point I should like to add here is that the points made above may apply even if the universe is, and only ever will be, finite, as the generation of a new finite pocket universe in that case still could bring about far more suffering than what is found in the future light cone of our own universe.

Implications for Artificial Intelligence in Brief

The prospect of universe generation, and the fact that it may dominate everything else, also seems to have significant implications for our focus on the future of artificial intelligence, one of them being, as hinted above, that altruists should perhaps not focus on artificial intelligence as their main cause (and that we should be careful about claiming it is clear that they should, since we may thereby risk overlooking crucial considerations). This would be the case, for instance, if artificial intelligence is sufficiently unlikely ever to “take over” in the way that is often feared, or if focusing directly on researching or arguing against universe generation has higher expected value.

Moreover, it suggests that, to the extent altruists indeed should focus primarily on artificial intelligence, this would be to the extent that artificial intelligence will determine the rate of universe generation in the universe. This might be the main thing to focus on when implementing “Fail-Safe” measures in artificial intelligence, or in any kind of future civilization, to the extent implementation of such measures is feasible.

 

In conclusion, the subjects of the potential to effect infinite (dis)value in general, and of impacting universe generation in particular, are extremely neglected at this point, and a case can be made that more research into such possibilities should be our top priority. It seems conceivable that a question related to such a prospect — e.g. should we create more universes? — will one day be the main ethical question facing our civilization, perhaps even one we will be forced to decide upon in a not too distant future. Given the potentially enormous stakes, it seems worth being prepared for such scenarios — including knowing more about their nature, how likely they are, and how to best act in them — even if they are unlikely.

Notes on the Utility of Anti-Speciesist Advocacy

I recently took part in a panel discussion, alongside Leah Edgerton, Tobias Leenaert, Oscar Horta, and Jens Tuider (moderator), on whether animal advocates should focus on veganism or anti-speciesism (I’ve outlined my own view here). In my opinion, the discussion went well, not least because there was a sense of a shared underlying goal among the panelists, as well as a high level of intellectual openness, humility, and friendliness.

Unfortunately, yet predictably, the limited time available for each person to speak in such a panel discussion meant that I didn’t get to make half of the points I wanted to (in spite of the fact that I, rather discourteously, seemed to take up a disproportionate share of the speaking time; my passion for the subject got the better of me, I’m afraid). And since I had these unshared points written down already, it seemed worthwhile to publish them here for everyone to read.

Main Points: Scale and Receptivity

Two main points in favor of anti-speciesist advocacy that I did get to make, albeit briefly, have to do with scale and receptivity. In terms of scale, anti-speciesist advocacy is better than vegan advocacy, as well as other forms of advocacy that focus only on beings exploited by humans, in that it pertains to all non-human animals, including those who live in nature.

At an intuitive level, this may seem like a small point in favor of anti-speciesist advocacy. “+1 to anti-speciesist advocacy for being better in terms of scale.” Yet to think in this way is to fail to appreciate the actual numbers. Just as the much greater number of “farm animals” compared to the number of “pets” is a huge rather than small point in favor of focusing on the former rather than the latter in our advocacy, the much greater number of beings that anti-speciesist advocacy pertains to is an extremely significant point in its favor.

And, in terms of numbers at least, this analogy is actually strikingly accurate: the number of “farm animals” is, on some estimates, about a thousand times greater than the number of “pets”, while the number of non-human animals in nature is about a thousand times greater than the number of “farm animals”. A thousand times is a lot, and yet this is only counting vertebrates; the number is much greater if we include invertebrates in our considerations as well, as we should. In other words, if we include invertebrates in our considerations, the analogy to the ratio between “farm animals” and “pets” is actually much too weak. Yet our intuitions have a hard time appreciating such big numbers. Especially when the beings in question live in nature.
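To make the compounding of these rough ratios explicit, here is a minimal back-of-the-envelope sketch. The absolute counts are illustrative placeholders chosen merely to reflect the “about a thousand times” estimates cited above; they are not precise figures.

```python
# Illustrative back-of-the-envelope comparison of population scales.
# The absolute counts are placeholder orders of magnitude; only the rough
# "about a thousand times" ratios from the text are doing any work here.
pets = 10**8                       # companion animals (illustrative placeholder)
farmed = pets * 1000               # "farm animals": roughly a thousand times more than pets
wild_vertebrates = farmed * 1000   # wild vertebrates: roughly a thousand times more than farmed

print(f"farmed : pets             = {farmed // pets:,} : 1")
print(f"wild vertebrates : farmed = {wild_vertebrates // farmed:,} : 1")
print(f"wild vertebrates : pets   = {wild_vertebrates // pets:,} : 1")
# The ratios compound: wild vertebrates outnumber pets by roughly a million to one,
# and including invertebrates widens the gap by further orders of magnitude.
```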

Thus, in terms of scale, the actions of many aspiring effective animal advocates may be more akin to donations to local animal shelters than they would like to think. This is not surprising. We humans are notorious group thinkers, and the animal movement has traditionally focused only on beings exploited by humans. Consequently, we should expect this history to bias us strongly toward that focus (objections to this controversial paragraph, e.g. “we should focus on beings exploited by humans first”, may be found answered here and here, as well as in the section on objections below).

The other main point in favor of anti-speciesist advocacy has to do with people’s receptivity toward anti-speciesist advocacy. In light of the above, one may think “sure, anti-speciesist advocacy is best in terms of scale, but will people be receptive to such advocacy? Isn’t it too abstract?”

This is an empirical question, and more research on it is urgently needed. Yet there are at least tentative reasons for thinking that people in fact are receptive to such advocacy, perhaps even more so than to most other forms of advocacy. One line of evidence comes from Oscar Horta, who has delivered talks on speciesism and conducted surveys after these talks, finding that, surprisingly, “most people who attended these talks accepted the arguments against speciesism.” Horta reports further interesting findings as well, including that a focus on speciesism may be the best way to promote veganism, yet given that I have already reported on some of these findings elsewhere, and linked to his own report above, I shall not delve further into them here.

Another line of evidence comes from a study conducted by Vegan Outreach in 2016, in which they tested four different booklets against each other, one of which focused on the case against speciesism (another centered on a “reduce your consumption” message, and another on the harms that “farm animals” suffer), and then examined which of them led to the greatest reduction in the consumption of “animal products”. The results, in a nutshell, were that all the booklets caused a significant reduction in such consumption among readers, and that the booklet focusing on speciesism did best of all, although the difference was not statistically significant.

In light of this (admittedly limited) data, we have reasons to think that, even if our only goal were to make people reduce their consumption of “animal products”, focusing on the case against speciesism is at least roughly as good as other, more traditional forms of advocacy.

And yet such a narrow focus cannot be defended. As I also argued during the panel discussion, we have an unfortunate tendency in our movement to view “total consumption of animal products” as a good measure of the quality of the (non-human) sentient condition on the planet, or at least of “how well we’re doing”. It is not. It only says something about a tiny fraction of the non-human beings on the planet, and we cannot defend excluding the rest, those not exploited directly by humans, from our considerations. This is not to say that such consumption is not an important measure to look at, merely that it is hopelessly insufficient.

In conclusion, when we combine these two considerations — a much greater scope in terms of the number of beings our advocacy pertains to, and a level of receptivity toward anti-speciesist advocacy that seems roughly as good as that of other forms of advocacy, perhaps even better — we seem to have good reason to focus on anti-speciesist advocacy. And if we then factor in the neglectedness of such advocacy compared to the forms of advocacy and tactics we have traditionally been pursuing, including tech innovation such as in vitro meat, which has millions of US dollars in funding, the case becomes stronger still.

Objections to Anti-Speciesist Advocacy

But What About the Tractability of the Problem of Suffering in Nature?

While it is true that anti-speciesist advocacy seems optimal in terms of scale because it also includes wild animal suffering, one may object that the tractability of suffering in nature has been left out of the picture in this analysis.

In response to this, I would argue that the tractability of the problem of suffering in nature is highly uncertain at this point. Yet given that the number of “wild animals” is more than a thousand times that of “farmed animals”, the problem of suffering in nature would have to be more than a thousand times less tractable (to the extent we can meaningfully say such a thing) than the problem of suffering among “farmed animals” in order for it to make sense to focus on the latter over the former. It is far from clear to me that this is the case. More than that, this all seems to rest on an assumed need to focus on one over the other, which leads to the second point I would make in response to this objection.
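To spell out the break-even point implied by this reasoning, here is a minimal sketch. It assumes a simple “expected impact ≈ scale × tractability” model and a 1,000 : 1 scale ratio; both are simplifying assumptions on my part, not claims made during the panel discussion.

```python
# Minimal sketch of the break-even argument, assuming expected impact
# is roughly (number of beings affected) x (tractability of helping them).
scale_ratio = 1000  # assumption: wild animals ~1,000x more numerous than farmed animals

def farmed_focus_wins(relative_tractability_of_wild: float) -> bool:
    """relative_tractability_of_wild: tractability of reducing wild-animal
    suffering relative to farmed-animal suffering (0.001 = 1,000x less tractable)."""
    wild_impact = scale_ratio * relative_tractability_of_wild
    farmed_impact = 1.0
    return farmed_impact > wild_impact

print(farmed_focus_wins(0.01))    # False: even if 100x less tractable, the wild-animal focus wins
print(farmed_focus_wins(0.0001))  # True: only if more than 1,000x less tractable does the farmed focus win
```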

For I do not believe we have to focus on one over the other. Anti-speciesist advocacy defends both “farmed animals” and “wild animals”, and, as seen above, it may be as successful with regard to the former as other forms of advocacy, implying that, even given high uncertainty concerning the tractability of wild animal suffering, anti-speciesist advocacy still seems a strategy worth pursuing. Again, in light of the notes on receptivity above, one could make a case that, even if we only cared about the wrongs done to beings exploited by humans, we should focus on anti-speciesist advocacy.

Similarly, even if there were a conflict between a focus on “wild animals” and a focus on “farm animals”, and even if suffering in nature indeed were a thousand times less tractable to address than suffering caused by direct human exploitation, the much greater neglectedness of wild animal suffering would still make a case for doing advocacy that pertains to it, as anti-speciesist advocacy does.

I Don’t Think Non-Human Individuals in Nature Have Net Negative Lives

Opposing discrimination against individuals in nature in general, and defending the claim that we should help them to the extent we can in particular, does not rest on the claim that such beings live net negative lives, any more than the claim that we should not discriminate against other human individuals, and help them when we can — for instance under circumstances of famine or other catastrophes — rests on the claim that such humans have net negative lives.

(That being said, I have made a theoretical case for wildlife anti-natalism here, in which I argue that merely applying a non-speciesist position on procreative ethics implies that we should, in theory/if we can keep other things equal, prevent the births of the vast majority of non-human individuals in nature. More than that, I think we do tend to significantly underestimate how bad [at least many of] the lives of non-human beings in nature in fact are.)

Another point I would make in response to this claim is that, even on the conservative assumption that only one in ten non-human individuals in nature has a life as bad as that of the average non-human individual cursed to live out their life on a factory farm, the large difference in the number of beings in nature versus on factory farms still implies that there are more than a hundred times as many non-human beings living very bad lives in nature as there are on factory farms. Even given such a relatively small “concentration of suffering” in nature, then, the greatest opportunity for reducing total suffering still lies there.
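The arithmetic behind this claim is straightforward; the sketch below simply makes it explicit, reusing the rough 1,000 : 1 population ratio from earlier together with the “one in ten” assumption just stated.

```python
# The arithmetic behind the "more than a hundred times" claim.
population_ratio = 1000   # rough estimate from above: wild animals per factory-farmed animal
fraction_very_bad = 0.1   # conservative assumption: 1 in 10 wild lives are as bad as a factory-farmed life

ratio_of_very_bad_lives = population_ratio * fraction_very_bad
print(ratio_of_very_bad_lives)  # 100.0 -> ~100x as many very bad lives in nature as on factory farms
```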

Isn’t Anti-Speciesism Too Abstract?

More specifically: don’t we risk turning people off by seeming to claim that, say, a mosquito has the same moral value as an elephant?

I would make a few distinct points in response to this objection. First, to the extent this is a problem, we can say that anti-speciesism does not imply that all beings should be prioritized equally, just as total opposition to discrimination within the human species does not imply that, say, a human fetus has the same moral value as an adult human individual. The specific traits of a being do matter, and anti-speciesism does not require us to overlook these differences, but rather to give equal weight to equal interests.

Second, I would argue that, to the extent anti-speciesism promotes more concern for smaller beings compared to other forms of advocacy, this is actually one of its main strengths rather than a weakness, as we generally underestimate the moral value of small beings. One way to see this is to consider the numbers. If we take fish, for instance, it is estimated that there are 10,000 times as many fish on the planet as there are humans, yet fish do not tend to weigh correspondingly heavily on our moral scales, even among animal advocates.

And if we consider invertebrates, our focus seems even more misaligned still, as it is estimated that there are about ten quintillion insects on the planet, ten to the power of nineteen, and yet we mostly fail to take them seriously in moral terms. One might then object that the number of beings is not a good measure of moral value; rather, one may argue, we should look at the total number of neurons as a better measure. Yet even if we adopt neuron count as a proxy for moral value, the moral weight of the insect realm still appears staggering, as there are, on a rough estimate at least, a hundred times more insect neurons on the planet than there are human neurons.

(I am not claiming that the number of neurons is a perfect proxy for moral value by any means, but merely that no matter which of these simple, and probably not entirely meritless, measures we use, we appear to underestimate small beings a lot; Brian Tomasik’s Is Brain Size Morally Relevant? is quite apropos here, although I should note that I strongly disagree with his view of consciousness.)
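As an order-of-magnitude check on the neuron comparison above, here is a sketch using round, illustrative figures: roughly 10^19 insects with on the order of 10^4 neurons each, and roughly 10^10 humans with on the order of 10^11 neurons each. These inputs are my own assumptions, chosen to be consistent with the rough estimate cited, not precise data.

```python
# Order-of-magnitude check on the insect vs. human neuron comparison.
# All inputs are rough, illustrative assumptions.
insects = 10**19              # ~ten quintillion insects on the planet
neurons_per_insect = 10**4    # assumed average; most insects are very small
humans = 10**10               # human population, rounded up to an order of magnitude
neurons_per_human = 10**11    # ~86 billion neurons per human, rounded up

insect_neurons = insects * neurons_per_insect   # ~1e23
human_neurons = humans * neurons_per_human      # ~1e21
print(insect_neurons / human_neurons)           # ~100: roughly a hundred times more insect neurons
```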

Why we underestimate smaller beings is a question worth pondering, I think, and I believe we can readily identify at least three reasons. First, small beings, such as fish and insects, tend to be more numerous, which makes greater moral concern for them inconvenient, and we are generally biased against inconvenient updates. Second, smaller beings are generally very different from us in terms of what their bodies look like, which makes it more difficult to empathize with them, even disregarding the size difference. For instance, feeling empathy for a chimpanzee-sized insect or fish seems more challenging than feeling it for a chimpanzee. Third, the size difference itself seems likely to make us more biased against smaller beings as well. Compare the difficulty of feeling compassion for a normal-sized chimpanzee versus feeling it for an ant-sized chimpanzee. Or for a lobster versus an ant; the latter actually has more than twice as many neurons as the former.

Another distinct point I would make in relation to this objection is that the case against speciesism is very similar, in terms of its form, to the case against racism, and most people seem to accept the latter today, implying that there may be much ready potential we can tap into here. The argument against racism does not seem too intellectually advanced for most people, which provides an additional reason to question the intuitive assumption that the case against speciesism must be too advanced or abstract for most people to follow (along with the non-peer-reviewed studies cited above that tentatively hint at the same). More than that, the philosophical case against speciesism also happens to be exceptionally strong, much stronger than we animal advocates tend to realize — the literature that argues in favor of speciesism is surprisingly thin and weak — and I think we ignore this strength at our peril. We have a powerful tool at our disposal that we refuse to employ.

Anti-Speciesism Is Often Better than (Naive) Consequentialist Calculations

If one is a wannabe consequentialist rationalist, it is easy to be misguided about where much of our moral wisdom comes from, by imagining that we have gained it via clever deductive consequentialist analyses. Yet for the most part, this is not the case. Our rejection of racism today, for instance, is mostly due to gradually accumulated cultural evolution, including lessons from history; it has not primarily been due to consequentialist arguments (to the extent arguments have played a crucial role, it seems to me that they have rested on consistency instead). As a result, we have now arrived at a moral wisdom that is deeper, I believe, than what a simple chain of consequentialist reasoning could readily have produced prior to this cultural change (after all, how would you make a solid consequentialist case that human slavery is wrong? It is not easy. And if you can, would it apply equally to the property status of non-human individuals? If not, why not?).

And I think the same applies to anti-speciesism: it tends to be wiser than naive consequentialist analyses. It provides us with a free download of the full package of the moral progress we have made over the last few centuries with respect to human individuals, ready for us to apply to non-human individuals by simply using the heuristic “what would we do if they were human?” With this package installed, we can quickly gain wise views on many ethical issues pertaining to non-human beings, including “happy meat” and veganism — it provides a clear case against the former and for the latter. One could otherwise be forced to spend a long time arguing for these conclusions if one insisted on employing directly consequentialist arguments, even though these conclusions arguably are what a complete consequentialist analysis would recommend (I believe Brian Tomasik would mostly disagree, although he would do so for complicated reasons).

New Information: Have We Updated Sufficiently?

Something I think we should be wary of is the following pattern: we build up our views on a matter over a long period of time, then encounter a crucial new piece of information that changes our outlook completely, yet fail to properly update the views we built and consolidated in the absence of that information.

To be more concrete: I think many animal advocates have spent a lot of time thinking hard about how to best advocate for non-human animals so as to reduce their suffering as much as possible. Unfortunately, what they have been thinking hard about has “merely” been what we should do in order to reduce the suffering of non-human beings exploited by humans, and they have then built up their preferred strategy for advocating for non-human individuals based on this outlook. A positive thing that has then happened for many of these advocates is that they have become convinced of the importance of wild animal suffering; this would be the “crucial piece of information” in the more general statement of this “updating problem” above.

This has changed the outlook of these advocates completely in some ways, yet it seems to me that their preferred advocacy strategy has remained suspiciously unchanged, which should give them pause. We have had our minds opened to a piece of information that changes everything: the vast majority of beings are not found in the realm we have been focusing on for all these years. Yet the ideal form of advocacy somehow remains exactly the same as before we came upon this information: advocacy that pertains exclusively to the beings we used to think were the only beings to whom we owed any obligations.

This makes little sense, although one can say that, in one sense, it makes perfect sense: a view that one has built over many years is unlikely to be changed overnight, especially if one has thought a lot about it. Yet this is a psychological explanation of the phenomenon in question; it is not an explanation that defends it as reasonable in any way.

In conclusion, I would encourage all animal advocates to reflect upon whether they have factored in the obligations we owe to non-human individuals in nature in their current view of the best advocacy strategies and tactics. As far as I can tell, virtually none of us have.

 

 

 
