A virtue-based approach to reducing suffering given long-term cluelessness

This post is a follow-up to my previous essay on reducing suffering given long-term cluelessness. Long-term cluelessness is the idea that we have no clue which actions are likely to create better or worse consequences across the long-term future. In my previous post, I argued that even if we grant long-term cluelessness (a premise I remain skeptical of), we can still steer by purely consequentialist views that do not entail cluelessness and that can ground a focus on effective suffering reduction.

In this post, I will outline an alternative approach centered on virtues. I argue that even if we reject or find no guidance in any consequentialist view, we can still plausibly adopt a virtue-based approach to reducing suffering, including effective suffering reduction. Such an approach can help guide us independently of consequentialist uncertainty.


Contents

  1. What would a virtue-based approach entail?
  2. Justifications for a virtue-based approach
  3. A virtue-based approach to effective suffering reduction
  4. Conclusion

What would a virtue-based approach entail?

It can be difficult to say exactly what a virtue-based approach to reducing suffering would entail. Indeed, an absence of clear and simple rules, together with an emphasis on responding wisely to ambiguity through good practical judgment, is typical of virtue-based approaches in ethics.

That said, in the broadest terms, a virtue-based approach to suffering involves having morally appropriate attitudes, sentiments, thoughts, and behaviors toward suffering. It involves relating to suffering in the way that a morally virtuous person would relate to it.

Perhaps more straightforwardly, we can say what a virtue-based approach would definitely not involve. For example, it would obviously not involve extreme vices like sadism or cruelty, nor would it involve more common yet still serious vices like being indifferent or passive in the face of suffering.

However, a virtue-based approach would not merely involve the morally unambitious aim of avoiding serious vices. It would usually be much more ambitious than that, encouraging us to aim for moral excellence across all aspects of our character — having deep sympathy and compassion, striving to be proactively helpful, having high integrity, and so on.

In this way, a virtue-based approach may invert an intuitive assumption about the implications of cluelessness. That is, rather than seeing cluelessness as a devastating consideration that potentially opens the floodgates to immoral or insensitive behavior, we can instead see it as paving the way for a focus on moral excellence. After all, if no consequentialist reasons count against a strong focus on moral excellence under assumed cluelessness, then arguably the strongest objections against such a focus fall away. As a result, we might no longer have any plausible reason not to pursue moral excellence in our character and conduct. At a minimum, we would no longer have any convenient consequentialist-framed rationalizations for our vices.

Sure, we could retreat to simply being insensitive and disengaged in the face of suffering — or even retreat to much worse vices — but I will argue that those options are less plausible.

Justifications for a virtue-based approach

There are various possible justifications for the approach outlined above. For example, one justification might be that having excellent moral character simply reflects the kind of person we ideally want to be. For some of us, such a personal desire might in itself be a sufficient reason for adopting a virtue-based approach in some form.

Complementary justifications may derive from our moral intuitions. For instance, all else equal, we might find it intuitive that it is morally preferable to embody excellent moral character than to embody serious vices, or that it is more ethical to display basic moral virtues than to lack such virtues (see also Knutsson, 2023, sec. 7.4). (Note that this differs from the justification above in that we need not personally want to be virtuous in order to have the intuition that it is more ethical to be that way.)

We may also find some justification in contractualist considerations or considerations about what kind of society we would like to live in. For example, we may ideally want to live in a society in which people adhere to virtues of compassion and care for suffering, as well as virtues of effectiveness in reducing suffering (more on this in the next section). Under contractualist-style moral frameworks, favoring such a society would in turn give us moral reason to adhere to those virtues ourselves.

A virtue-based approach might likewise find support if we consider specific cases. For example, imagine that you are a powerful war general whose soldiers are committing heinous atrocities that you have the power to stop — with senseless torture occurring on a large scale that you can halt immediately. And imagine that, given your subjective beliefs, your otherwise favored moral views all fail to give any guidance in this situation (e.g. due to uncertainty about long-term consequences). In contrast, ending the torture would obviously be endorsed by any commonsense virtue-based stance, since that is simply what a virtuous, compassionate person would do regardless of long-term uncertainty. If we agree that ending the torture is the morally right response in a case like this, then this arguably lends some support to such a virtue-based stance (as well as to other moral stances that imply the same response).

In general terms, we may endorse a virtue-based approach partly because it provides an additional moral safety net that we can fall back on when other approaches fail. That is, even if we find it most plausible to rely on other views when these provide practical recommendations, we might still find it reasonable to rely on virtue-based approaches in case those other views fall silent. Having virtue ethics as such a supportive layer can help strengthen our foundation and robustness as moral agents.

(One could also attempt to justify a virtue-based approach by appealing to consequentialist reasoning. Indeed, it could be that promoting a non-consequentialist virtue-based stance would ultimately create better consequences than not doing so. For example, the absence of such a virtue-based stance might increase the risk of extremely harmful behavior among moral agents. However, such arguments would involve premises that are not the focus of this post.)

A virtue-based approach to effective suffering reduction

One might wonder whether a virtue-based approach can ground effective suffering reduction of any kind. That is, can a virtue-based approach ground systematic efforts to reduce suffering effectively with our limited resources? In short, yes. If one deems it virtuous to try to reduce suffering in systematic and effective ways (at least in certain decisions or domains), then a virtue-based approach could provide a moral foundation for such efforts.

For instance, if given a choice between saving 10 versus 1,000 chickens from being boiled alive, we may consider it more virtuous — more compassionate and principled — to save the 1,000, even if we had no idea whether that choice ultimately reduces more suffering across all time or across all consequences that we could potentially assess.

To take a more realistic example: in a choice between donating either to a random charity or to a charity with a strong track record of preventing suffering, we might consider it more virtuous to support the latter, even if we do not know the ultimate consequences.

How would such a virtue-based approach be different from a consequentialist approach? Broadly speaking, there can be two kinds of differences. First, a virtue-based approach might differ from a consequentialist one in terms of its practical implications. For instance, in the donation example above, a virtue-based approach might recommend that we donate to the charity with a track record of suffering prevention even if we are unable to say whether it reduces suffering across all time or across all consequences that we could potentially assess.

Second, even if a virtue-based view had all the same practical implications as some consequentialist view, there would still be a difference in the underlying normative grounding or basis of these respective views. The consequentialist view would be grounded purely in the value of consequences, whereas the virtue-based view would not be grounded purely in that (even if the disvalue of suffering may generally be regarded as the most important consideration). Instead, the virtue-based approach would (also) be grounded at least partly in the kind of person it is morally appropriate to be — the kind of person who embodies a principled and judicious compassion, among other virtues (see e.g. the opening summary in Hursthouse & Pettigrove, 2003).

In short, virtue-based views represent a distinctive way in which some version of effective suffering reduction can be grounded.

Conclusion

There are many possible moral foundations for reducing suffering (see e.g. Vinding, 2020, ch. 6; Knutsson & Vinding, 2024, sec. 2). Even if we find one particular foundation to be most plausible by far, we are not forced to rest absolutely everything on such a singular and potentially brittle basis. Instead, we can adopt many complementary foundations and approaches, including an approach centered on excellent moral character that can guide us when other frameworks might fail. I think that is a wiser approach.

Reducing suffering given long-term cluelessness

An objection against trying to reduce suffering is that we cannot predict whether our actions will reduce or increase suffering in the long term. Relatedly, some have argued that we are clueless about the effects that any realistic action would have on total welfare, and this cluelessness, it has been claimed, undermines our reason to help others in effective ways. For example, DiGiovanni (2025) writes: “if my arguments [about cluelessness] hold up, our reason to work on EA causes is undermined.”

There is a grain of truth in these claims: we face enormous uncertainty when trying to reduce suffering on a large scale. Of course, whether we are bound to be completely clueless about the net effects of any action is a much stronger and more controversial claim (and one that I am not convinced of). Yet my goal here is not to discuss the plausibility of this claim. Rather, my goal is to explore the implications if we assume that we are bound to be clueless about whether any given action overall reduces or increases suffering.

In other words, without taking a position on the conditional premise, what would be the practical implications if such cluelessness were unavoidable? Specifically, would this undermine the project of reducing suffering in effective ways? I will argue not. Even if we grant complete cluelessness and thus grant that certain moral views provide no practical recommendations, we can still reasonably give non-zero weight to other moral views that do provide practical recommendations. Indeed, we can find meaningful practical recommendations even if we hold a purely consequentialist view that is exclusively concerned with reducing suffering.


Contents

  1. A potential approach: Giving weight to scope-adjusted views
  2. Asymmetry in practical recommendations
  3. Toy models
  4. Justifications and motivations
    1. Why give weight to multiple views?
    2. Why give weight to a scope-adjusted view?
  5. Arguments I have not made
  6. Conclusion
  7. Acknowledgments

A potential approach: Giving weight to scope-adjusted views

There might be many ways to ground a reasonable focus on effective suffering reduction even if we assume complete cluelessness about long-term consequences. Here, I will merely outline one candidate option, or class of options, that strikes me as fairly reasonable.

As a way to introduce this approach, say that we fully accept consequentialism in some form (notwithstanding various arguments against being a pure consequentialist, e.g. Knutsson, 2023; Vinding, 2023). Yet despite being fully convinced of consequentialism, we are uncertain or divided about which version of consequentialism is most plausible.

In particular, while we give most weight to forms of consequentialism that entail no restrictions or discounts in their scope, we also give some weight to views that entail a more focused scope. (Note that this kind of approach need not be framed in terms of moral uncertainty, which is just one possible way to frame it. An alternative is to think in terms of degrees of acceptance or levels of agreement with these respective views, cf. Knutsson, 2023, sec. 6.6.)

To illustrate with some specific numbers, say that we give 95 percent credence to consequentialism without scope limitations or adjustments of any kind, and 5 percent credence to some form of scope-adjusted consequentialism. The latter view may be construed such that its scope roughly includes those consequences we can realistically estimate and influence without being clueless. This view is similar to what has been called “reasonable consequentialism”, the view that “an action is morally right if and only if it has the best reasonably expected consequences.” It is also similar to versions of consequentialism that are framed in terms of foreseeable or reasonably foreseeable consequences (Sinnott-Armstrong, 2003, sec. 4).

To be clear, the approach I am exploring here is not committed to any particular scope-adjusted view. The deeper point is simply that we can give non-zero weight to one or more scope-adjusted versions of consequentialism, or to scope-adjusted consequentialist components of a broader moral view. Exploring which scope-adjusted view or views might be most plausible is beyond the aims of this essay, and that question arguably warrants deeper exploration.

That being said, I will mostly focus on views centered on (something like) consequences we can realistically assess and be guided by, since something in this ballpark seems like a relatively plausible candidate for scope-adjustment. I acknowledge that there are significant challenges in clarifying the exact nature of this scope, which is likely to remain an open problem subject to continual refinement. After all, the scope of assessable consequences may grow as our knowledge and predictive power grow.

Asymmetry in practical recommendations

The relevance of the approach outlined above becomes apparent when we evaluate the practical recommendations of the clueless versus non-clueless views incorporated in this approach. A completely clueless consequentialist view would give us no recommendations about how to act, whereas a non-clueless scope-adjusted view would give us practical recommendations. (It would do so by construction if its scope includes those consequences we can realistically estimate and influence without being clueless.)

In other words, the resulting matrix of recommendations from those respective views is that the non-clueless view gives us substantive guidance, while the clueless view suggests no alternative and hence has nothing to add to those recommendations. Thus, if we hold something like the 95/5 combined consequentialist view described above — or indeed any non-zero split between these component views — it seems that we have reason to follow the non-clueless view, all things considered.
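To make the asymmetry concrete, here is a minimal sketch in Python of how such a weighted combination could be operationalized. This is my own toy illustration, not a procedure from the argument itself; the action names and values are hypothetical placeholders.

```python
from typing import Optional

def clueless_view(action: str) -> Optional[float]:
    # A completely clueless view: silent on every action.
    return None

def scope_adjusted_view(action: str) -> Optional[float]:
    # Hypothetical toy values over realistically assessable consequences.
    return {"donate_effective": 1.0, "donate_random": 0.2}[action]

def best_action(weighted_views, actions):
    # Sum the weighted verdicts, skipping views that are silent (None).
    def weighted_value(action: str) -> float:
        return sum(weight * verdict
                   for weight, view in weighted_views
                   if (verdict := view(action)) is not None)
    return max(actions, key=weighted_value)

weighted_views = [(0.95, clueless_view), (0.05, scope_adjusted_view)]
print(best_action(weighted_views, ["donate_effective", "donate_random"]))
# -> donate_effective: the silent view adds nothing either way, so any
#    non-zero weight on the non-clueless view settles the comparison.
```

As the sketch illustrates, the exact 95/5 split is irrelevant to the outcome: any non-zero weight on the non-clueless view yields the same recommendation.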

Toy models

To give a sense of what a scope-adjusted view might look like, we can consider a toy model with an exponential discount factor and an (otherwise) expected linear increase in population size.

In this model, 99 percent of the total expected value we can influence is found within roughly the next 700 years, meaning that almost all the value we can meaningfully influence lies within that horizon.

We can also consider a model with a different discount factor and with cubic growth, reflecting the possibility of space expansion radiating from Earth.

On this model, virtually all the expected value we can meaningfully influence is found within the next 10,000 years. In both of the models above, we end up with a sort of de facto “medium-termism”.
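For concreteness, here is a minimal sketch of both toy models in Python (using numpy). The discount rates of 1 percent and 0.1 percent per year are assumptions chosen purely for illustration, since no parameters are specified above; with these assumed values, the computed horizons come out near the stated 700 and 10,000 years.

```python
import numpy as np

def horizon_for_share(growth_exponent: int, discount_rate: float,
                      share: float = 0.99, max_years: int = 200_000) -> int:
    """Years within which `share` of the total discounted expected value
    lies, given population growth t**growth_exponent and an exponential
    discount factor exp(-discount_rate * t)."""
    years = np.arange(1, max_years + 1)
    weights = years**growth_exponent * np.exp(-discount_rate * years)
    cumulative = np.cumsum(weights) / weights.sum()
    return int(years[np.searchsorted(cumulative, share)])

# Linear population growth, assumed 1% annual discount rate:
print(horizon_for_share(1, 0.01))   # ~660 years (cf. the ~700-year horizon)

# Cubic growth (space expansion), assumed 0.1% annual discount rate:
print(horizon_for_share(3, 0.001))  # ~10,000 years
```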

Of course, one can vary the parameters in numerous ways and combine multiple models in ways that reflect more sophisticated views of, for example, expected future populations and discount factors. Views that involve temporal discounting allow for much greater variation than what is captured by the toy models above, including views that focus on much shorter or much longer timescales. Moreover, views that involve discounting need not be limited to temporal discounting in particular, or even be phrased in terms of temporal discounting at all. It is one way to incorporate discounting or scope-adjustments, but by no means the only one. 

Furthermore, if we give some plausibility to views that involve discounting of some kind, we need not be committed to a single view for every single domain. We may hold that the best view, or the view we give the greatest weight, will vary depending on the issue at hand (cf. Dancy, 2001; Knutsson, 2023, sec. 3). A reason for such variability may be that the scope of outcomes we can meaningfully predict often differs significantly across domains. For example, there is a stark difference in the predictability of weather systems versus planetary orbits, and similar differences in predictability might be found across various practical and policy-relevant domains.

Note also that a non-clueless scope-adjusted view need not be rigorously formalized; it could, for example, be phrased in terms of our all-things-considered assessments, which might be informed by myriad formal models, intuitions, considerations, and so on.

Justifications and motivations

What might justify or motivate the basic approach outlined above? This question can be broken into two sub-questions. First, why give weight to more than just a single moral view? Second, provided we give some weight to more than a single view, why give any weight to a scope-adjusted view concerned with consequences?

Why give weight to multiple views?

Reasons for giving weight to more than a single moral view or theory have been explored elsewhere (see e.g. Dancy, 2001; MacAskill et al., 2020, ch. 1; Knutsson, 2023; Vinding, 2023).

One of the reasons that have been given is that no single moral theory seems able to give satisfying answers to all moral questions (Dancy, 2001; Knutsson, 2023). And even if our preferred moral theory appears to be a plausible candidate for answering all moral questions, it is arguably still appropriate to have less than perfect confidence or acceptance in that theory (MacAskill et al., 2020, ch. 1; Vinding, 2023). Such moderation might be grounded in epistemic modesty and humility, a general skepticism toward fanaticism, and the prudence of diversifying one’s bets. It might also be grounded partly in the observation that other thoughtful people hold different moral views and that there is something to be said in favor of those views.

Likewise, giving exclusive weight to a single moral view might make us practically indifferent or paralyzed, whether it be due to cluelessness or due to underspecification as to what our preferred moral theory implies in some real-world situation. Critically, such practical indifference and paralysis may arise even in the face of the most extreme atrocities. If we find this to be an unreasonable practical implication, we arguably have reason not to give exclusive weight to a moral view that potentially implies such paralysis.

Finally, from a perspective that involves degrees of acceptance or agreement with moral views, a reason for giving weight to multiple views might simply be that those moral views each seem intuitively plausible or that we intuitively agree with them to some extent (cf. Knutsson, 2023, sec. 6.6).

Why give weight to a scope-adjusted view?

What reasons could be given for assigning weight to a scope-adjusted view in particular? One reason may be that it seems reasonable to be concerned with consequences to the extent that we can realistically estimate and be guided by them. That is arguably a sensible and intuitive scope for concern about consequences — or at least it appears sensible to some non-zero degree. If we hold this intuition, even if just to a small degree, it seems reasonable to have a final view in which we give some weight to a view focused on realistically assessable consequences (whatever the scope of those consequences ultimately turns out to be).

Some support may also be found in our moral assessments and stances toward local cases of suffering. For example, if we were confronted with an emergency situation in which some individuals were experiencing intense suffering in our immediate vicinity, and if we were readily able to alleviate this suffering, it would seem morally right to help these beings even if we cannot foresee the long-run consequences. (All theoretical and abstract talk aside, I suspect the vast majority of consequentialists would agree with that position in practice.)

Presumably, at least part of what would make such an intervention morally right is the badness of the suffering that we prevent by intervening. And if we hold that it is morally appropriate to intervene to reduce suffering in cases where we can immediately predict the consequences of doing so — namely that we alleviate the suffering right in front of us — it seems plausible to hold that this stance also generalizes to consequences that are less immediate. In other words, if this stance is sound in cases of immediate suffering prevention — or even if it just has some degree of soundness in such cases — it plausibly also has some degree of soundness when it comes to suffering prevention within a broader range of consequences that we can meaningfully estimate and influence.

This is also in line with the view that we have (at least somewhat) greater moral responsibility toward that which occurs within our local sphere of assessable influence. This view is related to, and may be justified in terms of, the “ought implies can” principle. After all, if we are bound to be clueless and unable to deliberately influence very long-run consequences, then, if we accept some version of the “ought implies can” principle, it seems that we cannot have any moral responsibility or moral duties to deliberately shape those long-run consequences — or at least such moral responsibility is plausibly diminished. In contrast, the “ought implies can” principle is perfectly consistent with moral responsibility within the scope of consequences that we realistically can estimate and deliberately influence in a meaningful way.

Thus, if we give some weight to an “ought implies can” conception of moral responsibility, this would seem to support the idea that we have (at least somewhat) greater moral responsibility toward that which occurs within our sphere of assessable influence. An alternative way to phrase it might be to say that our sphere of assessable influence is a special part of the universe for us, in that we are uniquely positioned to predict and steer events in that part compared to elsewhere, and this arguably gives us a (somewhat) special moral responsibility toward that part of the universe.

Another potential reason to give some weight to views centered on realistically assessable consequences, or more generally to views that entail discounting in some form, is that other sensible people endorse such views based on reasons that seem defensible to some degree. For example, it is common for economists to endorse models that involve temporal discounting, not just in descriptive models but also in prescriptive or normative models (see e.g. Arrow et al., 1996). The justifications for such discounting might be that our level of moral concern should be adjusted for uncertainty about whether there will be any future, uncertainty about our ability to deliberately influence the future, and the possibility that the future will be better able to take care of itself and its problems (relative to earlier problems that we could prioritize instead).

One might object that such reasons for discounting should be incorporated at a purely empirical level, without any discounting at the moral level, and I would largely agree with that sentiment. (Note that when applied at a strictly empirical or practical level, those reasons and adjustments are contenders as to how one might avoid paralysis without any discounting at the moral level.)

Yet even if we think such considerations should mostly or almost exclusively be applied at the empirical level, it might still be defensible to also invoke them to justify some measure of discounting directly at the level of one’s moral view and moral concerns, or at least as a tiny sub-component within one’s broader moral view. In other words, it might be defensible to allow empirical considerations of the kind listed above to inform and influence our fundamental moral values, at least to a small degree.

To be clear, it is not just some selection of economists who endorse normative discounting or scope-adjustment in some form. As noted above, it is also found among those who endorse “reasonable consequentialism” and consequentialism framed in terms of foreseeable consequences. And similar views can be found among people who seek to reduce suffering.

For example, Brian Tomasik has long endorsed a kind of split between reducing suffering effectively in the near term versus reducing suffering effectively across all time. In particular, regarding altruistic efforts and donations, he writes that “splitting is rational if you have more than one utility function”, and he devotes at least 40 percent of his resources toward short-term efforts to reduce suffering (Tomasik, 2015). Jesse Clifton seems to partially endorse a similar approach focused on reasons that we can realistically weigh up — an approach that in his view “probably implies restricting attention to near-term consequences” (see also Clifton, 2025). The views endorsed by Tomasik and Clifton explicitly give some degree of special weight to near-term or realistically assessable consequences, and these views and the judgments underlying them seem fairly defensible.

Lastly, it is worth emphasizing just how weak of a claim we are considering here. In particular, in the framework outlined above, all that is required for the simple practical asymmetry argument to go through is that we give any non-zero weight to a non-clueless view focused on realistically assessable consequences, or some other non-clueless view centered on consequences.

That is, we are not talking about accepting this as the most plausible view, or even as a moderately plausible view. Its role in the practical framework above is more that of a humble tiebreaker — a view that we can consult as an nth-best option if other views fail to give us guidance and if we give this kind of view just the slightest weight. And the totality of reasons listed here arguably justifies granting it at least a tiny degree of plausibility or acceptance.

Arguments I have not made

One could argue that something akin to the approach outlined here would also be optimal for reducing suffering in expectation across all space and time. In particular, one could argue that such an unrestricted moral aim would in practice imply a focus on realistically assessable consequences. I am open to that argument — after all, it is difficult to see what else the recommended focus could be, to the extent there is one.

For similar reasons, one could argue that a practical focus on realistically assessable consequences represents a uniquely safe and reasonable bet from a consequentialist perspective: it is arguably the most plausible candidate for what a consequentialist view would recommend as a practical focus in any case, whether scope-adjusted or not. Thus, from our position of deep uncertainty — including uncertainty about whether we are bound to be clueless — it arguably makes convergent sense to try to estimate the furthest depths of assessable consequences and to seek to act on those estimates, at least to the extent that we are concerned with consequences.

Yet it is worth being clear that the argument I have made here does not rely on any of these claims or arguments. Indeed, it does not rely on any claims about what is optimal for reducing suffering across all space and time.

As suggested above, the conditional claim I have argued for here is ultimately a very weak one about giving minimal weight to what seems like a fairly moderate and in some ways commonsensical moral view or idea (e.g. it seems fairly commonsensical to be concerned with consequences to the extent that we can realistically estimate and be guided by them). The core argument presented in this essay does not require us to accept any controversial empirical positions.

Conclusion

For some of our problems, perhaps the best we can do is to find “second best solutions” — that is, solutions that do not satisfy all our preferred criteria, yet which are nevertheless better than any other realistic solution. This may also be true when it comes to reducing suffering in a potentially infinite universe. We might be in an unpredictable sea of infinite consequences that ripple outward forever (Schwitzgebel, 2024). But even if we are, this need not prevent us from trying to reduce suffering in effective and sensible ways within a realistic scope. After all, compared to simply giving up on trying to reduce suffering, it seems less arbitrary and more plausible to at least try to reduce suffering within the domain of consequences we can realistically assess and be guided by.

Acknowledgments

Thanks to Tobias Baumann, Jesse Clifton, and Simon Knutsson for helpful comments.

A convergence of moral motivations

My aim in this post is to outline a variety of motivations that all point me in broadly the same direction: toward helping others in general and prioritizing the reduction of suffering in particular.


Contents

  1. Why list these motivations?
  2. Clarification
  3. Compassion
  4. Consistency
  5. Common sense: A trivial sacrifice compared to what others might gain
  6. The horror of extreme suffering: The “game over” motivation
  7. Personal identity: I am them
  8. Fairness
  9. Status and recognition
  10. Final reflections

Why list these motivations?

There are a few reasons why I consider it worthwhile to list this variety of moral motivations. For one, I happen to find it interesting to notice that my motivations for helping others are so diverse in their nature. (That might sound like a brag, but note that I am not saying that my motivations are necessarily all that flattering or unselfish.) This diversity in motivations is not obvious a priori, and it also seems different from how moral motivations are often described. For example, reasons to help others are frequently described in terms of a singular motivation, such as compassion.

Beyond mere interest, there may also be some psychological and altruistic benefits to identifying these motivations. For instance, if we realize that our commitment to helping others rests on a wide variety of motivations, this might in turn give us a greater sense that it is a robust commitment that we can be confident in, as opposed to being some brittle commitment that rests on just a single wobbly motivation.

Relatedly, if we have a sense of confidence in our altruistic commitment, and if we are aware that it rests on a broad set of motivations, this might also help strengthen and maintain this commitment. For example, one can speculate that it may be possible to tap into extra reserves of altruistic motivation by skillfully shifting between different sources of such motivation.

Another potential benefit of becoming more aware of, and drawing on, a greater variety of altruistic motivations is that they may each trigger different cognitive styles with their own unique benefits. For example, the patterns of thought and attention that are induced by compassion are likely different from those that are induced by a sense of rigorous impartiality, and these respective patterns might well complement each other.

Lastly, being aware of our altruistic motivations could help give us greater insight into our biases. For example, if we are strongly motivated by empathic concern, we might be biased toward mostly helping cute-looking beings who appeal to our empathy circuits, like kittens and squirrels, and toward downplaying the interests of beings who may look less cute, such as lizards and cockroaches. And note that such a bias can persist even if we are also motivated by impartiality at some level. Indeed, it is a recipe for bias to think that a mere cerebral endorsement of impartiality means that we will thereby adhere to impartiality at every level of our cognition. A better awareness of our moral motivations may help us avoid such naive mistakes.

Clarification

I should clarify that this post is not meant to capture everyone’s moral motivations, nor is my aim to convince people to embrace all the motivations I outline below. Rather, my intention is first and foremost to present the moral motivations that I myself am compelled by, and which all to some extent drive me to try to reduce suffering. That being said, I do suspect that many of these motivations will tend to resonate with others as well.

Compassion

Compassion has been defined as “sympathetic consciousness of others’ distress together with a desire to alleviate it”. This is similar to having empathic concern for others (compassion is often regarded as a component of empathic concern).

In contrast to some of the other motivations listed below, compassion is less cerebral and more directly felt as a motivation for helping others. For example, when we experience sympathy for someone’s misery, we hardly need to go through a sequence of inferences in order to be motivated to alleviate that misery. The motivation to help is almost baked into the sympathy itself. Indeed, studies suggest that empathic concern is a significant driver of costly altruism.

In my own case, I think compassion tends to play an important role, though I would not claim that it is sufficient or even necessary for motivating the general approach that I would endorse when it comes to helping others. One reason it is not sufficient is that it needs to be coupled with a more systematic component, which I would broadly refer to as ‘consistency’.

Consistency

As a motivation for helping others, consistency is rather different from compassion. For example, unlike compassion, consistency is cerebral in nature, to the degree that it almost has a logical or deductive character. That is, unlike compassion, consistency per se does not highlight others’ suffering or welfare from the outset. Instead, efforts to help others are more a consequence of applying consistency to our knowledge about our own direct experience: I know that intense suffering feels bad and is worth avoiding for me (all else equal), and hence, by consistency, I conclude that intense suffering feels bad and is worth avoiding for everyone (all else equal).

One might object that it is not inconsistent to view one’s own suffering as being different from the suffering of others, such as by arguing that there are relevant differences between one’s own suffering and the suffering of others. I think there are several points to discuss back and forth on this issue. However, I will not engage in such arguments here, since my aim in this section is not to defend consistency as a moral motivation, but simply to present a rough outline as to how consistency can motivate efforts to help others.

As noted above, a consistency-based motivation for helping others does not strictly require compassion. However, in psychological terms, since none of us are natural consistency-maximizers, it seems likely that compassion will usually be helpful for getting altruistic motivations off the ground in practice. Conversely, as hinted in the previous section, compassion alone is not sufficient for motivating the most effective actions for helping others. After all, one can have a strong desire to reduce suffering without having the consistency-based motivation to treat equal suffering equally and to spend one’s limited resources accordingly.

In short, the respective motivations of compassion and consistency seem to each have unique benefits that make them worth combining, and I would say that they are both core pillars in my own motivations for helping others.

Common sense: A trivial sacrifice compared to what others might gain

Another motivation that appeals to me might be described as a commonsense motivation. That is, there is a vast number of sentient beings in the world, of which I am just one, and hence the beneficial impact that I can have on other sentient beings is vastly greater than the beneficial impact I can have on my own life. After all, once my own basic needs are met, there is probably little I can do to improve my wellbeing much further. Indeed, I will likely find it more meaningful and fulfilling to try to help others than to try to improve my own happiness (cf. the paradox of hedonism and the psychological benefits of having a prosocial purpose).

Of course, it is difficult to quantify just how much greater our impact on others might be compared to our impact on ourselves. Yet given the enormous number of sentient beings who exist around us, and given that our impact potentially reaches far into the future, it is not unreasonable to think that it could be greater by at least a factor of a million (e.g. we may prevent at least a million times as many instances of similarly bad suffering in expectation for others as for ourselves).

In light of this massive difference in potential impact, it feels like a no-brainer to dedicate a significant amount of resources toward helping others, especially when my own basic needs are already met. Not doing so would amount to giving several orders of magnitude greater importance to my own wellbeing than to the wellbeing of others, and I see no justification for that. Indeed, one need not endorse anything close to perfect consistency and impartiality to believe that such a massively skewed valuation is implausible. It is arguably just common sense.

The horror of extreme suffering: The “game over” motivation

A particularly strong motivation for me is the sheer horror of extreme suffering. I refer to this as the “game over” motivation because that is my reaction when I witness cases of extreme suffering: a clear sense that nothing is more important than the prevention of such extreme horrors. Game over.

One might argue that this motivation is not distinct from compassion and empathic concern in the broadest sense. And I would agree that it is a species of that broad category of motivations. But I also think there is something distinctive about this “game over” motivation compared to generic empathic concern. For example, the “game over” motivation seems meaningfully different from the motivation to help someone who is struggling in more ordinary ways. In fact, I think there is a sense in which our common circuitry of sympathetic relating practically breaks down when it comes to extreme suffering. The suffering becomes so extreme and unthinkable that our “sympathometer” crashes, and we in effect check out. This is another reason it seems accurate to describe it as a “game over” motivation.

Whereas the motivations listed above all serve to motivate efforts to help others in general, the motivation described in this section is more of a driver of what, specifically, I consider the highest priority when it comes to helping others, namely to alleviate and prevent extreme suffering.

Personal identity: I am them

Another motivation derives from what may be called a universal view of personal identity, also known as open individualism. This view entails that all sentient beings are essentially different versions of you, and that there is no deep sense in which the future consciousness-moments of your future self (in the usual narrow sense) are more ‘you’ than the future consciousness-moments of other beings.

Again, I will not try to defend this view here, as opposed to just describing how it can motivate efforts to help others (for a defense, see e.g. Kolak, 2004; Leighton, 2011, ch. 7; Vinding, 2017).

I happen to accept this view of personal identity, and in my opinion it ultimately leaves no alternative but to work for the benefit of all sentient beings. In light of open individualism, it makes no more sense to endorse narrow egoism than to, say, only care about one’s own suffering on Tuesdays. Both equally amount to an arbitrary disregard of my own suffering from an open individualist perspective.

This is one of the ways in which my motivations for helping others are not necessarily all that flattering: on a psychological level, I often feel that I am selfishly trying to prevent future versions of myself from being tortured, virtually none of whom will share my name.

I would say that the “I am them” motivation is generally a strong driver for me, not in a way that changes any of the basic upshots derived from the other motivations, but in a way that reinforces them.

Fairness

Considerations and intuitions related to fairness are also motivating to me. For example, I am lucky to have been born in a relatively wealthy country, and not least to have been born as a human rather than as a tightly confined chicken in a factory farm or a preyed-upon mouse in the wild. There is no sense in which I personally deserve this luck over those who are born in conditions of extreme misery and destitution. Consequently, it is only fair that I “pay back” my relative luck by working to help those beings who were or will be much less lucky in terms of their birth conditions and the like.

I should note that this is not among my stronger or more salient motivations, but I still think it has significant appeal and that it plays some role for me.

Status and recognition

Lastly, I want to highlight the motivation that any cynic would rightly emphasize, namely to gain status and recognition. Helping others can be a way to gain recognition and esteem among our peers, and I am obviously also motivated by that.

There is quite a taboo around acknowledging this motive, but I think that is a mistake. It is simply a fact about the human mind that we want recognition, and this is not necessarily a problem in and of itself. It only becomes a problem if we allow our drive for status to corrupt our efforts to help others, which is no doubt a real risk. Yet we hardly reduce that risk by pretending that we are unaffected by these drives. On the contrary, openly admitting our status motives probably gives us a better chance of mitigating their potentially corrupting influence.

Moreover, while our status drives can impede our altruistic efforts, we should not overlook the possibility that they might sometimes do the opposite, namely improve our efforts to help others.

How could that realistically happen? One way it might happen is by forcing us to seek out the assessments of informed people. That is, if our altruistic efforts are partly driven by a motive to impress relevant experts and evaluators of our work, we might be more motivated to consider and integrate a wider range of informed perspectives (compared to if we were not motivated to impress such evaluators).

Of course, this only works if we are indeed motivated to impress an informed audience, as opposed to just any audience that may be eager to throw recognition our way. Seeking the right audience to impress — those who are impressed by genuinely helpful contributions — might thus be key to making our status drives work in favor of our altruistic efforts rather than against them (cf. Hanson, 2010; 2018).

Another reason to believe that status drives can be helpful is that they have proven to be psychologically potent for human beings. Hence, if we could hypothetically rob a human brain of its status drives, we might well reduce its altruistic drives overall, even if other sources of altruistic motivation were kept intact. It might be tantamount to removing a critical part of an engine, or at least a part that adds a significant boost.

In terms of my own motivations, I would say that drives for status probably often do help motivate my altruistic efforts, whether I endorse my status drives or not. Yet it is difficult to estimate the strength and influence of these drives. After all, the status motive is regarded as unflattering, and hence there are reasons to think that my mind systematically downplays its influence. Moreover, like all of the motivations listed here, the status motive likely varies in strength depending on contextual factors, such as whether I am around other people or not; I suspect that it becomes weaker when I am more isolated, which in effect suggests a way to reduce my status drives when needed.

I should also note that I aspire to view my status drives with extreme suspicion. Despite my claims about how status drives could potentially be helpful, I think the default — if we do not make an intense effort to hone and properly direct our status drives — is that they distort our efforts to help others. And I think the endeavor of questioning our status drives tends to be extremely difficult, not least since status-seeking behavior can take myriad forms that do not look or feel anything like status-seeking behavior. It might just look like “conforming to the obviously reasonable views of my peers”, or like “pursuing this obscure and interesting idea that somehow feels very important”.

So a key question I try to ask myself is: am I really trying to help sentient beings, or am I mostly trying to raise my personal status? And I strive to look at my professed answers with skepticism. Fortunately, I feel that the “I am them” motivation can be a powerful tool in this regard. It essentially forces the selfish parts of my mind to ask: do I really want to gain status more than I want to prevent my future self from being tortured? If not, then I have strong reasons to try to reduce any torture-increasing inefficiencies that might be introduced by my status motives, and to try, if possible, to harness my status motives in the direction of reducing my future torment.

Final reflections

The motivations described above make up quite a complicated mix, from other-oriented compassion and fairness to what feels more like a self-oriented motivation aimed at sparing myself (in an expansive sense) from extreme suffering. I find it striking just how diverse these motivations are, and how they nonetheless — from so seemingly different starting points — can end up converging toward roughly the same goal: to reduce suffering for all sentient beings.

For me, this convergence makes the motivation to help others feel akin to a rope woven from many complementary strands: even if one of the strands is occasionally weakened, the others can usually still hold the rope together.

But again, it is worth stressing that the drive for status is somewhat of an exception, in that it takes serious effort to make this drive converge toward aims that truly help other sentient beings. More generally, I think it is important to never be complacent about the potential for our status drives to corrupt our motivations to help others, even if we feel like we are driven by a strong and diverse set of altruistic motivations. Status drives are like the One Ring: powerful yet easily corrupting, and they are probably best viewed as such.

Distrusting salience: Keeping unseen urgencies in mind

The psychological appeal of salient events and risks can be a major hurdle to optimal altruistic priorities and impact. My aim in this post is to outline a few reasons to approach our intuitive fascination with salient events and risks with a fair bit of skepticism, and to actively focus on that which is important yet unseen, hiding in the shadows of the salient.


Contents

  1. General reasons for caution: Availability bias and related biases
  2. The news: A common driver of salience-related distortions
  3. The narrow urgency delusion
  4. Massive problems that always face us: Ongoing moral disasters and future risks
  5. Salience-driven distortions in efforts to reduce s-risks
  6. Reducing salience-driven distortions

General reasons for caution: Availability bias and related biases

The human mind is subject to various biases that involve an overemphasis on the salient, i.e. that which readily stands out and captures our attention.

In general terms, there is the availability bias, also known as the availability heuristic, namely the common tendency to base our beliefs and judgments on information that we can readily recall. For example, we tend to overestimate the frequency of events when examples of these events easily come to mind.

Closely related is what is known as the salience bias, which is the tendency to overestimate the importance of salient features and events when making decisions. For instance, when deciding to buy a given product, the salience bias may lead us to give undue importance to a particularly salient feature of that product — e.g. some fancy packaging — while neglecting less salient yet perhaps more relevant features.

A similar bias is the recency bias: our tendency to give disproportionate weight to recent events in our belief-formation and decision-making. This bias is in some sense predicted by the availability bias, since recent events tend to be more readily available to our memory. Indeed, the availability bias and the recency bias are sometimes considered equivalent, even though it seems more accurate to view the recency bias as a consequence or a subset of the availability bias; after all, readily remembered information does not always pertain to recent events.

Finally, there is the phenomenon of belief digitization, which is the tendency to give undue weight to (what we consider) the single most plausible hypothesis in our inferences and decisions, even when other hypotheses also deserve significant weight. For example, if we are considering hypotheses A, B, and C, and we assign them the probabilities 50 percent, 30 percent, and 20 percent, respectively, belief digitization will push us toward simply accepting A as though it were true. In other words, belief digitization pushes us toward altogether discarding B and C, even though B and C collectively have the same probability as A. (See also related studies on Salience Theory and on the overestimation of salient causes and hypotheses in predictive reasoning.)

All of the biases mentioned above can be considered different instances of a broader cluster of availability/salience biases, and they each give us reason to be cautious of the influence that salient information has on our beliefs and our priorities.

The news: A common driver of salience-related distortions

One way in which our attention can become preoccupied with salient (though not necessarily crucial) information is through the news. Much has been written against spending a lot of time on the news, and the reasons against it are probably even stronger for those who are trying to spend their time and resources in ways that help sentient beings most effectively.

For even if we grant that there is substantial value in following the news, it seems plausible that the opportunity costs are generally too high, in terms of what one could instead spend one’s limited time learning about or advocating for. Moreover, there is a real risk that a preoccupation with the news has outright harmful effects overall, such as by gradually pulling one’s focus away from the most important problems and toward less important and less neglected problems. After all, the prevailing news criteria or news values decidedly do not reflect the problems that are most important from an impartial perspective concerned with the suffering of all sentient beings.

I believe the same problem exists in academia: a certain topic becomes fashionable, there are calls for abstracts, and there is a strong pull to write and talk about that topic. And while it may indeed be important to talk and write about fashionable topics for the purpose of getting ahead — or not falling behind — in academia, it seems more doubtful whether such topical talk is at all well-adapted for the purpose of making a difference in the world. In other words, the “news values” of academia are not necessarily much better than the news values of mainstream journalism.

The narrow urgency delusion

A salience-related pitfall that we can easily succumb to when following the news is what we may call the “narrow urgency delusion”. This is when the news covers some specific tragedy and we come to feel, at a visceral level, that this tragedy is the most urgent problem that is currently taking place. Such a perception is, in a very important sense, an illusion.

The reality is that tragedy on an unfathomable scale is always occurring, and the tragedies conveyed by the news are sadly but a tiny fraction of the horrors that are constantly taking place around us. Yet the tragedies that are always occurring, such as children who suffer and die from undernutrition and chickens who are boiled alive, are so common and so underreported that they all too readily fade from our moral perception. To our intuitions, these horrors seemingly register as mere baseline horror — as unsalient abstractions that carry little felt urgency — even though the horrors in question are every bit as urgent as the narrow sliver of salient horrors conveyed in the news (Vinding, 2020, sec. 7.6).

We should thus be clear that the delusion involved in the narrow urgency delusion is not the “urgency” part — there is indeed unspeakable horror and urgency involved in the tragedies reported by the news. The delusion rather lies in the “narrow” part; we find ourselves in a condition that contains extensive horror and torment, all of which merits compassion and concern.

So it is not that the salient victims are less important than what we intuitively feel, but rather that the countless victims whom we effectively overlook are far more important than what we (do not) feel.

Massive problems that always face us: Ongoing moral disasters and future risks

The following are some of the urgent problems that always face us, yet which are often less salient to us than the individual tragedies that are reported in the news:

These common and ever-present problems are, by definition, not news, which hints at the inherent ineffectiveness of news when it comes to giving us a clear picture of the reality we inhabit and the problems that confront us.

As the final entry on the list above suggests, the problems that face us are not limited to ongoing moral disasters. We also face risks of future atrocities, potentially involving horrors on an unprecedented scale. Such risks will plausibly tend to feel even less salient and less urgent than do the ongoing moral disasters we are facing, even though our influence on these future risks — and future suffering in general — could well be more consequential given the vast scope of the long-term future.

So while salience-driven biases may blind us to ongoing large-scale atrocities, they probably blind us even more to future suffering and risks of future atrocities.

Salience-driven distortions in efforts to reduce s-risks

There are many salience-related hurdles that may prevent us from giving significant priority to the reduction of future suffering. Yet even if we do grant a strong priority to the reduction of future suffering, including s-risks in particular, there are reasons to think that salience-driven distortions still pose a serious challenge in our prioritization efforts.

Our general availability bias gives us some reason to believe that we will overemphasize salient ideas and hypotheses in efforts to reduce future suffering. Yet perhaps more compelling are the studies on how we tend to greatly overestimate salient hypotheses when we engage in predictive and multi-stage reasoning in particular. (Multi-stage reasoning is when we make inferences in successive steps, such that the output of one step provides the input for the next one.)

After all, when we are trying to predict the main sources of future suffering, including specific scenarios in which s-risks materialize, we are very much engaging in predictive and multi-stage reasoning. Therefore, we should arguably expect our reasoning about future causes of suffering to be too narrow by default, with a tendency to give too much weight to a relatively small set of salient risks at the expense of a broader class of less salient (yet still significant) risks that we are prone to dismiss in our multi-stage inferences and predictions.

This effect can be further reinforced through other mechanisms. For example, if we have described and explored — or even just imagined — a certain class of risks in greater detail than other risks, then this alone may lead us to regard those more elaborately described risks as being more likely than less elaborately explored scenarios. Moreover, if we find ourselves in a group of people who focus disproportionately on a certain class of future scenarios, this may further increase the salience and perceived likelihood of these scenarios, compared to alternative scenarios that may be more salient in other groups and communities.

Reducing salience-driven distortions

The pitfalls mentioned above seem to suggest some concrete ways in which we might reduce salience-driven distortions in efforts to reduce future suffering.

First, they recommend caution about the danger of neglecting less salient hypotheses when engaging in predictive and multi-stage reasoning. Specifically, when thinking about future risks, we should be careful not to focus solely on what appears to be the single greatest risk while effectively neglecting all others. After all, even if the risk we regard as the greatest is indeed the single greatest risk, it might still be fairly modest compared to the totality of future risks, and we might still do better by deliberately working to reduce a relatively broad class of risks.

Second, the tendency to judge scenarios to be more likely when we have thought about them in detail would seem to recommend that we avoid exploring future risks in starkly unbalanced ways. For instance, if we have explored one class of risks in elaborate detail while largely neglecting another, it seems worth trying to outline concrete scenarios that exemplify the more neglected class of risks, so as to correct any potentially unjustified disregard of their importance and likelihood.

Third, the possibility that certain ideas can become highly salient in part for sociological reasons may recommend a strategy of exchanging ideas with, and actively seeking critiques from, people who do not fully share the outlook that has come to prevail in one’s own group.

In general, it seems that we are likely to underestimate our empirical uncertainty (Vinding, 2020, sec. 9.1-9.2). The space of possible future outcomes is vast, and any specific risk that we may envision is but a tiny subset of the risks we are facing. Hence, our most salient ideas regarding future risks should ideally be held up against a big question mark that represents the many (currently) unsalient risks that confront us.

Put briefly, we need to cultivate a firm awareness of the limited reliability of salience, and a corresponding awareness of the immense importance of the unsalient. We need to make an active effort to keep unseen urgencies in mind.

Beware underestimating the probability of very bad outcomes: Historical examples against future optimism

It may be tempting to view history through a progressive lens that sees humanity as climbing toward ever greater moral progress and wisdom. As the famous quote popularized by Martin Luther King Jr. goes: “The arc of the moral universe is long, but it bends toward justice.”

Yet while we may hope that this is true, and do our best to increase the probability that it will be, we should also keep in mind that there are reasons to doubt this optimistic narrative. For some, the recent rise of right-wing populism is a salient reason to be less confident about humanity’s supposed path toward ever more compassionate and universal values. But it seems that we find even stronger reasons to be skeptical if we look further back in history. My aim in this post is to present a few historical examples that in my view speak against confident optimism regarding humanity’s future.


Contents

  1. Germany in year 1900
  2. Shantideva around year 700
  3. Lewis Gompertz and J. Howard Moore in the 19th century

Germany in year 1900

In 1900, Germany was far from being a paragon of moral advancement. It was a colonial power, antisemitism was widespread, and bigoted anti-Polish Germanisation policies were in effect. Yet Germany anno 1900 was nevertheless far from being like Germany anno 1939-1945, when it was the main aggressor in the deadliest war in history and the perpetrator of the largest genocide in history.

In other words, Germany had undergone an extreme case of moral regress along various dimensions by 1942 (the year the so-called Final Solution was formulated and approved by the Nazi leadership) compared to 1900. And this development was not easy to predict in advance. Indeed, for historian of antisemitism Shulamit Volkov, a key question regarding the Holocaust is: “Why was it so hard to see the approaching disaster?”

If one had told the average German citizen in 1900 about the atrocities that their country would perpetrate four decades later, would they have believed it? What probability would they have assigned to the possibility that their country would commit atrocities on such a massive scale? I suspect it would have been very low. They might not have seen more reason to expect such moral regress than we do today when we think of our future.

A lesson that we can draw from Germany’s past moral deterioration is, to paraphrase Volkov’s question, that approaching disasters can be hard to see in advance. And this lesson suggests that we should not be too confident as to whether we ourselves might currently be headed toward disasters that are difficult to see in advance.

Shantideva around year 700

Shantideva was a Buddhist monk who lived ca. 685-763. He is best known as the author of A Guide to the Bodhisattva’s Way of Life, which is a remarkable text for its time. The core message is one of profound compassion for all sentient beings, and Shantideva not only describes such universally compassionate ideals, but also presents stirring encouragements and cogent reasoning in favor of acting on those ideals.

That such a universally compassionate text existed at such an early time is a deeply encouraging fact in one sense. Yet in another sense, it is deeply discouraging. That is, when we think about all the suffering, wars, and atrocities that humanity has caused since Shantideva expounded these ideals — centuries upon centuries of brutal violence and torment imposed upon human and non-human beings — it seems that a certain pessimistic viewpoint gains support.

In particular, it seems that we should be pessimistic about notions along the lines of “compassionate ideals presented in a compelling way will eventually create a benevolent world”. After all, even today, 1300 years later, when we generally pride ourselves on being far more civilized and morally developed than our ancestors, we are still painfully far from observing the most basic of compassionate ideals in relation to other sentient beings.

Of course, one might think that the problem is merely that people have yet to be exposed to compassionate ideals such as those of Shantideva — or those of Mahavira or Mozi, both of whom lived more than a thousand years before Shantideva. But even if we grant that this is the main problem, it still seems that historical cases like these give us some reason to doubt whether most people ever will be exposed to such compassionate ideals, or whether most people would accept such ideals upon being exposed to them, let alone be willing to act on them. The fact that these memes have not caught on to a greater degree than they have, despite existing in such developed forms a long time ago, is some evidence that they are not nearly as virulent as many of us would have hoped.

Speaking for myself at least, I can say that I used to think that people just needed to be exposed to certain compassionate ideals and compassion-based arguments, and then they would change their minds and behaviors due to the sheer compelling nature of these ideals and arguments. But my experience over the years, e.g. with animal advocacy, has made me far more pessimistic about the force of such arguments. And the limited influence of sophisticated expositions of these ideals and arguments made many centuries ago is further evidence for that pessimism (relative to my previous expectations).

Of course, this is not to say that we can necessarily do better than to promote compassion-based ideals and arguments. It is merely to say that the best we can do might be a lot less significant — or be less likely to succeed — than what many of us had initially expected.

Lewis Gompertz and J. Howard Moore in the 19th century

Lewis Gompertz (ca. 1784-1861) and J. Howard Moore (1862-1916) both have a lot in common with Shantideva, as they likewise wrote about compassionate ethics relating to all sentient beings. (And all three of them touched on wild-animal suffering.) Yet Gompertz and Moore, along with other figures in the 19th century, wrote more explicitly about animal rights and moral vegetarianism than did Shantideva. Two observations seem noteworthy with regard to these writings.

One is that Gompertz and Moore both wrote about these topics before the rise of factory farming. That is, even though authors such as Gompertz and Moore made strong arguments against exploiting and killing other animals in the 19th century, humanity still went on to exploit and kill beings on a far greater scale than ever before in the 20th century, indeed on a scale that is still increasing today.

This may be a lesson for those who are working to reduce risks of astronomical suffering at present: even if you make convincing arguments against a moral atrocity that humanity is committing or otherwise heading toward, and even if you make these arguments at an early stage where the atrocity has yet to (fully) develop, this might still not be enough to prevent it from happening on a continuously expanding scale.

The second and closely related observation is that Gompertz and Moore both seem to have focused exclusively on animal exploitation as it existed in their own times. They did not appear to focus on preventing the problem from getting worse, even though one could argue, in hindsight, that such a strategy might have been more helpful overall.

Indeed, even though Moore’s outlook was quite pessimistic, he still seems to have been rather optimistic about the future. For instance, in the preface to his book The Universal Kinship (1906), he wrote: “The time will come when the sentiments of these pages will not be hailed by two or three, and ridiculed or ignored by the rest; they will represent Public Opinion and Law.”

Gompertz appeared similarly optimistic about the future, as he wrote in his Moral Inquiries (1824, p. 48): “though I cannot conceive how any person can shut his eyes to the general state of misery throughout the universe, I still think that it is for a wise purpose; that the evils of life, which could not properly be otherwise, will in the course of time be rectified …” Neither Gompertz nor Moore seems to have predicted that animal exploitation would get far worse in many ways (e.g. the horrible conditions of factory farms) or that it would increase vastly in scale.

This second observation might likewise carry lessons for animal activists and suffering reducers today. If these leading figures of 19th-century animal activism tacitly underestimated the risk that things might get far worse in the future, and as a result paid insufficient attention to such risks, could it be the case that most activists today are similarly underestimating and underprioritizing future risks of things getting even worse still? This question is at least worth pondering.

On a general and concluding note, it seems important to be aware of our tendencies to entertain wishful thinking and to be under the spell of the illusion of control. Just because a group of people have embraced some broadly compassionate values, and in turn identified ongoing atrocities and future risks based on those values, it does not mean that those people will be able to steer humanity’s future such that we avoid these atrocities and risks. The sad reality is that universally compassionate values are far from being in charge.

Radical uncertainty about outcomes need not imply (similarly) radical uncertainty about strategies

Our uncertainty about how the future will unfold is vast, especially on long timescales. In light of this uncertainty, it may be natural to think that our uncertainty about strategies must be equally vast and intractable. My aim in this brief post is to argue that this is not the case.


Contents

  1. Analogies to games, competitions, and projects
  2. Disanalogy in scope?
  3. Three robust strategies for reducing suffering
  4. The golden middle way: Avoiding overconfidence and passivity

Analogies to games, competitions, and projects

Perhaps the most intuitive way to see that vast outcome uncertainty need not imply vast strategic uncertainty is to consider games by analogy. Take chess as an example. It allows a staggering number of possible outcomes on the board, and chess players generally have great uncertainty about how a game of chess will unfold, even as they can make some informed predictions (similar to how we can make informed predictions in the real world).

Yet despite the great outcome uncertainty, there are still many strategies and rules of thumb that are robustly beneficial for increasing one’s chances of winning a game of chess. A trivially obvious one is to not lose pieces without good reason, yet seasoned chess players will know a long list of more advanced strategies and heuristics that tend to be beneficial in many different scenarios. (For an example of such a list, see e.g. here.)

Of course, chess is by no means the only example. Across a wide range of board games and video games, the same basic pattern is found: despite vast uncertainty about specific outcomes, there are clear heuristics and strategies that are robustly beneficial.

Indeed, this holds true in virtually any sphere of competition. Politicians cannot predict exactly how an election campaign will unfold, yet they can usually still identify helpful campaign strategies; athletes cannot predict how a given match will develop, yet they can still be reasonably confident about what constitutes good moves and game plans; companies cannot predict market dynamics in detail, yet they can still identify many objectives that would help them beat the competition (e.g. hire the best people and ensure high customer satisfaction).

The point also applies beyond the realm of competition. For instance, when engineers set out to build a big project, there are usually many uncertainties as to how the construction process is going to unfold and what challenges might come up. Yet they are generally still able to identify strategies that can address unforeseen challenges and get the job done. The same goes for just about any project, including cooperative projects between parties with different aims: detailed outcomes are exceedingly difficult to predict, yet it is generally (more) feasible to identify beneficial strategies.

Disanalogy in scope?

One might object that the examples above all involve rather narrow aims, and those aims differ greatly from impartial aims that relate to the interests of all sentient beings. This is a fair point, yet I do not think it undermines these analogies or the core point that they support.

Granted, when we move from narrower to broader aims and endeavors, our uncertainty about the relevant outcomes will tend to increase — e.g. when our aims involve far more beings and far greater spans of time. And when the outcome space and its associated uncertainty increase, we should also expect our strategic uncertainty to become greater. Yet it plausibly still holds true that we can identify at least some reasonably robust strategies, despite the increase in uncertainty that is associated with impartial aims. At a minimum, it seems plausible that our strategic uncertainty is still smaller than our outcome uncertainty. After all, if such a pattern of lower strategic uncertainty holds true of a wide range of endeavors on a smaller scale, it seems reasonable to expect that it will apply on larger scales too.

Besides, it appears that at least some of the examples mentioned in the previous section would still stand even if we greatly increased their scale. For example, in the case of many video games, it seems that we could increase the scale of the game by an arbitrary amount without meaningfully changing the most promising strategies — e.g. accumulate resources, gain more insights, strengthen your position. And similar strategies are plausibly quite robust relative to many goals in the real world as well, on virtually any scale.
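
As a minimal simulation of this pattern (the game, its payoffs, and both policies are invented purely for the example), we can compare a simple “accumulate resources” heuristic against aimless play in a noisy environment, at several scales:

```python
import random

def play(accumulate: bool, horizon: int, rng: random.Random) -> float:
    """Play a toy resource game under random shocks; return final resources."""
    resources = 1.0
    for _ in range(horizon):
        shock = rng.uniform(0.8, 1.2)  # unpredictable environment
        if accumulate:
            resources *= 1.05 * shock  # heuristic: reinvest every round
        else:
            # aimless play: sometimes reinvest, sometimes waste a little
            resources *= shock * (1.0 if rng.random() < 0.5 else 0.95)
    return resources

rng = random.Random(0)
for horizon in (10, 100, 1000):  # scale the game up arbitrarily
    heuristic = sum(play(True, horizon, rng) for _ in range(2000)) / 2000
    aimless = sum(play(False, horizon, rng) for _ in range(2000)) / 2000
    print(f"horizon={horizon}: heuristic beats aimless play: {heuristic > aimless}")
```

No single run of this toy game is predictable, yet the heuristic reliably outperforms at every scale tested, which illustrates in miniature how strategic uncertainty can be much smaller than outcome uncertainty.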

Three robust strategies for reducing suffering

If we grant that we can identify some strategies that are robustly beneficial from an impartial perspective, this naturally raises the question as to what these strategies might be. The following are three examples of strategies for reducing suffering that seem especially robust and promising to me. (This is by no means an exhaustive list.)

  • Movement and capacity building: Expand the movement of people who strive to reduce suffering, and build a healthy and sustainable culture around this movement. Capacity building also includes efforts to increase the insights and resources available to the movement.
  • Promote concern for suffering: Increase the level of priority that people devote to the prevention of suffering, and increase the amount of resources that society devotes to its alleviation.
  • Promote cooperation: Increase society’s ability and willingness to engage in cooperative dialogues and positive-sum compromises that can help steer us away from bad outcomes.

The golden middle way: Avoiding overconfidence and passivity

To be clear, I do not mean to invite complacency about the risk that some apparently promising strategies could prove harmful. But I think it is worth keeping in mind that, just as there are costs associated with overconfidence, there are also costs associated with being too uncertain and too hesitant to act on the strategies that seem most promising. All in all, I think we have good reasons to pursue strategies such as those listed above, while still keeping in mind that we do face great strategic uncertainty.

The dismal dismissal of suffering-focused views

Ethical views that give a foremost priority to the reduction of suffering are often dismissed out of hand. More than that, it is quite common to see such views discussed in highly uncharitable ways, and to even see them described with pejorative terms.

My aim in this post is to call attention to this phenomenon, as I believe it can distort public discourse and individual thinking about the issue. That is, if certain influential people consistently dismiss certain views without proper argumentation, and in some cases even use disparaging terms to describe such views, then this is likely to bias people’s evaluations of these views. After all, most people will likely feel some social pressure not to endorse views that their intellectual peers call “crazy” or “monstrously toxic”. (See also what Simon Knutsson writes about social mechanisms that may suppress talk about, and endorsements of, suffering-focused views.)

Many of the examples I present below are not necessarily that significant on their own, but I think the general pattern that I describe is quite problematic. Some of the examples involve derogatory descriptions, while others involve strawman arguments and uncharitable rejections of suffering-focused views that fail to engage with the most basic arguments in favor of such views.

My overall recommendation is simply to meet suffering-focused views with charitable arguments rather than with strawman argumentation or insults — i.e. to live up to the standards that are commonly accepted in other realms of intellectual discourse.


Contents

  1. “Crazy” and “transparently silly” views
  2. Lazari-Radek and Singer’s cursory rejection
  3. “Arguably too nihilistic and divorced from humane values to be worth taking seriously”
  4. “Anti-natalism is neurotic self-hatred”
  5. More examples
  6. Conclusion

“Crazy” and “transparently silly” views

In his essay “Why I’m Not a Negative Utilitarian” (2013), Toby Ord writes that “you would have to be crazy” to choose a world with beings who experience unproblematic states over a world with beings who experience pure happiness (strict negative utilitarianism would be indifferent between the two, and according to some versions of negative utilitarianism, unproblematic mental states and pure happiness are the same thing, cf. Sherman, 2017; Knutsson, 2022).

Ord also writes that the view that happiness does not contribute to a person’s wellbeing independently of its effects on reducing problematic states is a “crazy view”, without engaging with any of the arguments that have been made in favor of the class of views that he is thereby dismissing — i.e. views according to which wellbeing consists in the absence of problematic states or frustrated desires (see e.g. Schopenhauer, 1819/1851; Fehige, 1998; O’Keefe, 2009, ch. 12).

These may not seem like particularly problematic claims, yet I believe that Ord would consider it poor form if similar claims were made about his preferred view — for example, if someone claimed that “you would have to be crazy to choose to create arbitrarily large amounts of extreme suffering in order to create a ‘sufficient’ amount of pleasure” (cf. the Very Repugnant Conclusion; Creating Hell to Please the Blissful; and Intense Bliss with Hellish Cessation). 

Similarly, Rob Bensinger writes that negative utilitarianism is “transparently false/silly”. Bensinger provides a brief justification for his claim, one that I and others find unconvincing, and it is in any case not a justification that warrants calling negative utilitarianism “transparently false/silly”.

Lazari-Radek and Singer’s cursory rejection

In their book The Point of View of the Universe, Lazari-Radek and Singer seek to defend the classical utilitarian view of Henry Sidgwick. It would be natural, in this context, to provide an elaborate discussion of the moral symmetry between happiness and suffering that is entailed by classical utilitarianism — after all, such a moral symmetry has been rejected by various philosophers in a variety of ways, and it is arguably one of the most controversial features of classical utilitarianism (cf. Mayerfeld, 1996, p. 335).

Yet Lazari-Radek and Singer barely broach the issue at all. The only thing that comes close is a single page worth of commentary on the views of David Benatar, which unfortunately amounts to a misrepresentation of Benatar’s views. Lazari-Radek and Singer claim that Benatar argues that “to have a desire for something is to be in a negative state” (p. 362). To my knowledge, this is not a claim that Benatar defends, and the claim is at any rate not critical to the main procreative asymmetry that he argues for (Benatar, 2006, ch. 2).

Lazari-Radek and Singer briefly rebut the claim about desires that they (I suspect wrongly) attribute to Benatar, and in doing so they fail to address Benatar’s core views in any meaningful way. They then proceed to write the following, which as far as I can tell is the closest they get to a defense of a moral symmetry between happiness and suffering in their entire book: “for people who are able to satisfy the basic necessities of life and who are not suffering from depression or chronic pain, life can reasonably be judged positively” (pp. 362-363).

This is, of course, not much of a defense of a moral symmetry. First of all, no arguments are provided in defense of the claim that such lives “can reasonably be judged positively” (a claim that one can reasonably dispute). Second, even if we grant that certain lives “can be judged positively” (in terms of the intrinsic value of their contents), it still does not follow that such lives that are “judged positively” can also morally outweigh the most horrific lives. This is an all-important issue for the classical utilitarian to address, and yet Lazari-Radek and Singer proceed as though their claim that “life can reasonably be judged positively” also applies to the world as a whole, even when we factor in all of its most horrific lives. Put briefly, Lazari-Radek and Singer’s cursory rejection of asymmetric and suffering-focused views is highly unsatisfactory.

(In a vein similar to the dismissive remarks covered in the previous section, Lazari-Radek and Singer also later write that “any sane person will agree” that a scenario in which 100 percent of humanity dies is worse than a scenario in which 99 percent of humanity dies, cf. p. 375. Regardless of the plausibility of that claim — which one might agree with even from a purely suffering-focused perspective — it is bad form to imply that people are not sane if they disagree with it, not least since the latter scenario could well involve far more suffering overall. Likewise, in a response to a question on Reddit, Singer dismisses negative utilitarianism as “hopeless” without providing any reasons as to why.)

“Arguably too nihilistic and divorced from humane values to be worth taking seriously”

The website utilitarianism.net is co-authored by William MacAskill, Richard Yetter Chappell, and Darius Meissner. The aim of the website is to provide “a textbook introduction to utilitarianism at the undergraduate level”, and it is endorsed by Peter Singer (among others), who blurbs it as “the place to go for clear, full and fair accounts of what utilitarianism is, the arguments for it, the main objections to it, special issues like population ethics, and what living as a utilitarian involves.”

Yet the discussion found on the website is sorely lacking when it comes to fundamental questions and objections concerning the relative importance of suffering versus happiness. In particular, like Lazari-Radek and Singer’s Point of View of the Universe, the website contains no discussion of the moral symmetry between suffering and happiness that is entailed by classical utilitarianism, despite it being among the most disputed features of that view (see e.g. Popper, 1945; Mayerfeld, 1996, 1999; Wolf, 1996, 1997, 2004; O’Keefe, 2009; Knutsson, 2016; Mathison, 2018; Vinding, 2020).

Similarly, the website’s treatment of population ethics is extremely one-sided and uncharitable toward suffering-focused and asymmetric views, especially for a text that is supposed to serve as an introductory textbook.

For instance, they write the following in a critique of the Asymmetry in population ethics (the Asymmetry is roughly the idea that it is bad to bring miserable lives into the world but not good to bring happy lives into the world):

But this brings us to a deeper problem with the procreative asymmetry, which is that it has trouble accounting for the idea that we should be positively glad that the world (with all its worthwhile lives) exists

There is much to take issue with in this sentence. First, it presents the idea that “we should be positively glad that the world exists” as though it is an obvious and supremely plausible idea; yet it is by no means obvious, and it has been questioned by many philosophers. A truly “full and fair” introductory textbook would have included references to such counter-perspectives. Indeed, the authors of utilitarianism.net call it a “perverse conclusion” that an empty world would be better than a populated one, without mentioning any of the sources that have defended that “perverse conclusion”, and without engaging with the arguments that have been made in its favor (e.g. Schopenhauer, 1819/1851; Benatar, 1997, 2006; Fehige, 1998; Breyer, 2015; Gloor, 2017; St. Jules, 2019; Frick, 2020; Ajantaival, 2021/2022). Again, this falls short of what one would expect from a “full and fair” introductory textbook.

Second, the quote above may be critiqued for bringing in confounding intuitions, such as intuitions about the value of the world as a whole, which is in many ways a different issue from the question of whether it can be good to add new beings to the world for the sake of these beings themselves.

Third, the notion of “worthwhile lives” is not necessarily inconsistent with a procreative asymmetry, since lives may be deemed worthwhile in the sense that their continuation is preferable even if their creation is not (cf. Benatar, 1997, 2006; Fehige, 1998; St. Jules, 2019; Frick, 2020). Additionally, one can think that a life is worthwhile — both in terms of its continuation and creation — because it has beneficial effects for others, even if it can never be better for the created individual themself that they come into existence.

The authors go on to write:

when thinking about what makes some possible universe good, the most obvious answer is that it contains a predominance of awesome, flourishing lives. How could that not be better than a barren rock? Any view that denies this verdict is arguably too nihilistic and divorced from humane values to be worth taking seriously.

This quote effectively dismisses all of the views cited above — the views of Schopenhauer, Fehige, Benatar, and Frick, as well as the Nirodha View in the Pali Buddhist tradition — in one fell swoop by claiming that they are “arguably too nihilistic and divorced from humane values to be worth taking seriously”. That is, to put it briefly, a lazy treatment that again falls short of the minimal standards of a fair introductory textbook.

After all, classical utilitarians would probably also object if a textbook introduction were to effectively dismiss classical utilitarianism (and similar views) with the one-line claim that “views that allow the creation of lives full of extreme suffering in order to create pleasure for others are arguably too divorced from humane values to be worth taking seriously.” Yet the dismissal is just as unhelpful and uncharitable when made in the other direction. 

Finally, the authors also omit any mention of the Very Repugnant Conclusion, although one of the co-authors, William MacAskill, has stated that he considers it the strongest objection against his favored version of utilitarianism. It is arguably bad form to omit any discussion — or even a mention — of what one considers the strongest objection against one’s favored view, especially if one is trying to write a fair and balanced introductory textbook that features that view prominently.

“Anti-natalism is neurotic self-hatred”

Psychologist Geoffrey Miller has given several talks about effective altruism, including one at EA Global, and he has also taught a full university course on the psychology of effective altruism. At the time of writing, Miller has more than 120,000 followers on Twitter, which makes him one of the most widely followed people associated with effective altruism, with more followers than Peter Singer.

Having such a large audience arguably raises one’s responsibility to communicate in an intellectually honest and charitable manner. Yet Miller has repeatedly misrepresented the views of David Benatar and written highly uncharitable statements about antinatalism and negative utilitarianism, without seriously engaging with the arguments made in favor of these views.

For example, Miller has written on Twitter that “anti-natalism is neurotic self-hatred”, and he has on several occasions falsely implied that David Benatar is a negative utilitarian, such as when he writes that “[Benatar’s] negative utilitarianism assumes that only suffering counts, & pleasure can never offset it”; or when he writes that “Benatar’s view boils down to the claim that all the joy, beauty, & love in the world can’t offset even a drop of suffering in any organism anywhere. It’s a monstrously toxic & nihilistic philosophy.”

Yet the views that Miller attributes to Benatar are not views that Benatar in fact defends, and anyone familiar with Benatar’s position knows that he does not think that “only suffering counts” (cf. his rejection of the Epicurean view of death, Benatar, 2006, ch. 7).

Miller also betrays a failure to understand Benatar’s view when he writes:

The asymmetry thesis is empirically false for humans. Almost all people report net positive subjective well-being in hundreds of studies around the world. Benatar is basically patronizing everyone, saying ‘All you guys are wrong; you’re actually miserable’.

First, Benatar discusses various reasons as to why self-assessments of one’s quality of life may be unreliable (Benatar, 2006, pp. 64-69; see also Vinding, 2018). This is not fundamentally different from, say, evolutionary psychologists who argue that people’s self-reported motives may be wrong. Second, and more importantly, the main asymmetry that Benatar defends is not an empirical one, but rather an evaluative asymmetry between the presence and absence of goods versus the presence and absence of bads (Benatar, 2006, ch. 2). This evaluative asymmetry is not addressed by Miller’s claim above.

One might object that Miller’s statements have all been made on Twitter, and that tweets should generally be held to a lower standard than other forms of writing. Yet even if we grant that tweets should be held to a lower standard, we should still be clear that Miller blatantly misrepresents Benatar’s views, which is bad form on any platform and by any standard.

Moreover, one could argue that tweets should in some sense be held to a higher standard, since tweets are likely to be seen by more people compared to many other forms of writing (such as the average journal article), and perhaps also by readers who are less inclined to verify scholarly claims made by a university professor (compared to readers of other media).

More examples

Additional examples of uncharitable dismissals of suffering-focused views include statements from:

  • Writer and EA Global speaker Riva-Melissa Tez, who wrote that “anti-natalism and negative utilitarianism is true ‘hate speech’”.
  • YouTuber Robert Miles (>100k subscribers), who wrote: “Looks like it’s time for another round of ‘Principled Negative Utilitarianism or Undiagnosed Major Depressive Disorder?’” (See also here.)
  • Daniel Faggella, who wrote: “If I didn’t know so many negative utilitarians who I liked as people, I’d call it a position of literal cowardice – even vice.” (The original post was even stronger in its tone: “If I didn’t know and respect so many negative utilitarians, I would openly call it a vice, and a position of childish, seething cowardice.”)
    • I find the remark about cowardice to be quite strange, as it seems to me that it takes a lot of courage to face up to the horror of suffering, and to set out to alleviate suffering with determination. And socially, too, it can take a lot of courage to embrace strongly suffering-focused views in a social environment that often ridicules such views, and which often insinuates that there is something wrong with the adherents of these views.
  • R. N. Smart, who wrote that negative utilitarianism allows “certain absurd and even wicked moral judgments”, without providing any arguments as to whether competing moral views imply less “absurd or wicked” moral judgments, and without mentioning that classical utilitarianism — which Smart seems to express greater approval toward — has similar and arguably worse theoretical implications (cf. Knutsson, 2021; Ajantaival, 2022).

The following anecdotal example illustrates how uncharitable remarks can influence people’s motivations and make people feel unwelcome in certain communities: An acquaintance of mine who took part in an EA intro fellowship heard a fellow participant dismiss antinatalism quite uncharitably, saying something along the lines of “antinatalism is like high school atheism, but edgier”. My acquaintance thought that antinatalism is a plausible view, and the remark left them feeling unwelcome and discouraged from engaging further with effective altruism.

Conclusion

To be clear, my point is by no means that people should refrain from criticizing suffering-focused views, even in strong terms. My recommendation is simply that critics should strive to be even-handed, and to not misrepresent or unfairly malign views with which they disagree.

If we are trying to think straight about ethics, we should be keen not to let uncharitable claims and social pressures distort our thinking, especially since these factors tend to influence our views in hidden ways. After all, few people consciously think — let alone say — that social pressure exerts a strong influence on their views. Yet it is likely a potent factor all the same.

Priorities for reducing suffering: Reasons not to prioritize the Abolitionist Project

I discussed David Pearce’s Abolitionist Project in Chapter 13 of my book on Suffering-Focused Ethics. The chapter is somewhat brief and dense, and its main points could admittedly have been elaborated further and explained more clearly. This post seeks to explore and further explain some of these points.


A good place to start might be to highlight some of the key points of agreement between David Pearce and myself.

  • First and most important, we both agree that minimizing suffering should be our overriding moral aim.
  • Second, we both agree that we have reason to be skeptical about the possibility of digital sentience — and at the very least to not treat it as a foregone conclusion — which I note from the outset to flag that views on digital sentience are unlikely to account for the key differences in our respective views on how to best reduce suffering.
  • Third, we agree that humanity should ideally use biotechnology to abolish suffering throughout the living world, provided this is indeed the best way to minimize suffering.

The following is a summary of some of the main points I made about the Abolitionist Project in my book. There are four main points I would emphasize, none of which are particularly original (at least two of them are made in Brian Tomasik’s Why I Don’t Focus on the Hedonistic Imperative).

I.

Some studies suggest that people who have suffered tend to become more empathetic. This obviously does not imply that the Abolitionist Project is infeasible, but it does give us reason to doubt that abolishing the capacity to suffer in humans should be among our main priorities at this point.

To clarify, this is not a point about what we should do in the ideal, but more a point about where we should currently invest our limited resources, on the margin, to best reduce suffering. If we were to focus on interventions at the level of gene editing, other traits (than our capacity to suffer) seem more promising to focus on, such as increasing dispositions toward compassion. And yet interventions focused on gene editing may themselves not be among the most promising things to focus on in the first place, which leads to the next point.

II.

For even if we grant that the Abolitionist Project should be our chief aim, at least in the medium term, it still seems that the main bottleneck to its completion is found not at the technical level, but rather at the level of humanity’s values and willingness to do what would be required. I believe this is also a point that David and I mostly agree on, as he has likewise hinted, in various places, that the main obstacle to the Abolitionist Project will not be technical, but sociopolitical. This would give us reason to mostly prioritize the sociopolitical level on the margin — especially humanity’s values and willingness to reduce suffering. And the following consideration provides an additional reason in favor of the same conclusion.

III.

The third and most important point relates to the distribution of future (expected) suffering, and how we can best prevent worst-case outcomes. Perhaps the most intuitive way to explain this point is with an analogy to tax revenues: if one were trying to maximize tax revenues, one should focus disproportionately on collecting taxes from the richest people rather than the poorest, simply because that is where most of the money is.

The visual representation of the income distribution in the US in 2019 found below should help make this claim more intuitive.

[Figure: income distribution in the US, 2019]
The point is that something similar plausibly applies to future suffering: in terms of the distribution of future (expected) suffering, it seems reasonable to give disproportionate focus to the prevention of worst-case outcomes, as they contain more suffering (in expectation).
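
A small simulation can convey the structure of this claim. If we assume, purely for illustration, that suffering across possible futures follows a heavy-tailed lognormal distribution (the shape and the sigma parameter below are assumptions, not empirical estimates), then most of the expected suffering sits in a small fraction of worst-case outcomes:

```python
import random

rng = random.Random(0)

# Assumed heavy-tailed distribution of suffering across 100,000
# sampled futures. The lognormal form and sigma=3 are illustrative
# assumptions only.
outcomes = sorted(rng.lognormvariate(0, 3) for _ in range(100_000))

total = sum(outcomes)
worst_one_percent = sum(outcomes[-1000:])  # the 1,000 worst futures

print(f"Share of expected suffering in the worst 1% of futures: "
      f"{worst_one_percent / total:.0%}")
```

Under these made-up parameters, the worst one percent of futures contains a large majority of the total expected suffering, which is the same structural point as the income analogy above.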

Futures in which the Abolitionist Project is completed, and in which our advocacy for the Abolitionist Project helps bring on its completion, say, a century sooner, are almost by definition not the kinds of future scenarios that contain the most suffering. That is, they are not worst-case futures in which things go very wrong and suffering gets multiplied in an out-of-control fashion.

Put more generally, it seems to me that advocating for the Abolitionist Project is not the best way to address worst-case outcomes, even if we assume that such advocacy has a positive effect in this regard. A more promising focus, it seems to me, is again to increase humanity’s overall willingness and capacity to reduce suffering (the strategy that also seems most promising for advancing the Abolitionist Project itself). And this capacity should ideally be oriented toward the avoidance of very bad outcomes — outcomes that to me seem most likely to stem from bad sociopolitical dynamics.

IV.

Relatedly, a final critical point is that there may be some downsides to framing our goal in terms of abolishing suffering, rather than in terms of minimizing suffering in expectation. One reason is that the former framing may invoke our proportion bias, or what is known in the literature as proportion dominance: our tendency to intuitively care more about helping 10 out of 10 individuals rather than helping 10 out of 100, even though the impact is in fact the same.

Minimizing suffering in expectation would entail abolishing suffering if that were indeed the way to minimize suffering in expectation, but the point is that it might not be. For instance, it could be that the way to reduce the most suffering in expectation is to instead mostly focus on reducing the probability and mitigating the expected badness of worst-case outcomes. And framing our aim in terms of abolishing suffering, rather than the more general and neutral terms of minimizing suffering in expectation, can hide this possibility somewhat. (I say a bit more about this in Section 13.3 in my book; see also this section.)
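
A toy expected-value comparison may make this concrete; every magnitude and probability below is an assumption invented for the example, not an estimate:

```python
# Illustrative, assumed magnitudes of suffering (arbitrary units).
typical_suffering = 100            # suffering in a typical future
worst_case_suffering = 1_000_000   # suffering in a worst-case future
p_worst_case = 0.01                # assumed probability of the worst case

# Intervention A: abolish all suffering in the typical futures.
gain_abolition = (1 - p_worst_case) * typical_suffering  # = 99.0

# Intervention B: reduce the probability of the worst case by a tenth.
gain_mitigation = 0.1 * p_worst_case * worst_case_suffering  # = 1000.0

print(f"Abolition in typical futures: {gain_abolition} units averted")
print(f"Worst-case mitigation:        {gain_mitigation} units averted")
```

On these assumed numbers, a modest reduction in worst-case risk averts far more suffering in expectation than complete abolition in typical futures, which is precisely the possibility that an abolitionist framing can obscure.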

Moreover, talking about the complete abolition of suffering can leave the broader aim of reducing suffering particularly vulnerable to objections — e.g. the objection that completely abolishing suffering seems risky in a number of ways. In contrast, the aim of reducing intense suffering is much less likely to invite such objections, and is more obviously urgent and worthy of priority. This is another strategic reason to doubt that the abolitionist framing is optimal.

Lastly, it would be quite a coincidence if the actions that maximize the probability of the complete abolition of suffering were also exactly those actions that minimize extreme suffering in expectation; even as these goals are related, they are by no means the same. And hence to the extent that our main goal is to minimize extreme suffering, we should probably frame our objective in these terms rather than in abolitionist terms.

Reasons in favor of prioritizing the Abolitionist Project

To be clear, there are also things to be said in favor of an abolitionist framing. For instance, many people will probably find a focus on the mere alleviation and reduction of suffering to be too negative and insufficiently motivating, leading them to disengage and drop out. Such people may find it much more motivating if the aim of reducing suffering is coupled with an inspiring vision about the complete abolition of suffering and increasingly better states of superhappiness.

As a case in point, I think my own focus on suffering was in large part inspired by the Abolitionist Project and The Hedonistic Imperative, which gradually, albeit very slowly, eased my optimistic mind into prioritizing suffering. Without this light and inspiring transitional bridge, I may have remained as opposed to suffering-focused views as I was eight years ago, before I encountered David’s work.

Brian Tomasik writes something similar about the influence of these ideas: “David Pearce’s The Hedonistic Imperative was very influential on my life. That book was one of the key factors that led to my focus on suffering as the most important altruistic priority.”

Likewise, informing people about technologies that can effectively reduce or even abolish certain forms of suffering, such as novel gene therapies, may give people hope that we can do something to reduce suffering, and thus help motivate action to this end.

But I think the two reasons cited above count more as reasons to include an abolitionist perspective in our “communication portfolio”, as opposed to making it our main focus — not least in light of the four considerations mentioned above that count against the abolitionist framing and focus.

A critical question

The following question may capture the main difference between David’s view and my own.

In previous conversations, David and I have clarified that we both accept that the avoidance of worst-case outcomes is, plausibly, the main priority for reducing suffering in expectation.

This premise, together with our shared moral outlook, seems to recommend a strong focus on minimizing the risk of worst-case outcomes. The critical question is thus: What reasons do we have to think that prioritizing and promoting the Abolitionist Project is the single best way, or even among the best ways, to address worst-case outcomes?

As noted above, I think there are good reasons to doubt that advocating the Abolitionist Project is among the most promising strategies to this end (say, among the top 10 causes to pursue), even if we grant that it has positive effects overall, including on worst-case outcomes in particular.

Possible responses

Analogy to smallpox

A way to respond may be to invoke the example of smallpox: Eradicating smallpox was plausibly the best way to minimize the risk of “astronomical smallpox”, as opposed to focusing on other, indirect measures. So why should the same not be true in the case of suffering?

I think this is an interesting line of argument, but I think the case of smallpox is disanalogous in at least a couple of ways. First, smallpox is in a sense a much simpler and more circumscribed phenomenon than is suffering. In part for this reason, the eradication of smallpox was much easier than the abolition of suffering would be. As an infectious disease, smallpox, unlike suffering, has not evolved to serve any functional role in animals. It could thus not only be eradicated more easily, but also without unintended effects on, say, the function of the human mind.

Second, if we were primarily concerned about not spreading smallpox to space, and minimizing “smallpox-risks” in general, I think it is indeed plausible that the short-term eradication of smallpox would not be the ideal thing to prioritize with marginal resources. (Again, it is important here to distinguish what humanity at large should ideally do versus what, say, the 1,000 most dedicated suffering reducers should do with most of their resources, on the margin, in our imperfect world.)

One reason such a short-term focus may be suboptimal is that the short-term eradication of smallpox is already — or would already be, if it still existed — prioritized by mainstream organizations and governments around the world, and hence additional marginal resources would likely have a rather limited counterfactual impact to this end. Work to minimize the risk of spreading life forms vulnerable to smallpox is far more neglected, and hence does seem a fairly reasonable priority from a “smallpox-risk minimizing” perspective.

Sources of unwillingness

Another response may be to argue that humanity’s unwillingness to reduce suffering derives mostly from the sense that the problem of suffering is intractable, and hence the best way to increase our willingness to alleviate and prevent suffering is to set out technical blueprints for its prevention. In David’s words, we can have a serious ethical debate about the future of sentience only once we appreciate what is — and what isn’t — technically feasible.

I think there is something to be said in favor of this argument, as noted above in the section on reasons to favor the Abolitionist Project. Yet unfortunately, my sense is that humanity’s unwillingness to reduce suffering does not primarily stem from a sense that the problem is too vast and intractable. Sadly, it seems to me that most people give relatively little thought to the urgency of (others’) suffering, especially when it comes to the suffering of non-human beings. As David notes, factory farming can be said to be “the greatest source of severe and readily avoidable suffering in the world today”. Ending this enormous source of suffering is clearly tractable at a collective level. Yet most people still actively contribute to it rather than work against it, despite its solution being technically straightforward.

What is the best way to motivate humanity to prevent suffering?

This is an empirical question. But I would be surprised if setting out abolitionist blueprints turned out to be the single best strategy. Other candidates that seem more promising to me include informing people about horrific examples of suffering, as well as presenting reasoned arguments in favor of prioritizing the prevention of suffering.

To clarify, I am not arguing for any efforts to conserve suffering. The issue here is rather about what we should prioritize with our limited resources. The following analogy may help clarify my view: When animal advocates argue in favor of prioritizing the suffering of farm animals or wild animals rather than, say, the suffering of companion animals, they are not thereby urging us to conserve let alone increase the suffering of companion animals. The argument is rather that our limited resources seem to reduce more suffering if we spend them on these other things, even as we grant that it is a very good thing to reduce the suffering of companion animals.

In terms of how we rank the cost-effectiveness of different causes and interventions (cf. this distribution), I would still consider abolitionist advocacy to be quite beneficial all things considered, and probably significantly better than the vast majority of activities that we could pursue. But I would not quite rank it at the tail-end of the cost-effectiveness distribution, for some of the reasons outlined above.

Antinatalism and reducing suffering: A case of suspicious convergence

First published: Feb. 2021. Last update: Dec. 2022


Two positions are worth distinguishing. One is the view that we should reduce (extreme) suffering as much as we can for all sentient beings. The other is the view that we should advocate for humans not to have children.

It may seem intuitive to think that the former position implies the latter. That is, to think that the best way to reduce suffering for all sentient beings is to advocate for humans not to have children. My aim in this brief essay is to outline some of the reasons to be skeptical of this claim.

Suspicious convergence

Lewis (2016) warns of “suspicious convergence”, which he introduces with the following toy example:

Oliver: … Thus we see that donating to the opera is the best way of promoting the arts.

Eleanor: Okay, but I’m principally interested in improving human welfare.

Oliver: Oh! Well I think it is also the case that donating to the opera is best for improving human welfare too.

The general point is that, for any set of distinct altruistic aims or endeavors we may consider, we should be a priori suspicious of the claim that they are perfectly convergent — i.e. that directly pursuing one of them also happens to be the very best thing we can do for achieving the other. Justifying such a belief would require good, object-level reasons. And in the case of the respective endeavors of reducing suffering and advocating for humans not to procreate, we in a sense find the opposite, as there are good reasons to be skeptical of a strong degree of convergence, and even to think that such antinatalist advocacy might increase future suffering.

The marginal impact of antinatalist advocacy

A key point when evaluating the impact of altruistic efforts is that we need to think at the margin: how does our particular contribution change the outcome, in expectation? This is true whether our aims are modest or maximally ambitious — our actions and resources still represent but a very small fraction of the total sum of actions and resources, and we can still only exert relatively small pushes toward our goals.

Direct effects

What, then, is the marginal impact of advocating for people not to have children? One way to try to answer this question is to explore the expected effects of preventing a single human birth. Antinatalist analyses of this question are quick to point out the many harms caused by a single human birth, which must indeed be considered. Yet what these analyses tend not to consider are the harms that a human birth would prevent.

For example, in his book Better Never to Have Been, David Benatar writes about “the suffering inflicted on those animals whose habitat is destroyed by encroaching humans” (p. 224) — which, again, should definitely be included in our analysis. Yet he fails to consider the many births and all the suffering that would be prevented by an additional human birth, such as due to its marginal effects on habitat reduction (“fewer people means more animals“). As Brian Tomasik argues, when we consider a wider range of the effects humans have on animal suffering, “it seems plausible that encouraging people to have fewer children actually causes an increase in suffering and involuntary births.” 

This highlights how a one-sided analysis such as Benatar’s is deeply problematic when evaluating potential interventions. We cannot simply look at the harms prevented by our pet interventions without considering how they might lead to more harm. Both things must be considered.

To be clear, the considerations above regarding the marginal effects of human births on animal suffering by no means represent a complete analysis of the effects of additional human births, or of advocating for humans not to have children. But they do represent reasons to doubt that such advocacy is among the very best things we can do to reduce suffering for all sentient beings, at least in terms of the direct effects, which leads us to the next point.

Long-term effects

Some seem to hold that the main reason to advocate against human procreation is not the direct effects, but rather its long-term effects on humanity’s future. I agree that the influence our ideas and advocacy efforts have on humanity’s long-term future is plausibly the most important thing about them, and I think many antinatalists are likely to have a positive influence in this regard by highlighting the moral significance of suffering (and the relative insignificance of pleasure).

But the question is why we should think that the best way to steer humanity’s long-term future toward less suffering is to argue for people not to have children. After all, the space of possible interventions we could pursue to reduce future suffering is vast, and it would be quite a remarkable coincidence if relatively simple interventions — such as advocating for antinatalism or veganism — happened to be the very best way to reduce suffering, or even among the very best ways.

In particular, the greatest risk from a long-term perspective is that things somehow go awfully wrong, and that we counterfactually greatly increase future suffering, either by creating additional sources of suffering in the future, or by simply failing to reduce existing forms of suffering when we could. And advocating for people not to have children seems unlikely to be among the best ways to reduce the risk of such failures — again since the space of possible interventions is vast, and interventions that are targeted more directly at reducing these risks, including the risk of leaving wild-animal suffering unaddressed, are probably significantly more effective than is advocating for humans not to procreate.

Better alternatives?

If our aim is to reduce suffering for all sentient beings, a plausible course of action would be to pursue an open-ended research project on how we can best achieve this aim. This is, after all, not a trivial question, and we should hardly expect the most plausible answers to be intuitive, let alone obvious. Exploring this question requires epistemic humility, and it forces us to contend with the vast empirical uncertainty we face.

I have explored this question at length in Vinding, 2020, as have other individuals and organizations elsewhere. One conclusion that seems quite robust is that we should focus mostly on avoiding bad outcomes, whereas improving future scenarios that are already comparatively free of suffering merits less priority. Another robust conclusion is that we should pursue a pragmatic and cooperative approach when trying to reduce suffering (see also Vinding, 2020, ch. 10) — not least since future conflicts are one of the main ways in which worst-case outcomes might materialize, and we should therefore generally strive to reduce the risk of such conflicts.

In more concrete terms, antinatalists may be more effective if they focus on defending antinatalism for wild animals in particular. This case seems both easier and more important to make, given the overwhelming amount of suffering and early death in nature. Such advocacy may have more beneficial effects in both the near term and the long term: it is less at risk of increasing non-human suffering in the near term, and it is plausibly more conducive to reducing worst-case risks, whether these involve spreading non-human life or simply failing to reduce wild-animal suffering.

Broadly speaking, the aim of reducing suffering would seem to recommend efforts to identify the main ways in which humanity might cause — or prevent — vast amounts of suffering in the future, and to find out how we can best navigate accordingly. None of these conclusions singles out efforts to convince people not to have children as a particularly promising strategy, though they likely do recommend efforts to promote concern for suffering more generally.

Suffering-focused ethics and the importance of happiness

It seems intuitive to think that suffering-focused moral views imply that it is unimportant whether people live fulfilling lives. Yet the truth, I will argue, is in many ways the opposite — especially for those who are trying to reduce suffering effectively with their limited resources.

Personal sustainability and productivity

One reason in favor of living fulfilling lives is that we otherwise cannot work to reduce suffering in a sustainable way. Indeed, a reasonably satisfied mind is a precondition not only for sustainable productivity in the long run, but also for our productivity on a day-to-day basis, which is often aided by strong passion and excitement about our work projects. Suffering-focused ethics by no means entails that such excitement and passion should be muted.

Beyond aiding our productivity in work-related contexts, a strong sense of well-being also helps us be more resilient in the face of life’s challenges — things that break, unexpected expenses, unfriendly antagonists, etc. Cultivating a sense of fulfillment and sound mental health can help us better handle these obstacles as well.

Signaling value

This reason pertains to the social rather than the individual level. If we are trying to create change in the world, it generally does not help if we ourselves are miserable. People often decide whether they want to associate with (or distance themselves from) a group of people based on perceptions of the overall wellness and mental health of its adherents. And this is not entirely unreasonable, as these factors arguably do constitute some indication of the practical consequences of associating with the group in question.

If failing to prioritize our own well-being has bad consequences in the bigger picture, such as scaring people away from joining our efforts to create a better future, then consequentialist suffering-focused views will recommend against such a failure.

To be clear, my point here is not that suffering-focused agents should be deceptive and try to display a fake and inflated sense of well-being (such deception would likely have many bad consequences). Rather, the point is that we have good reasons to cultivate genuine physical and mental health, both for the sake of our personal productivity and our ability to inspire others.

A needless hurdle to the adoption of suffering-focused views

A closely related point has to do with people’s evaluations of suffering-focused views more directly (as opposed to the evaluations of suffering-focused communities and individuals). People are likely to judge the acceptability of a moral view based in part on the expected psychological consequences of its adoption — will it enable me to pursue the lifestyle I want, to maintain my social relationships, and to seem like a good and likeable person?

Indeed, modern moral and political psychology suggests that these social and psychological factors are strong determinants of our moral and political views, and that we usually underestimate just how much these “non-rationalist” factors influence our views (see e.g. Haidt, 2012, part III; Tuschman, 2013, ch. 22; Simler, 2016; Tooby, 2017).

This is then another good reason to both emphasize and exemplify the compatibility of suffering-focused views with a healthy and fulfilling life. Again, if failing in this regard tends to prevent people from prioritizing the reduction of suffering, then a consistent application of suffering-focused views will militate against such a failure, and will instead recommend cultivating an invitingly healthful state of mind.

In sum, there is no inherent tension between living a healthy and fulfilling life and at the same time being committed to reducing the most intense forms of suffering.
