A virtue-based approach to reducing suffering given long-term cluelessness

This post is a follow-up to my previous essay on reducing suffering given long-term cluelessness. Long-term cluelessness is the idea that we have no clue which actions are likely to create better or worse consequences across the long-term future. In my previous post, I argued that even if we grant long-term cluelessness (a premise I remain skeptical of), we can still steer by purely consequentialist views that do not entail cluelessness and that can ground a focus on effective suffering reduction.

In this post, I will outline an alternative approach centered on virtues. I argue that even if we reject or find no guidance in any consequentialist view, we can still plausibly adopt a virtue-based approach to reducing suffering, including effective suffering reduction. Such an approach can help guide us independently of consequentialist uncertainty.


Contents

  1. What would a virtue-based approach entail?
  2. Justifications for a virtue-based approach
  3. A virtue-based approach to effective suffering reduction
  4. Conclusion

What would a virtue-based approach entail?

It can be difficult to say exactly what a virtue-based approach to reducing suffering would entail. Indeed, an absence of clear and simple rules, together with an emphasis on responding wisely to ambiguous conditions through good practical judgment, is a typical feature of virtue-based approaches in ethics.

That said, in the broadest terms, a virtue-based approach to suffering involves having morally appropriate attitudes, sentiments, thoughts, and behaviors toward suffering. It involves relating to suffering in the way that a morally virtuous person would relate to it.

Perhaps more straightforwardly, we can say what a virtue-based approach would definitely not involve. For example, it would obviously not involve extreme vices like sadism or cruelty, nor would it involve more common yet still serious vices like being indifferent or passive in the face of suffering.

However, a virtue-based approach would not merely involve the morally unambitious aim of avoiding serious vices. It would usually be much more ambitious than that, encouraging us to aim for moral excellence across all aspects of our character — having deep sympathy and compassion, striving to be proactively helpful, having high integrity, and so on.

In this way, a virtue-based approach may invert an intuitive assumption about the implications of cluelessness. That is, rather than seeing cluelessness as a devastating consideration that potentially opens the floodgates to immoral or insensitive behavior, we can instead see it as paving the way for a focus on moral excellence. After all, if no consequentialist reasons count against a strong focus on moral excellence under assumed cluelessness, then arguably the strongest objections against such a focus fall away. As a result, we might no longer have any plausible reason not to pursue moral excellence in our character and conduct. At a minimum, we would no longer have any convenient consequentialist-framed rationalizations for our vices.

Sure, we could retreat to simply being insensitive and disengaged in the face of suffering — or even retreat to much worse vices — but I will argue that those options are less plausible.

Justifications for a virtue-based approach

There are various possible justifications for the approach outlined above. For example, one justification might be that having excellent moral character simply reflects the kind of person we ideally want to be. For some of us, such a personal desire might in itself be a sufficient reason for adopting a virtue-based approach in some form.

Complementary justifications may derive from our moral intuitions. For instance, all else equal, we might find it intuitive that it is morally preferable to embody excellent moral character than to embody serious vices, or that it is more ethical to display basic moral virtues than to lack such virtues (see also Knutsson, 2023, sec. 7.4). (Note that this differs from the justification above in that we need not personally want to be virtuous in order to have the intuition that it is more ethical to be that way.)

We may also find some justification in contractualist considerations or considerations about what kind of society we would like to live in. For example, we may ideally want to live in a society in which people adhere to virtues of compassion and care for suffering, as well as virtues of effectiveness in reducing suffering (more on this in the next section). Under contractualist-style moral frameworks, favoring such a society would in turn give us moral reason to adhere to those virtues ourselves.

A virtue-based approach might likewise find support if we consider specific cases. For example, imagine that you are a powerful war general whose soldiers are committing heinous atrocities that you have the power to stop — with senseless torture occurring on a large scale that you can halt immediately. And imagine that, given your subjective beliefs, your otherwise favored moral views all fail to give any guidance in this situation (e.g. due to uncertainty about long-term consequences). In contrast, ending the torture would obviously be endorsed by any commonsense virtue-based stance, since that is simply what a virtuous, compassionate person would do regardless of long-term uncertainty. If we agree that ending the torture is the morally right response in a case like this, then this arguably lends some support to such a virtue-based stance (as well as to other moral stances that imply the same response).

In general terms, we may endorse a virtue-based approach partly because it provides an additional moral safety net that we can fall back on when other approaches fail. That is, even if we find it most plausible to rely on other views when these provide practical recommendations, we might still find it reasonable to rely on virtue-based approaches in case those other views fall silent. Having virtue ethics as such a supportive layer can help strengthen our foundation and robustness as moral agents.

(One could also attempt to justify a virtue-based approach by appealing to consequentialist reasoning. Indeed, it could be that promoting a non-consequentialist virtue-based stance would ultimately create better consequences than not doing so. For example, the absence of such a virtue-based stance might increase the risk of extremely harmful behavior among moral agents. However, such arguments would involve premises that are not the focus of this post.)

A virtue-based approach to effective suffering reduction

One might wonder whether a virtue-based approach can ground effective suffering reduction of any kind. That is, can a virtue-based approach ground systematic efforts to reduce suffering effectively with our limited resources? In short, yes. If one deems it virtuous to try to reduce suffering in systematic and effective ways (at least in certain decisions or domains), then a virtue-based approach could provide a moral foundation for such efforts.

For instance, if given a choice between saving 10 versus 1,000 chickens from being boiled alive, we may consider it more virtuous — more compassionate and principled — to save the 1,000, even if we had no idea whether that choice ultimately reduces more suffering across all time or across all consequences that we could potentially assess.

To take a more realistic example: in a choice between donating either to a random charity or to a charity with a strong track record of preventing suffering, we might consider it more virtuous to support the latter, even if we do not know the ultimate consequences.

How would such a virtue-based approach be different from a consequentialist approach? Broadly speaking, there can be two kinds of differences. First, a virtue-based approach might differ from a consequentialist one in terms of its practical implications. For instance, in the donation example above, a virtue-based approach might recommend that we donate to the charity with a track record of suffering prevention even if we are unable to say whether it reduces suffering across all time or across all consequences that we could potentially assess.

Second, even if a virtue-based view had all the same practical implications as some consequentialist view, there would still be a difference in the underlying normative grounding or basis of these respective views. The consequentialist view would be grounded purely in the value of consequences, whereas the virtue-based view would not be grounded purely in that (even if the disvalue of suffering may generally be regarded as the most important consideration). Instead, the virtue-based approach would (also) be grounded at least partly in the kind of person it is morally appropriate to be — the kind of person who embodies a principled and judicious compassion, among other virtues (see e.g. the opening summary in Hursthouse & Pettigrove, 2003).

In short, virtue-based views represent a distinctive way in which some version of effective suffering reduction can be grounded.

Conclusion

There are many possible moral foundations for reducing suffering (see e.g. Vinding, 2020, ch. 6; Knutsson & Vinding, 2024, sec. 2). Even if we find one particular foundation to be most plausible by far, we are not forced to rest absolutely everything on such a singular and potentially brittle basis. Instead, we can adopt many complementary foundations and approaches, including an approach centered on excellent moral character that can guide us when other frameworks might fail. I think that is a wiser approach.

Reducing suffering given long-term cluelessness

An objection against trying to reduce suffering is that we cannot predict whether our actions will reduce or increase suffering in the long term. Relatedly, some have argued that we are clueless about the effects that any realistic action would have on total welfare, and this cluelessness, it has been claimed, undermines our reason to help others in effective ways. For example, DiGiovanni (2025) writes: “if my arguments [about cluelessness] hold up, our reason to work on EA causes is undermined.”

There is a grain of truth in these claims: we face enormous uncertainty when trying to reduce suffering on a large scale. Of course, whether we are bound to be completely clueless about the net effects of any action is a much stronger and more controversial claim (and one that I am not convinced of). Yet my goal here is not to discuss the plausibility of this claim. Rather, my goal is to explore the implications if we assume that we are bound to be clueless about whether any given action overall reduces or increases suffering.

In other words, without taking a position on the conditional premise, what would be the practical implications if such cluelessness were unavoidable? Specifically, would this undermine the project of reducing suffering in effective ways? I will argue not. Even if we grant complete cluelessness and thus grant that certain moral views provide no practical recommendations, we can still reasonably give non-zero weight to other moral views that do provide practical recommendations. Indeed, we can find meaningful practical recommendations even if we hold a purely consequentialist view that is exclusively concerned with reducing suffering.


Contents

  1. A potential approach: Giving weight to scope-adjusted views
  2. Asymmetry in practical recommendations
  3. Toy models
  4. Justifications and motivations
    1. Why give weight to multiple views?
    2. Why give weight to a scope-adjusted view?
  5. Arguments I have not made
  6. Conclusion
  7. Acknowledgments

A potential approach: Giving weight to scope-adjusted views

There might be many ways to ground a reasonable focus on effective suffering reduction even if we assume complete cluelessness about long-term consequences. Here, I will merely outline one candidate option, or class of options, that strikes me as fairly reasonable.

As a way to introduce this approach, say that we fully accept consequentialism in some form (notwithstanding various arguments against being a pure consequentialist, e.g. Knutsson, 2023; Vinding, 2023). Yet despite being fully convinced of consequentialism, we are uncertain or divided about which version of consequentialism is most plausible.

In particular, while we give most weight to forms of consequentialism that entail no restrictions or discounts in their scope, we also give some weight to views that entail a more focused scope. (Note that this kind of approach need not be framed in terms of moral uncertainty, which is just one possible way to frame it. An alternative is to think in terms of degrees of acceptance or levels of agreement with these respective views, cf. Knutsson, 2023, sec. 6.6.)

To illustrate with some specific numbers, say that we give 95 percent credence to consequentialism without scope limitations or adjustments of any kind, and 5 percent credence to some form of scope-adjusted consequentialism. The latter view may be construed such that its scope roughly includes those consequences we can realistically estimate and influence without being clueless. This view is similar to what has been called “reasonable consequentialism”, the view that “an action is morally right if and only if it has the best reasonably expected consequences.” It is also similar to versions of consequentialism that are framed in terms of foreseeable or reasonably foreseeable consequences (Sinnott-Armstrong, 2003, sec. 4).

To be clear, the approach I am exploring here is not committed to any particular scope-adjusted view. The deeper point is simply that we can give non-zero weight to one or more scope-adjusted versions of consequentialism, or to scope-adjusted consequentialist components of a broader moral view. Exploring which scope-adjusted view or views might be most plausible is beyond the aims of this essay, and that question arguably warrants deeper exploration.

That being said, I will mostly focus on views centered on (something like) consequences we can realistically assess and be guided by, since something in this ballpark seems like a relatively plausible candidate for scope-adjustment. I acknowledge that there are significant challenges in clarifying the exact nature of this scope, which is likely to remain an open problem subject to continual refinement. After all, the scope of assessable consequences may grow as our knowledge and predictive power grow.

Asymmetry in practical recommendations

The relevance of the approach outlined above becomes apparent when we evaluate the practical recommendations of the clueless versus non-clueless views incorporated in this approach. A completely clueless consequentialist view would give us no recommendations about how to act, whereas a non-clueless scope-adjusted view would give us practical recommendations. (It would do so by construction if its scope includes those consequences we can realistically estimate and influence without being clueless.)

In other words, when we combine the recommendations of these respective views, the non-clueless view gives us substantive guidance, while the clueless view suggests no alternative and hence has nothing to add to those recommendations. Thus, if we hold something like the 95/5 combined consequentialist view described above — or indeed any non-zero split between these component views — it seems that we have reason to follow the non-clueless view, all things considered.
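
To make this asymmetry concrete, the following is a minimal sketch in Python (with illustrative numbers of my own) of how weighted aggregation across moral views plays out when one view abstains from ranking actions altogether:

```python
# Toy aggregation of moral views under uncertainty. Each view either
# scores an action or abstains (returns None), as a completely
# clueless view would.

def best_action(views, actions):
    """Rank actions by weighted score, ignoring views that abstain."""
    def combined_score(action):
        scores = [(w, fn(action)) for w, fn in views]
        return sum(w * s for w, s in scores if s is not None)
    return max(actions, key=combined_score)

# A clueless, unrestricted consequentialism: no verdict on anything.
def clueless(action):
    return None

# A scope-adjusted view: scores actions by assessable suffering reduced.
def scope_adjusted(action):
    return 1.0 if action == "help" else 0.0

# A 95/5 split -- but any non-zero weight on the non-clueless view
# yields the same recommendation, since the other view stays silent.
views = [(0.95, clueless), (0.05, scope_adjusted)]
print(best_action(views, ["help", "ignore"]))  # -> help
```

The point is structural rather than numerical: since the clueless component provides no ranking, the 95/5 split could just as well be 99.999/0.001 without changing the recommendation.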

Toy models

To give a sense of what a scope-adjusted view might look like, we can consider a toy model with an exponential discount factor and an (otherwise) expected linear increase in population size.

On this model, 99 percent of the total expected value we can influence lies within the next 700 years, implying that almost all the value we can meaningfully influence is found within that horizon.

We can also consider a model with a different discount factor and with cubic population growth, reflecting the possibility of space expansion radiating from Earth.

On this model, virtually all the expected value we can meaningfully influence is found within the next 10,000 years. In both of the models above, we end up with a sort of de facto “medium-termism”.
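
For concreteness, here is a minimal sketch of this kind of model in Python, with discount rates of my own choosing that roughly match the stated horizons (the original models may well have used different parameters):

```python
import numpy as np

def fraction_within(pop, r, horizon, t_max=100_000.0, dt=0.1):
    """Fraction of total discounted value lying before `horizon` (in years)."""
    t = np.arange(0.0, t_max, dt)
    density = pop(t) * np.exp(-r * t)   # value density at time t
    total = density.sum() * dt          # approximate integral over all time
    near = density[t <= horizon].sum() * dt
    return near / total

# Model 1: linear population growth with a ~1 percent annual discount.
print(fraction_within(lambda t: 1.0 + t, r=0.0095, horizon=700))           # ~0.99

# Model 2: cubic growth (space expansion) with a ~0.1 percent discount.
print(fraction_within(lambda t: (1.0 + t) ** 3, r=0.001, horizon=10_000))  # ~0.99
```

With these illustrative parameters, both models indeed place roughly 99 percent of the influenceable value within the stated horizons.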

Of course, one can vary the parameters in numerous ways and combine multiple models in ways that reflect more sophisticated views of, for example, expected future populations and discount factors. Views that involve temporal discounting allow for much greater variation than what is captured by the toy models above, including views that focus on much shorter or much longer timescales. Moreover, views that involve discounting need not be limited to temporal discounting in particular, or even be phrased in terms of temporal discounting at all. It is one way to incorporate discounting or scope-adjustments, but by no means the only one. 

Furthermore, if we give some plausibility to views that involve discounting of some kind, we need not be committed to a single view for every single domain. We may hold that the best view, or the view we give the greatest weight, will vary depending on the issue at hand (cf. Dancy, 2001; Knutsson, 2023, sec. 3). A reason for such variability may be that the scope of outcomes we can meaningfully predict often differs significantly across domains. For example, there is a stark difference in the predictability of weather systems versus planetary orbits, and similar differences in predictability might be found across various practical and policy-relevant domains.

Note also that a non-clueless scope-adjusted view need not be rigorously formalized; it could, for example, be phrased in terms of our all-things-considered assessments, which might be informed by myriad formal models, intuitions, considerations, and so on.

Justifications and motivations

What might justify or motivate the basic approach outlined above? This question can be broken into two sub-questions. First, why give weight to more than just a single moral view? Second, provided we give some weight to more than a single view, why give any weight to a scope-adjusted view concerned with consequences?

Why give weight to multiple views?

Reasons for giving weight to more than a single moral view or theory have been explored elsewhere (see e.g. Dancy, 2001; MacAskill et al., 2020, ch. 1; Knutsson, 2023; Vinding, 2023).

One of the reasons that have been given is that no single moral theory seems able to give satisfying answers to all moral questions (Dancy, 2001; Knutsson, 2023). And even if our preferred moral theory appears to be a plausible candidate for answering all moral questions, it is arguably still appropriate to have less than perfect confidence or acceptance in that theory (MacAskill et al., 2020, ch. 1; Vinding, 2023). Such moderation might be grounded in epistemic modesty and humility, a general skepticism toward fanaticism, and the prudence of diversifying one’s bets. It might also be grounded partly in the observation that other thoughtful people hold different moral views and that there is something to be said in favor of those views.

Likewise, giving exclusive weight to a single moral view might make us practically indifferent or paralyzed, whether it be due to cluelessness or due to underspecification as to what our preferred moral theory implies in some real-world situation. Critically, such practical indifference and paralysis may arise even in the face of the most extreme atrocities. If we find this to be an unreasonable practical implication, we arguably have reason not to give exclusive weight to a moral view that potentially implies such paralysis.

Finally, from a perspective that involves degrees of acceptance or agreement with moral views, a reason for giving weight to multiple views might simply be that those moral views each seem intuitively plausible or that we intuitively agree with them to some extent (cf. Knutsson, 2023, sec. 6.6).

Why give weight to a scope-adjusted view?

What reasons could be given for assigning weight to a scope-adjusted view in particular? One reason may be that it seems reasonable to be concerned with consequences to the extent that we can realistically estimate and be guided by them. That is arguably a sensible and intuitive scope for concern about consequences — or at least it appears sensible to some non-zero degree. If we hold this intuition, even if just to a small degree, it seems reasonable to have a final view in which we give some weight to a view focused on realistically assessable consequences (whatever the scope of those consequences ultimately turns out to be).

Some support may also be found in our moral assessments and stances toward local cases of suffering. For example, if we were confronted with an emergency situation in which some individuals were experiencing intense suffering in our immediate vicinity, and if we were readily able to alleviate this suffering, it would seem morally right to help these beings even if we cannot foresee the long-run consequences. (All theoretical and abstract talk aside, I suspect the vast majority of consequentialists would agree with that position in practice.)

Presumably, at least part of what would make such an intervention morally right is the badness of the suffering that we prevent by intervening. And if we hold that it is morally appropriate to intervene to reduce suffering in cases where we can immediately predict the consequences of doing so — namely that we alleviate the suffering right in front of us — it seems plausible to hold that this stance also generalizes to consequences that are less immediate. In other words, if this stance is sound in cases of immediate suffering prevention — or even if it just has some degree of soundness in such cases — it plausibly also has some degree of soundness when it comes to suffering prevention within a broader range of consequences that we can meaningfully estimate and influence.

This is also in line with the view that we have (at least somewhat) greater moral responsibility toward that which occurs within our local sphere of assessable influence. This view is related to, and may be justified in terms of, the “ought implies can” principle. After all, if we are bound to be clueless and unable to deliberately influence very long-run consequences, then, if we accept some version of the “ought implies can” principle, it seems that we cannot have any moral responsibility or moral duties to deliberately shape those long-run consequences — or at least such moral responsibility is plausibly diminished. In contrast, the “ought implies can” principle is perfectly consistent with moral responsibility within the scope of consequences that we realistically can estimate and deliberately influence in a meaningful way.

Thus, if we give some weight to an “ought implies can” conception of moral responsibility, this would seem to support the idea that we have (at least somewhat) greater moral responsibility toward that which occurs within our sphere of assessable influence. An alternative way to phrase it might be to say that our sphere of assessable influence is a special part of the universe for us, in that we are uniquely positioned to predict and steer events in that part compared to elsewhere, and this arguably gives us a (somewhat) special moral responsibility toward that part of the universe.

Another potential reason to give some weight to views centered on realistically assessable consequences, or more generally to views that entail discounting in some form, is that other sensible people endorse such views based on reasons that seem defensible to some degree. For example, it is common for economists to endorse models that involve temporal discounting, not just in descriptive models but also in prescriptive or normative models (see e.g. Arrow et al., 1996). The justifications for such discounting might be that our level of moral concern should be adjusted for uncertainty about whether there will be any future, uncertainty about our ability to deliberately influence the future, and the possibility that the future will be better able to take care of itself and its problems (relative to earlier problems that we could prioritize instead).

One might object that such reasons for discounting should be incorporated at a purely empirical level, without any discounting at the moral level, and I would largely agree with that sentiment. (Note that when applied at a strictly empirical or practical level, those reasons and adjustments are candidate ways to avoid paralysis without any discounting at the moral level.)

Yet even if we think such considerations should mostly or almost exclusively be applied at the empirical level, it might still be defensible to also invoke them to justify some measure of discounting directly at the level of one’s moral view and moral concerns, or at least as a tiny sub-component within one’s broader moral view. In other words, it might be defensible to allow empirical considerations of the kind listed above to inform and influence our fundamental moral values, at least to a small degree.

To be clear, it is not just some selection of economists who endorse normative discounting or scope-adjustment in some form. As noted above, it is also found among those who endorse “reasonable consequentialism” and consequentialism framed in terms of foreseeable consequences. And similar views can be found among people who seek to reduce suffering.

For example, Brian Tomasik has long endorsed a kind of split between reducing suffering effectively in the near term versus reducing suffering effectively across all time. In particular, regarding altruistic efforts and donations, he writes that “splitting is rational if you have more than one utility function”, and he devotes at least 40 percent of his resources toward short-term efforts to reduce suffering (Tomasik, 2015). Jesse Clifton seems to partially endorse a similar approach focused on reasons that we can realistically weigh up — an approach that in his view “probably implies restricting attention to near-term consequences” (see also Clifton, 2025). The views endorsed by Tomasik and Clifton explicitly give some degree of special weight to near-term or realistically assessable consequences, and these views and the judgments underlying them seem fairly defensible.

Lastly, it is worth emphasizing just how weak a claim we are considering here. In particular, in the framework outlined above, all that is required for the simple practical asymmetry argument to go through is that we give any non-zero weight to a non-clueless view focused on realistically assessable consequences, or some other non-clueless view centered on consequences.

That is, we are not talking about accepting this as the most plausible view, or even as a moderately plausible view. Its role in the practical framework above is more that of a humble tiebreaker — a view that we can consult as an nth-best option if other views fail to give us guidance and if we give this kind of view just the slightest weight. And the totality of reasons listed here arguably justifies granting it at least a tiny degree of plausibility or acceptance.

Arguments I have not made

One could argue that something akin to the approach outlined here would also be optimal for reducing suffering in expectation across all space and time. In particular, one could argue that such an unrestricted moral aim would in practice imply a focus on realistically assessable consequences. I am open to that argument — after all, it is difficult to see what else the recommended focus could be, to the extent there is one.

For similar reasons, one could argue that a practical focus on realistically assessable consequences represents a uniquely safe and reasonable bet from a consequentialist perspective: it is arguably the most plausible candidate for what a consequentialist view would recommend as a practical focus in any case, whether scope-adjusted or not. Thus, from our position of deep uncertainty — including uncertainty about whether we are bound to be clueless — it arguably makes convergent sense to try to estimate the furthest depths of assessable consequences and to seek to act on those estimates, at least to the extent that we are concerned with consequences.

Yet it is worth being clear that the argument I have made here does not rely on any of these claims or arguments. Indeed, it does not rely on any claims about what is optimal for reducing suffering across all space and time.

As suggested above, the conditional claim I have argued for here is ultimately a very weak one about giving minimal weight to what seems like a fairly moderate and in some ways commonsensical moral view or idea (e.g. it seems fairly commonsensical to be concerned with consequences to the extent that we can realistically estimate and be guided by them). The core argument presented in this essay does not require us to accept any controversial empirical positions.

Conclusion

For some of our problems, perhaps the best we can do is to find “second best solutions” — that is, solutions that do not satisfy all our preferred criteria, yet which are nevertheless better than any other realistic solution. This may also be true when it comes to reducing suffering in a potentially infinite universe. We might be in an unpredictable sea of infinite consequences that ripple outward forever (Schwitzgebel, 2024). But even if we are, this need not prevent us from trying to reduce suffering in effective and sensible ways within a realistic scope. After all, compared to simply giving up on trying to reduce suffering, it seems less arbitrary and more plausible to at least try to reduce suffering within the domain of consequences we can realistically assess and be guided by.

Acknowledgments

Thanks to Tobias Baumann, Jesse Clifton, and Simon Knutsson for helpful comments.

Some pitfalls of utilitarianism

My aim in this post is to highlight and discuss what I consider to be some potential pitfalls of utilitarianism. These are not necessarily pitfalls that undermine utilitarianism at a theoretical level (although some of them might also pose a serious challenge at that level). As I see them, they are more pitfalls at the practical level, relating to how utilitarianism is sometimes talked about, thought about, and acted on in ways that may be suboptimal by the standards of utilitarianism itself.

I should note from the outset that this post is not inspired by recent events involving dishonest and ruinous behavior by utilitarian actors; I have been planning to write this post for a long time. But recent events arguably serve to highlight the importance of some of the points I raise below.


Contents

  1. Restrictive formalisms and “formalism first”
  2. Risky and harmful decision procedures
    1. Allowing speculative expected value calculations to determine our actions
    2. Underestimating the importance of emotions, virtues, and other traits of moral actors
    3. Uncertainty-induced moral permissiveness
    4. Uncertainty-induced lack of moral drive
    5. A more plausible approach
  3. The link between utilitarian judgments and Dark Triad traits: A cause for reflection
  4. Acknowledgments

Restrictive formalisms and “formalism first”

A potential pitfall of utilitarianism, in terms of how it is commonly approached, is that it can make us quick to embrace certain formalisms and conclusions, as though we have to accept them on pain of mathematical inconsistency.

Consider the following example: Alice is a utilitarian who thinks that a certain mildly enjoyable experience, x, has positive value. On Alice’s view, it is clear that no number of instances of x would be worse than a state of extreme suffering, since a state of extreme suffering and a mildly enjoyable experience are completely different categories of experience. Over time, Alice reads about different views of wellbeing and axiology, and she eventually changes her position such that she finds it more plausible that no experiential states are above a neutral state, and that no states have intrinsic positive value (i.e. she comes to embrace a minimalist axiology).

Alice thus no longer considers it plausible to assign positive value to experience x, and instead now assigns mildly negative value to the experience (e.g. because the experience is not entirely flawless; it contains some bothersome disturbances). Having changed her mind about the value of experience x, Alice now feels mathematically compelled to say that sufficiently many instances of that experience are worse than any experience of extreme suffering, even though she finds this implausible on its face — she still thinks state x and states of extreme suffering belong to wholly different categories of experience.

To be clear, the point I am trying to make here is not that the final conclusion that Alice draws is implausible. My point is rather that certain prevalent ways of formalizing value can make people feel needlessly compelled to draw particular conclusions, as though there are no coherent alternatives, when in fact there are. More generally, there may be a tendency to “put formalism first”, as it were, rather than to consider substantive plausibility first, and to then identify a coherent formalism that fits our views of substantive plausibility.
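
To illustrate that coherent alternatives exist, here is a minimal sketch of one such alternative — a lexicographic representation of disvalue, using toy magnitudes of my own — under which no number of mildly flawed experiences sums to anything worse than extreme suffering:

```python
# Toy lexicographic disvalue: a pair (extreme_suffering, mild_disvalue).
# Python compares tuples lexicographically, so the first component
# dominates no matter how large the second one grows.

def total_disvalue(experiences):
    """Sum the (extreme, mild) components across all experiences."""
    return (sum(e[0] for e in experiences),
            sum(e[1] for e in experiences))

extreme_suffering = (1, 0)   # one instance of extreme suffering
mildly_flawed_x   = (0, 1)   # one instance of Alice's experience x

pile_of_x = total_disvalue([mildly_flawed_x] * 10**6)
print(pile_of_x < extreme_suffering)   # True: still less bad than extreme suffering
```

Whether such a lexical formalism is ultimately the most plausible one is a separate question; the point is only that Alice is not forced into the aggregative conclusion on pain of mathematical inconsistency.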

Note that the pitfall I am gesturing at here is not one that is strictly implied by utilitarianism, as one can be a utilitarian yet still reject standard formalizations of utilitarianism. But being bound to a restrictive formalization scheme nevertheless seems common, in my experience, among those who endorse or sympathize with utilitarianism.

Risky and harmful decision procedures

A standard distinction in consequentialist moral theory is that between ‘consequentialist criteria of rightness’ and ‘consequentialist decision procedures’. One might endorse a consequentialist criterion of rightness — meaning that consequences determine whether a given action is right or wrong — without necessarily endorsing consequentialist decision procedures, i.e. decision procedures in which one decides how to act based on case-by-case calculations of the expected outcomes.

Yet while this distinction is often emphasized, it still seems that utilitarianism is prone to inspire suboptimal decision procedures, also by its own standards (as a criterion of rightness). The following are a few of the ways in which utilitarianism can inspire suboptimal decision procedures, attitudes, and actions by its own standards.

Allowing speculative expected value calculations to determine our actions

A particular pitfall is to let our actions be strongly determined by speculative expected value calculations. There are various reasons why this may be suboptimal by utilitarian standards, but an important one is simply that the probabilities that go into such calculations are likely to be inaccurate. If our probability estimates on a given matter are highly uncertain and likely to change a lot as we learn more, there is a large risk that it is suboptimal to make any strong bets on our current estimates.

The robustness of a given probability estimate is thus a key factor to consider when deciding whether to act on that estimate, yet it can be easy to neglect this factor in real-world decisions.
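
To illustrate how fragile such calculations can be, here is a minimal sketch (with made-up numbers) that resamples an uncertain probability estimate and checks how often the expected-value ranking of two options flips:

```python
import random

random.seed(0)

HUGE, MODEST = 1_000_000.0, 100.0   # payoffs in arbitrary value units
P_B = 0.9                           # well-grounded probability for option B

def sample_p_a():
    """Our point estimate for option A is 0.0002, but suppose the evidence
    is consistent with anything from 0 to 0.0004 (a crude stand-in for
    a highly uncertain, non-robust estimate)."""
    return random.uniform(0.0, 0.0004)

trials = 100_000
flips = sum(1 for _ in range(trials)
            if sample_p_a() * HUGE < P_B * MODEST)

# The point estimate says EV(A) = 200 > EV(B) = 90, yet under the
# resampled estimates the ranking reverses in a substantial share
# of plausible worlds (~22 percent here).
print(flips / trials)
```

A point estimate alone conceals exactly this kind of fragility, which is why the robustness of the underlying probabilities deserves explicit attention.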

Underestimating the importance of emotions, virtues, and other traits of moral actors

A related pitfall is to underestimate the significance of emotions, attitudes, and virtues. Specifically, if we place a strong emphasis on the consequences of actions, we might in turn be inclined to underemphasize the traits and dispositions of the moral actors themselves. Yet the traits and dispositions of moral actors are often critical to emphasize and to actively develop if we are to create better outcomes. Our cerebral faculties and our intuitive attitudinal faculties can both be seen as tools that enable us to navigate the world, and the latter are often more helpful for creating desired outcomes than the former (cf. Gigerenzer, 2001).

A specific context in which I and others have tried to argue for the importance of underlying attitudes and traits, in contrast to mere cerebral beliefs, is when it comes to animal ethics. In particular, engaging in practices that are transparently harmful and exploitative toward non-human beings is harmful not only in terms of how it directly contributes to those specific exploitative practices, but also in terms of how it shapes our emotions, attitudes, and traits — and thus ultimately our behavior.

More generally, to emphasize outcomes while placing relatively little emphasis on the traits of humans, as moral actors, seems to overlook the largely habitual and disposition-based nature of human behavior. After all, our emotions and attitudes not only play important roles in our individual motivations and actions, but also in the social incentives that influence the behavior of others (cf. Haidt, 2001).

In short, if one embraces a consequentialist criterion of rightness, it seems that there are good reasons to cultivate the temperament of a virtue ethicist and the felt attitudes of a non-consequentialist who finds certain actions unacceptable in practically all situations.

Uncertainty-induced moral permissiveness

Another pitfall is to practically surrender one’s capacity for moral judgment due to uncertainty about long-term outcomes. In its most extreme manifestations, this might amount to declaring that we do not know whether people who committed large-scale atrocities in the past acted wrongly, since we do not know the ultimate consequences of those actions. But perhaps a more typical manifestation is to fail to judge, let alone oppose, ongoing harmful actions and intolerant values (e.g. clear cases of discrimination), again with reference to uncertainty about the long-term consequences of those actions and values.

This pitfall relates to the point about dispositions and attitudes made above, in that the disposition to be willing to judge and oppose harmful actions and views plausibly has better overall consequences than a disposition to be meek and unwilling to take a strong stance against such things.

After all, while there is significant uncertainty about the long-term future, one can still make reasonable inferences about which broad directions we should ideally steer our civilization toward over the long term (e.g. toward showing concern for suffering in prudent yet morally serious ways). Utilitarians have reason to help steer the future in those directions, and to develop traits and attitudes that are commensurate with such directional changes. (See also “Radical uncertainty about outcomes need not imply (similarly) radical uncertainty about strategies”.)

Uncertainty-induced lack of moral drive

A related pitfall is uncertainty-induced lack of moral drive, whereby empirical uncertainty serves as a stumbling block to dedicated efforts to help others. This is probably also starkly suboptimal, for reasons similar to those outlined above: all things considered, it is likely ideal to develop a burning drive to help other sentient beings, despite uncertainty about long-term outcomes.

Perhaps the main difficulty in this respect is to know which particular project or aim is most important to work on. Yet a potential remedy to this problem (here conveyed in a short and crude fashion) might be to first make a dedicated effort toward the concrete goal of figuring out which projects or aims seem most worth pursuing — i.e. a broad and systematic search, informed by copious reading. And when one has eventually identified an aim or project that seems promising, it might be helpful to somewhat relax the “doubting modules” of our minds and to stick to that project for a while, pursuing the chosen aim with dedication (unless something clearly better comes up).

A more plausible approach

The previous sections have mostly pointed to suboptimal ways to approach utilitarian decision procedures. In this section, I want to briefly outline what I would consider a more defensible way to approach decision-making from a utilitarian perspective (whether one is a pure utilitarian or whether one merely includes a utilitarian component in one’s moral view).

I think two key facts must inform any plausible approach to utilitarian decision procedures:

  1. We have massive empirical uncertainty.
  2. We humans have a strong proclivity to deceive ourselves in self-serving ways.

These two observations carry significant implications. In short, they suggest that we should generally approach moral decisions with considerable humility, and with a strong sense of skepticism toward conclusions that are conveniently self-serving or low on integrity.

Given our massive uncertainty and our endlessly rationalizing minds, the ideal approach to utilitarian decision procedures is probably one that has a rather large distance between the initial question of “how to act” and the final decision to pursue a given action (at least when one is trying to calculate one’s way to an optimal decision). And this distance should probably be especially large if the decision that at first seems most recommendable is one that other moral views, along with common-sense intuitions, would deem profoundly wrong.

In other words, it seems that utilitarian decision procedures are best approached by assigning a fairly high prior to the judgments of other ethical views and common-sense moral intuitions (in terms of how plausible those judgments are from a utilitarian perspective), at least when these other views and intuitions converge strongly on a given conclusion. And it seems warranted to then be quite cautious and slow to update away from that prior, in part because of our massive uncertainty and our self-deceived minds. This is not to say that one could not end up with significant divergences relative to other widely endorsed moral views, but merely that such strong divergences probably need to be supported by a level of evidence that exceeds a rather high bar.
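
As a rough illustration of how high this bar can be, here is a minimal sketch in Bayesian terms, with a prior of my own choosing standing in for the convergent judgment of other views and common-sense intuitions:

```python
def posterior(prior, bayes_factor):
    """Posterior probability after evidence with the given Bayes factor."""
    odds = (prior / (1.0 - prior)) * bayes_factor
    return odds / (1.0 + odds)

# Start 99 percent confident that the convergent judgment is also correct
# by utilitarian lights, i.e. 1 percent that the divergent act is right.
PRIOR_DIVERGENT = 0.01

# Evidence we judge to be 10x more likely if the divergent act is right
# still leaves us at ~9 percent confidence in that act...
print(posterior(PRIOR_DIVERGENT, 10))    # ~0.092

# ...and only a Bayes factor of 99 gets us to the 50/50 mark.
print(posterior(PRIOR_DIVERGENT, 99))    # 0.5
```

And given our rationalizing minds, even our estimates of such evidential strength deserve suspicion, which raises the effective bar further still.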

Likewise, it seems worth approaching utilitarian decision procedures with a prior that strongly favors actions of high integrity, not least because we should expect our rationalizing minds to be heavily biased toward low integrity — especially when nobody is looking.

Put briefly, it seems that a more defensible approach to utilitarian decision procedures would be animated by significant humility and would embody a strong inclination toward key virtues of integrity, kindness, honesty, etc., partly due to our strong tendency to excuse and rationalize deficiencies in these regards.

There are many studies that find a modest but significant association between proto-utilitarian judgments and the personality traits of psychopathy (impaired empathy) and Machiavellianism (manipulativeness and deceitfulness). (See Bartels & Pizarro, 2011; Koenigs et al., 2012; Gao & Tang, 2013; Djeriouat & Trémolière, 2014; Amiri & Behnezhad, 2017; Balash & Falkenbach, 2018; Karandikar et al., 2019; Halm & Möhring, 2019; Dinić et al., 2020; Bolelli, 2021; Luke & Gawronski, 2021; Schönegger, 2022.)

Specifically, the aspect of utilitarian judgment that seems most associated with psychopathy is the willingness to commit harm for the sake of the greater good, whereas endorsement of impartial beneficence — a core feature of utilitarianism and many other moral views — is associated with empathic concern, and is thus negatively associated with psychopathy (Kahane et al., 2018; Paruzel-Czachura & Farny, 2022). Another study likewise found that the connection between psychopathy and utilitarian moral judgments is in part explained by a reduced aversion to carrying out harmful acts (Patil, 2015).

Of course, whether a particular moral view, or a given feature of a moral view, is associated with certain undesirable personality traits by no means refutes that moral view. But the findings reviewed above might still be a cause for self-reflection among those of us who endorse or sympathize with some form of utilitarianism.

For example, maybe utilitarians are generally inclined to have fewer moral inhibitions compared to most people — e.g. because utilitarian reasoning might override intuitive judgments and norms, or because utilitarians are (perhaps) above average in trait Machiavellianism, in which case they might have fewer strongly felt moral inhibitions to overcome in the first place. And if utilitarians do tend to have fewer or weaker moral restraints of certain kinds, this could in turn dispose them to be less ethical in some respects, also by their own standards.

To be clear, this is all somewhat speculative. Yet, at the same time, these speculations are not wholly unmotivated. In terms of potential upshots, it seems that a utilitarian proneness to reduced moral restraint, if real, would give utilitarian actors additional reason to be skeptical of inclinations to disregard common moral inhibitions against harmful acts and low-integrity behavior. In short, it would give utilitarians even more reason to err on the side of integrity.

Acknowledgments

For helpful comments, I am grateful to Tobias Baumann, Simon Knutsson, and Winston Oswald-Drummond.

Antinatalism and reducing suffering: A case of suspicious convergence

First published: Feb. 2021. Last update: Dec. 2022


Two positions are worth distinguishing. One is the view that we should reduce (extreme) suffering as much as we can for all sentient beings. The other is the view that we should advocate for humans not to have children.

It may seem intuitive to think that the former position implies the latter. That is, to think that the best way to reduce suffering for all sentient beings is to advocate for humans not to have children. My aim in this brief essay is to outline some of the reasons to be skeptical of this claim.

Suspicious convergence

Lewis (2016) warns of “suspicious convergence”, which he introduces with the following toy example:

Oliver: … Thus we see that donating to the opera is the best way of promoting the arts.

Eleanor: Okay, but I’m principally interested in improving human welfare.

Oliver: Oh! Well I think it is also the case that donating to the opera is best for improving human welfare too.

The general point is that, for any set of distinct altruistic aims or endeavors we may consider, we should be a priori suspicious of the claim that they are perfectly convergent — i.e. that directly pursuing one of them also happens to be the very best thing we can do for achieving the other. Justifying such a belief would require good, object-level reasons. And in the case of the respective endeavors of reducing suffering and advocating for humans not to procreate, we in a sense find the opposite, as there are good reasons to be skeptical of a strong degree of convergence, and even to think that such antinatalist advocacy might increase future suffering.

The marginal impact of antinatalist advocacy

A key point when evaluating the impact of altruistic efforts is that we need to think at the margin: how does our particular contribution change the outcome, in expectation? This is true whether our aims are modest or maximally ambitious — our actions and resources still represent but a very small fraction of the total sum of actions and resources, and we can still only exert relatively small pushes toward our goals.

Direct effects

What, then, is the marginal impact of advocating for people not to have children? One way to try to answer this question is to explore the expected effects of preventing a single human birth. Antinatalist analyses of this question are quick to point out the many harms caused by a single human birth, which must indeed be considered. Yet what these analyses tend not to consider are the harms that a human birth would prevent.

For example, in his book Better Never to Have Been, David Benatar writes about “the suffering inflicted on those animals whose habitat is destroyed by encroaching humans” (p. 224) — which, again, should definitely be included in our analysis. Yet he fails to consider the many births and all the suffering that would be prevented by an additional human birth, for example via its marginal effects on habitat reduction (“fewer people means more animals”). As Brian Tomasik argues, when we consider a wider range of the effects humans have on animal suffering, “it seems plausible that encouraging people to have fewer children actually causes an increase in suffering and involuntary births.”

This highlights how a one-sided analysis such as Benatar’s is deeply problematic when evaluating potential interventions. We cannot simply look at the harms prevented by our pet interventions without considering how they might lead to more harm. Both things must be considered.

To be clear, the considerations above regarding the marginal effects of human births on animal suffering by no means represent a complete analysis of the effects of additional human births, or of advocating for humans not to have children. But they do represent reasons to doubt that such advocacy is among the very best things we can do to reduce suffering for all sentient beings, at least in terms of the direct effects, which leads us to the next point.

Long-term effects

Some seem to hold that the main reason to advocate against human procreation is not the direct effects, but rather its long-term effects on humanity’s future. I agree that the influence our ideas and advocacy efforts have on humanity’s long-term future is plausibly the most important thing about them, and I think many antinatalists are likely to have a positive influence in this regard by highlighting the moral significance of suffering (and the relative insignificance of pleasure).

But the question is why we should think that the best way to steer humanity’s long-term future toward less suffering is to argue for people not to have children. After all, the space of possible interventions we could pursue to reduce future suffering is vast, and it would be quite a remarkable coincidence if relatively simple interventions — such as advocating for antinatalism or veganism — happened to be the very best way to reduce suffering, or even among the very best ways.

In particular, the greatest risk from a long-term perspective is that things somehow go awfully wrong, and that we counterfactually greatly increase future suffering, either by creating additional sources of suffering in the future, or by simply failing to reduce existing forms of suffering when we could. And advocating for people not to have children seems unlikely to be among the best ways to reduce the risk of such failures — again since the space of possible interventions is vast, and interventions that are targeted more directly at reducing these risks, including the risk of leaving wild-animal suffering unaddressed, are probably significantly more effective than is advocating for humans not to procreate.

Better alternatives?

If our aim is to reduce suffering for all sentient beings, a plausible course of action would be to pursue an open-ended research project on how we can best achieve this aim. This is, after all, not a trivial question, and we should hardly expect the most plausible answers to be intuitive, let alone obvious. Exploring this question requires epistemic humility, and forces us to contend with the vast amount of empirical uncertainty that we are facing.

I have explored this question at length in Vinding, 2020, as have other individuals and organizations elsewhere. One conclusion that seems quite robust is that we should focus mostly on avoiding bad outcomes, whereas comparatively suffering-free future scenarios merit less priority. Another robust conclusion is that we should pursue a pragmatic and cooperative approach when trying to reduce suffering (see also Vinding, 2020, ch. 10) — not least since future conflicts are one of the main ways in which worst-case outcomes might materialize, and hence we should generally strive to reduce the risk of such conflicts.

In more concrete terms, antinatalists may be more effective if they focus on defending antinatalism for wild animals in particular. This case seems both easier and more important to make given the overwhelming amount of suffering and early death in nature. Such advocacy may have more beneficial effects in both the near term and the long term, being less at risk of increasing non-human suffering in the near term, and plausibly being more conducive to reducing worst-case risks, whether these entail spreading non-human life or simply failing to reduce wild-animal suffering.

Broadly speaking, the aim of reducing suffering would seem to recommend efforts to identify the main ways in which humanity might cause — or prevent — vast amounts of suffering in the future, and to find out how we can best navigate accordingly. None of these conclusions seem to support efforts to convince people not to have children as a particularly promising strategy, though they likely do recommend efforts to promote concern for suffering more generally.
