A virtue-based approach to reducing suffering given long-term cluelessness

This post is a follow-up to my previous essay on reducing suffering given long-term cluelessness. Long-term cluelessness is the idea that we have no clue which actions are likely to create better or worse consequences across the long-term future. In my previous post, I argued that even if we grant long-term cluelessness (a premise I remain skeptical of), we can still steer by purely consequentialist views that do not entail cluelessness and that can ground a focus on effective suffering reduction.

In this post, I will outline an alternative approach centered on virtues. I argue that even if we reject or find no guidance in any consequentialist view, we can still plausibly adopt a virtue-based approach to reducing suffering, including effective suffering reduction. Such an approach can help guide us independently of consequentialist uncertainty.


Contents

  1. What would a virtue-based approach entail?
  2. Justifications for a virtue-based approach
  3. A virtue-based approach to effective suffering reduction
  4. Conclusion

What would a virtue-based approach entail?

It can be difficult to say exactly what a virtue-based approach to reducing suffering would entail. Indeed, the absence of clear and simple rules, and an emphasis on responding wisely to ambiguous conditions through good practical judgment, are typical features of virtue-based approaches in ethics.

That said, in the broadest terms, a virtue-based approach to suffering involves having morally appropriate attitudes, sentiments, thoughts, and behaviors toward suffering. It involves relating to suffering in the way that a morally virtuous person would relate to it.

Perhaps more straightforwardly, we can say what a virtue-based approach would definitely not involve. For example, it would obviously not involve extreme vices like sadism or cruelty, nor would it involve more common yet still serious vices like being indifferent or passive in the face of suffering.

However, a virtue-based approach would not merely involve the morally unambitious aim of avoiding serious vices. It would usually be much more ambitious than that, encouraging us to aim for moral excellence across all aspects of our character — having deep sympathy and compassion, striving to be proactively helpful, having high integrity, and so on.

In this way, a virtue-based approach may invert an intuitive assumption about the implications of cluelessness. That is, rather than seeing cluelessness as a devastating consideration that potentially opens the floodgates to immoral or insensitive behavior, we can instead see it as paving the way for a focus on moral excellence. After all, if no consequentialist reasons count against a strong focus on moral excellence under assumed cluelessness, then arguably the strongest objections against such a focus fall away. As a result, we might no longer have any plausible reason not to pursue moral excellence in our character and conduct. At a minimum, we would no longer have any convenient consequentialist-framed rationalizations for our vices.

Sure, we could retreat to simply being insensitive and disengaged in the face of suffering — or even retreat to much worse vices — but I will argue that those options are less plausible.

Justifications for a virtue-based approach

There are various possible justifications for the approach outlined above. For example, one justification might be that having excellent moral character simply reflects the kind of person we ideally want to be. For some of us, such a personal desire might in itself be a sufficient reason for adopting a virtue-based approach in some form.

Complementary justifications may derive from our moral intuitions. For instance, all else equal, we might find it intuitive that it is morally preferable to embody excellent moral character than to embody serious vices, or that it is more ethical to display basic moral virtues than to lack such virtues (see also Knutsson, 2023, sec. 7.4). (Note that this differs from the justification above in that we need not personally want to be virtuous in order to have the intuition that it is more ethical to be that way.)

We may also find some justification in contractualist considerations or considerations about what kind of society we would like to live in. For example, we may ideally want to live in a society in which people adhere to virtues of compassion and care for suffering, as well as virtues of effectiveness in reducing suffering (more on this in the next section). Under contractualist-style moral frameworks, favoring such a society would in turn give us moral reason to adhere to those virtues ourselves.

A virtue-based approach might likewise find support if we consider specific cases. For example, imagine that you are a powerful war general whose soldiers are committing heinous atrocities that you have the power to stop — with senseless torture occurring on a large scale that you can halt immediately. And imagine that, given your subjective beliefs, your otherwise favored moral views all fail to give any guidance in this situation (e.g. due to uncertainty about long-term consequences). In contrast, ending the torture would obviously be endorsed by any commonsense virtue-based stance, since that is simply what a virtuous, compassionate person would do regardless of long-term uncertainty. If we agree that ending the torture is the morally right response in a case like this, then this arguably lends some support to such a virtue-based stance (as well as to other moral stances that imply the same response).

In general terms, we may endorse a virtue-based approach partly because it provides an additional moral safety net that we can fall back on when other approaches fail. That is, even if we find it most plausible to rely on other views when these provide practical recommendations, we might still find it reasonable to rely on virtue-based approaches in case those other views fall silent. Having virtue ethics as such a supportive layer can help strengthen our foundation and robustness as moral agents.

(One could also attempt to justify a virtue-based approach by appealing to consequentialist reasoning. Indeed, it could be that promoting a non-consequentialist virtue-based stance would ultimately create better consequences than not doing so. For example, the absence of such a virtue-based stance might increase the risk of extremely harmful behavior among moral agents. However, such arguments would involve premises that are not the focus of this post.)

A virtue-based approach to effective suffering reduction

One might wonder whether a virtue-based approach can ground effective suffering reduction of any kind. That is, can a virtue-based approach ground systematic efforts to reduce suffering effectively with our limited resources? In short, yes. If one deems it virtuous to try to reduce suffering in systematic and effective ways (at least in certain decisions or domains), then a virtue-based approach could provide a moral foundation for such efforts.

For instance, if given a choice between saving 10 versus 1,000 chickens from being boiled alive, we may consider it more virtuous — more compassionate and principled — to save the 1,000, even if we had no idea whether that choice ultimately reduces more suffering across all time or across all consequences that we could potentially assess.

To take a more realistic example: in a choice between donating either to a random charity or to a charity with a strong track record of preventing suffering, we might consider it more virtuous to support the latter, even if we do not know the ultimate consequences.

How would such a virtue-based approach be different from a consequentialist approach? Broadly speaking, there can be two kinds of differences. First, a virtue-based approach might differ from a consequentialist one in terms of its practical implications. For instance, in the donation example above, a virtue-based approach might recommend that we donate to the charity with a track record of suffering prevention even if we are unable to say whether it reduces suffering across all time or across all consequences that we could potentially assess.

Second, even if a virtue-based view had all the same practical implications as some consequentialist view, there would still be a difference in the underlying normative grounding or basis of these respective views. The consequentialist view would be grounded purely in the value of consequences, whereas the virtue-based view would not be grounded purely in that (even if the disvalue of suffering may generally be regarded as the most important consideration). Instead, the virtue-based approach would (also) be grounded at least partly in the kind of person it is morally appropriate to be — the kind of person who embodies a principled and judicious compassion, among other virtues (see e.g. the opening summary in Hursthouse & Pettigrove, 2003).

In short, virtue-based views represent a distinctive way in which some version of effective suffering reduction can be grounded.

Conclusion

There are many possible moral foundations for reducing suffering (see e.g. Vinding, 2020, ch. 6; Knutsson & Vinding, 2024, sec. 2). Even if we find one particular foundation to be most plausible by far, we are not forced to rest absolutely everything on such a singular and potentially brittle basis. Instead, we can adopt many complementary foundations and approaches, including an approach centered on excellent moral character that can guide us when other frameworks might fail. I think that is a wiser approach.

Reducing suffering given long-term cluelessness

An objection against trying to reduce suffering is that we cannot predict whether our actions will reduce or increase suffering in the long term. Relatedly, some have argued that we are clueless about the effects that any realistic action would have on total welfare, and this cluelessness, it has been claimed, undermines our reason to help others in effective ways. For example, DiGiovanni (2025) writes: “if my arguments [about cluelessness] hold up, our reason to work on EA causes is undermined.”

There is a grain of truth in these claims: we face enormous uncertainty when trying to reduce suffering on a large scale. Of course, whether we are bound to be completely clueless about the net effects of any action is a much stronger and more controversial claim (and one that I am not convinced of). Yet my goal here is not to discuss the plausibility of this claim. Rather, my goal is to explore the implications if we assume that we are bound to be clueless about whether any given action overall reduces or increases suffering.

In other words, without taking a position on the conditional premise, what would be the practical implications if such cluelessness were unavoidable? Specifically, would this undermine the project of reducing suffering in effective ways? I will argue not. Even if we grant complete cluelessness and thus grant that certain moral views provide no practical recommendations, we can still reasonably give non-zero weight to other moral views that do provide practical recommendations. Indeed, we can find meaningful practical recommendations even if we hold a purely consequentialist view that is exclusively concerned with reducing suffering.


Contents

  1. A potential approach: Giving weight to scope-adjusted views
  2. Asymmetry in practical recommendations
  3. Toy models
  4. Justifications and motivations
    1. Why give weight to multiple views?
    2. Why give weight to a scope-adjusted view?
  5. Arguments I have not made
  6. Conclusion
  7. Acknowledgments

A potential approach: Giving weight to scope-adjusted views

There might be many ways to ground a reasonable focus on effective suffering reduction even if we assume complete cluelessness about long-term consequences. Here, I will merely outline one candidate option, or class of options, that strikes me as fairly reasonable.

As a way to introduce this approach, say that we fully accept consequentialism in some form (notwithstanding various arguments against being a pure consequentialist, e.g. Knutsson, 2023; Vinding, 2023). Yet despite being fully convinced of consequentialism, we are uncertain or divided about which version of consequentialism is most plausible.

In particular, while we give most weight to forms of consequentialism that entail no restrictions or discounts in their scope, we also give some weight to views that entail a more focused scope. (Note that this kind of approach need not be framed in terms of moral uncertainty, which is just one possible way to frame it. An alternative is to think in terms of degrees of acceptance or levels of agreement with these respective views, cf. Knutsson, 2023, sec. 6.6.)

To illustrate with some specific numbers, say that we give 95 percent credence to consequentialism without scope limitations or adjustments of any kind, and 5 percent credence to some form of scope-adjusted consequentialism. The latter view may be construed such that its scope roughly includes those consequences we can realistically estimate and influence without being clueless. This view is similar to what has been called “reasonable consequentialism”, the view that “an action is morally right if and only if it has the best reasonably expected consequences.” It is also similar to versions of consequentialism that are framed in terms of foreseeable or reasonably foreseeable consequences (Sinnott-Armstrong, 2003, sec. 4).

To be clear, the approach I am exploring here is not committed to any particular scope-adjusted view. The deeper point is simply that we can give non-zero weight to one or more scope-adjusted versions of consequentialism, or to scope-adjusted consequentialist components of a broader moral view. Exploring which scope-adjusted view or views might be most plausible is beyond the aims of this essay, and that question arguably warrants deeper exploration.

That being said, I will mostly focus on views centered on (something like) consequences we can realistically assess and be guided by, since something in this ballpark seems like a relatively plausible candidate for scope-adjustment. I acknowledge that there are significant challenges in clarifying the exact nature of this scope, which is likely to remain an open problem subject to continual refinement. After all, the scope of assessable consequences may grow as our knowledge and predictive power grow.

Asymmetry in practical recommendations

The relevance of the approach outlined above becomes apparent when we evaluate the practical recommendations of the clueless versus non-clueless views incorporated in this approach. A completely clueless consequentialist view would give us no recommendations about how to act, whereas a non-clueless scope-adjusted view would give us practical recommendations. (It would do so by construction if its scope includes those consequences we can realistically estimate and influence without being clueless.)

In other words, the resulting balance of recommendations from those respective views is asymmetric: the non-clueless view gives us substantive guidance, while the clueless view suggests no alternative and hence has nothing to add to those recommendations. Thus, if we hold something like the 95/5 combined consequentialist view described above — or indeed any non-zero split between these component views — it seems that we have reason to follow the non-clueless view, all things considered.
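This practical asymmetry can be made concrete with a small sketch. The weights, option names, and scores below are purely hypothetical illustrations of mine (not taken from the post): a fully clueless view abstains on every option, so any non-zero weight on the non-clueless view determines the overall ranking.

```python
# Toy aggregation of moral views under a hypothetical 95/5 weight split.

def combined_score(option, views):
    """Weighted sum of scores; views that abstain (return None) add nothing."""
    total = 0.0
    for weight, score_fn in views:
        score = score_fn(option)
        if score is not None:  # a clueless view yields no verdict
            total += weight * score
    return total

# A completely clueless view abstains on every option.
clueless = lambda option: None

# A scope-adjusted view ranks options by assessable suffering reduction
# (illustrative scores only).
scope_adjusted_scores = {"donate_effective": 1.0, "donate_random": 0.2}

views = [(0.95, clueless), (0.05, scope_adjusted_scores.get)]
options = ["donate_random", "donate_effective"]
best = max(options, key=lambda o: combined_score(o, views))
# Because the clueless component contributes nothing either way, `best`
# follows the non-clueless view's ranking for any non-zero weight on it.
```

The point of the sketch is simply that the 95 percent weight on the clueless view drops out of the comparison entirely, leaving the 5 percent component to break the tie.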

Toy models

To give a sense of what a scope-adjusted view might look like, we can consider a toy model with an exponential discount factor and an (otherwise) expected linear increase in population size.

In such a model, roughly 99 percent of the total expected value we can influence falls within the next 700 years, implying that almost all the value we can meaningfully influence is found within that time frame.

We can also consider a model with a different discount factor and with cubic population growth, reflecting the possibility of space expansion radiating outward from Earth.

On this model, virtually all the expected value we can meaningfully influence is found within the next 10,000 years. In both of the models above, we end up with a sort of de facto “medium-termism”.

Of course, one can vary the parameters in numerous ways and combine multiple models in ways that reflect more sophisticated views of, for example, expected future populations and discount factors. Views that involve temporal discounting allow for much greater variation than what is captured by the toy models above, including views that focus on much shorter or much longer timescales. Moreover, views that involve discounting need not be limited to temporal discounting in particular, or even be phrased in terms of temporal discounting at all: temporal discounting is one way to incorporate discounting or scope adjustments, but by no means the only one.
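To make such variation concrete, here is a rough numerical sketch of toy models of this kind. The specific parameter values (a 1 percent annual discount rate and the growth coefficient) are hypothetical choices of mine for illustration; the post's own models may use different parameters.

```python
import numpy as np

def fraction_within(horizon, discount_rate, growth, exponent=1,
                    t_max=50_000.0, dt=0.5):
    """Fraction of total discounted expected value that falls within
    `horizon` years, for a value density (1 + growth*t)**exponent * exp(-r*t).
    exponent=1 models linear population growth; exponent=3 models cubic
    growth (e.g. space expansion radiating outward from Earth)."""
    t = np.arange(0.0, t_max, dt)
    density = (1.0 + growth * t) ** exponent * np.exp(-discount_rate * t)
    return density[t <= horizon].sum() / density.sum()

# With hypothetical parameters (1%/year discount, linear growth), nearly
# all influenceable value falls within the next several centuries:
linear_share = fraction_within(700, discount_rate=0.01, growth=0.02)

# With cubic growth, the bulk of value still falls within ~10,000 years:
cubic_share = fraction_within(10_000, discount_rate=0.01, growth=0.02,
                              exponent=3)
```

Varying `discount_rate`, `growth`, and `exponent` shifts where the bulk of assessable value lies, which is the sense in which models of this form tend to yield a de facto “medium-termism” under many parameter choices.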

Furthermore, if we give some plausibility to views that involve discounting of some kind, we need not be committed to a single view for every single domain. We may hold that the best view, or the view we give the greatest weight, will vary depending on the issue at hand (cf. Dancy, 2001; Knutsson, 2023, sec. 3). A reason for such variability may be that the scope of outcomes we can meaningfully predict often differs significantly across domains. For example, there is a stark difference in the predictability of weather systems versus planetary orbits, and similar differences in predictability might be found across various practical and policy-relevant domains.

Note also that a non-clueless scope-adjusted view need not be rigorously formalized; it could, for example, be phrased in terms of our all-things-considered assessments, which might be informed by myriad formal models, intuitions, considerations, and so on.

Justifications and motivations

What might justify or motivate the basic approach outlined above? This question can be broken into two sub-questions. First, why give weight to more than just a single moral view? Second, provided we give some weight to more than a single view, why give any weight to a scope-adjusted view concerned with consequences?

Why give weight to multiple views?

Reasons for giving weight to more than a single moral view or theory have been explored elsewhere (see e.g. Dancy, 2001; MacAskill et al., 2020, ch. 1; Knutsson, 2023; Vinding, 2023).

One reason that has been given is that no single moral theory seems able to give satisfying answers to all moral questions (Dancy, 2001; Knutsson, 2023). And even if our preferred moral theory appears to be a plausible candidate for answering all moral questions, it is arguably still appropriate to have less than perfect confidence or acceptance in that theory (MacAskill et al., 2020, ch. 1; Vinding, 2023). Such moderation might be grounded in epistemic modesty and humility, a general skepticism toward fanaticism, and the prudence of diversifying one’s bets. It might also be grounded partly in the observation that other thoughtful people hold different moral views and that there is something to be said in favor of those views.

Likewise, giving exclusive weight to a single moral view might make us practically indifferent or paralyzed, whether it be due to cluelessness or due to underspecification as to what our preferred moral theory implies in some real-world situation. Critically, such practical indifference and paralysis may arise even in the face of the most extreme atrocities. If we find this to be an unreasonable practical implication, we arguably have reason not to give exclusive weight to a moral view that potentially implies such paralysis.

Finally, from a perspective that involves degrees of acceptance or agreement with moral views, a reason for giving weight to multiple views might simply be that those moral views each seem intuitively plausible or that we intuitively agree with them to some extent (cf. Knutsson, 2023, sec. 6.6).

Why give weight to a scope-adjusted view?

What reasons could be given for assigning weight to a scope-adjusted view in particular? One reason may be that it seems reasonable to be concerned with consequences to the extent that we can realistically estimate and be guided by them. That is arguably a sensible and intuitive scope for concern about consequences — or at least it appears sensible to some non-zero degree. If we hold this intuition, even if just to a small degree, it seems reasonable to have a final view in which we give some weight to a view focused on realistically assessable consequences (whatever the scope of those consequences ultimately turns out to be).

Some support may also be found in our moral assessments and stances toward local cases of suffering. For example, if we were confronted with an emergency situation in which some individuals were experiencing intense suffering in our immediate vicinity, and if we were readily able to alleviate this suffering, it would seem morally right to help these beings even if we cannot foresee the long-run consequences. (All theoretical and abstract talk aside, I suspect the vast majority of consequentialists would agree with that position in practice.)

Presumably, at least part of what would make such an intervention morally right is the badness of the suffering that we prevent by intervening. And if we hold that it is morally appropriate to intervene to reduce suffering in cases where we can immediately predict the consequences of doing so — namely that we alleviate the suffering right in front of us — it seems plausible to hold that this stance also generalizes to consequences that are less immediate. In other words, if this stance is sound in cases of immediate suffering prevention — or even if it just has some degree of soundness in such cases — it plausibly also has some degree of soundness when it comes to suffering prevention within a broader range of consequences that we can meaningfully estimate and influence.

This is also in line with the view that we have (at least somewhat) greater moral responsibility toward that which occurs within our local sphere of assessable influence. This view is related to, and may be justified in terms of, the “ought implies can” principle. After all, if we are bound to be clueless and unable to deliberately influence very long-run consequences, then, if we accept some version of the “ought implies can” principle, it seems that we cannot have any moral responsibility or moral duties to deliberately shape those long-run consequences — or at least such moral responsibility is plausibly diminished. In contrast, the “ought implies can” principle is perfectly consistent with moral responsibility within the scope of consequences that we realistically can estimate and deliberately influence in a meaningful way.

Thus, if we give some weight to an “ought implies can” conception of moral responsibility, this would seem to support the idea that we have (at least somewhat) greater moral responsibility toward that which occurs within our sphere of assessable influence. An alternative way to phrase it might be to say that our sphere of assessable influence is a special part of the universe for us, in that we are uniquely positioned to predict and steer events in that part compared to elsewhere, and this arguably gives us a (somewhat) special moral responsibility toward that part of the universe.

Another potential reason to give some weight to views centered on realistically assessable consequences, or more generally to views that entail discounting in some form, is that other sensible people endorse such views based on reasons that seem defensible to some degree. For example, it is common for economists to endorse models that involve temporal discounting, not just in descriptive models but also in prescriptive or normative models (see e.g. Arrow et al., 1996). The justifications for such discounting might be that our level of moral concern should be adjusted for uncertainty about whether there will be any future, uncertainty about our ability to deliberately influence the future, and the possibility that the future will be better able to take care of itself and its problems (relative to earlier problems that we could prioritize instead).

One might object that such reasons for discounting should be incorporated at a purely empirical level, without any discounting at the moral level, and I would largely agree with that sentiment. (Note that when applied at a strictly empirical or practical level, those reasons and adjustments are candidate ways to avoid paralysis without any discounting at the moral level.)

Yet even if we think such considerations should mostly or almost exclusively be applied at the empirical level, it might still be defensible to also invoke them to justify some measure of discounting directly at the level of one’s moral view and moral concerns, or at least as a tiny sub-component within one’s broader moral view. In other words, it might be defensible to allow empirical considerations of the kind listed above to inform and influence our fundamental moral values, at least to a small degree.

To be clear, it is not just some selection of economists who endorse normative discounting or scope-adjustment in some form. As noted above, it is also found among those who endorse “reasonable consequentialism” and consequentialism framed in terms of foreseeable consequences. And similar views can be found among people who seek to reduce suffering.

For example, Brian Tomasik has long endorsed a kind of split between reducing suffering effectively in the near term versus reducing suffering effectively across all time. In particular, regarding altruistic efforts and donations, he writes that “splitting is rational if you have more than one utility function”, and he devotes at least 40 percent of his resources toward short-term efforts to reduce suffering (Tomasik, 2015). Jesse Clifton seems to partially endorse a similar approach focused on reasons that we can realistically weigh up — an approach that in his view “probably implies restricting attention to near-term consequences” (see also Clifton, 2025). The views endorsed by Tomasik and Clifton explicitly give some degree of special weight to near-term or realistically assessable consequences, and these views and the judgments underlying them seem fairly defensible.

Lastly, it is worth emphasizing just how weak of a claim we are considering here. In particular, in the framework outlined above, all that is required for the simple practical asymmetry argument to go through is that we give any non-zero weight to a non-clueless view focused on realistically assessable consequences, or some other non-clueless view centered on consequences.

That is, we are not talking about accepting this as the most plausible view, or even as a moderately plausible view. Its role in the practical framework above is more that of a humble tiebreaker — a view that we can consult as an nth-best option if other views fail to give us guidance and if we give this kind of view just the slightest weight. And the totality of reasons listed here arguably justifies granting it at least a tiny degree of plausibility or acceptance.

Arguments I have not made

One could argue that something akin to the approach outlined here would also be optimal for reducing suffering in expectation across all space and time. In particular, one could argue that such an unrestricted moral aim would in practice imply a focus on realistically assessable consequences. I am open to that argument — after all, it is difficult to see what else the recommended focus could be, to the extent there is one.

For similar reasons, one could argue that a practical focus on realistically assessable consequences represents a uniquely safe and reasonable bet from a consequentialist perspective: it is arguably the most plausible candidate for what a consequentialist view would recommend as a practical focus in any case, whether scope-adjusted or not. Thus, from our position of deep uncertainty — including uncertainty about whether we are bound to be clueless — it arguably makes convergent sense to try to estimate the furthest depths of assessable consequences and to seek to act on those estimates, at least to the extent that we are concerned with consequences.

Yet it is worth being clear that the argument I have made here does not rely on any of these claims or arguments. Indeed, it does not rely on any claims about what is optimal for reducing suffering across all space and time.

As suggested above, the conditional claim I have argued for here is ultimately a very weak one about giving minimal weight to what seems like a fairly moderate and in some ways commonsensical moral view or idea (e.g. it seems fairly commonsensical to be concerned with consequences to the extent that we can realistically estimate and be guided by them). The core argument presented in this essay does not require us to accept any controversial empirical positions.

Conclusion

For some of our problems, perhaps the best we can do is to find “second best solutions” — that is, solutions that do not satisfy all our preferred criteria, yet which are nevertheless better than any other realistic solution. This may also be true when it comes to reducing suffering in a potentially infinite universe. We might be in an unpredictable sea of infinite consequences that ripple outward forever (Schwitzgebel, 2024). But even if we are, this need not prevent us from trying to reduce suffering in effective and sensible ways within a realistic scope. After all, compared to simply giving up on trying to reduce suffering, it seems less arbitrary and more plausible to at least try to reduce suffering within the domain of consequences we can realistically assess and be guided by.

Acknowledgments

Thanks to Tobias Baumann, Jesse Clifton, and Simon Knutsson for helpful comments.

A convergence of moral motivations

My aim in this post is to outline a variety of motivations that all point me in broadly the same direction: toward helping others in general and prioritizing the reduction of suffering in particular.


Contents

  1. Why list these motivations?
  2. Clarification
  3. Compassion
  4. Consistency
  5. Common sense: A trivial sacrifice compared to what others might gain
  6. The horror of extreme suffering: The “game over” motivation
  7. Personal identity: I am them
  8. Fairness
  9. Status and recognition
  10. Final reflections

Why list these motivations?

There are a few reasons why I consider it worthwhile to list this variety of moral motivations. For one, I happen to find it interesting to notice that my motivations for helping others are so diverse in their nature. (That might sound like a brag, but note that I am not saying that my motivations are necessarily all that flattering or unselfish.) This diversity in motivations is not obvious a priori, and it also seems different from how moral motivations are often described. For example, reasons to help others are frequently described in terms of a singular motivation, such as compassion.

Beyond mere interest, there may also be some psychological and altruistic benefits to identifying these motivations. For instance, if we realize that our commitment to helping others rests on a wide variety of motivations, this might in turn give us a greater sense that it is a robust commitment that we can be confident in, as opposed to being some brittle commitment that rests on just a single wobbly motivation.

Relatedly, if we have a sense of confidence in our altruistic commitment, and if we are aware that it rests on a broad set of motivations, this might also help strengthen and maintain this commitment. For example, one can speculate that it may be possible to tap into extra reserves of altruistic motivation by skillfully shifting between different sources of such motivation.

Another potential benefit of becoming more aware of, and drawing on, a greater variety of altruistic motivations is that they may each trigger different cognitive styles with their own unique benefits. For example, the patterns of thought and attention that are induced by compassion are likely different from those that are induced by a sense of rigorous impartiality, and these respective patterns might well complement each other.

Lastly, being aware of our altruistic motivations could help give us greater insight into our biases. For example, if we are strongly motivated by empathic concern, we might be biased toward mostly helping cute-looking beings who appeal to our empathy circuits, like kittens and squirrels, and toward downplaying the interests of beings who may look less cute, such as lizards and cockroaches. And note that such a bias can persist even if we are also motivated by impartiality at some level. Indeed, it is a recipe for bias to think that a mere cerebral endorsement of impartiality means that we will thereby adhere to impartiality at every level of our cognition. A better awareness of our moral motivations may help us avoid such naive mistakes.

Clarification

I should clarify that this post is not meant to capture everyone’s moral motivations, nor is my aim to convince people to embrace all the motivations I outline below. Rather, my intention is first and foremost to present the moral motivations that I myself am compelled by, and which all to some extent drive me to try to reduce suffering. That being said, I do suspect that many of these motivations will tend to resonate with others as well.

Compassion

Compassion has been defined as “sympathetic consciousness of others’ distress together with a desire to alleviate it”. This is similar to having empathic concern for others (compassion is often regarded as a component of empathic concern).

In contrast to some of the other motivations listed below, compassion is less cerebral and more directly felt as a motivation for helping others. For example, when we experience sympathy for someone’s misery, we hardly need to go through a sequence of inferences in order to be motivated to alleviate that misery. The motivation to help is almost baked into the sympathy itself. Indeed, studies suggest that empathic concern is a significant driver of costly altruism.

In my own case, I think compassion tends to play an important role, though I would not claim that it is sufficient or even necessary for motivating the general approach that I would endorse when it comes to helping others. One reason it is not sufficient is that it needs to be coupled with a more systematic component, which I would broadly refer to as ‘consistency’.

Consistency

As a motivation for helping others, consistency is rather different from compassion. For example, unlike compassion, consistency is cerebral in nature, to the degree that it almost has a logical or deductive character. That is, unlike compassion, consistency per se does not highlight others’ suffering or welfare from the outset. Instead, efforts to help others are more a consequence of applying consistency to our knowledge about our own direct experience: I know that intense suffering feels bad and is worth avoiding for me (all else equal), and hence, by consistency, I conclude that intense suffering feels bad and is worth avoiding for everyone (all else equal).

One might object that it is not inconsistent to view one’s own suffering as being different from the suffering of others, such as by arguing that there are relevant differences between one’s own suffering and the suffering of others. I think there are several points to discuss back and forth on this issue. However, I will not engage in such arguments here, since my aim in this section is not to defend consistency as a moral motivation, but simply to present a rough outline as to how consistency can motivate efforts to help others.

As noted above, a consistency-based motivation for helping others does not strictly require compassion. However, in psychological terms, since none of us are natural consistency-maximizers, it seems likely that compassion will usually be helpful for getting altruistic motivations off the ground in practice. Conversely, as hinted in the previous section, compassion alone is not sufficient for motivating the most effective actions for helping others. After all, one can have a strong desire to reduce suffering without having the consistency-based motivation to treat equal suffering equally and to spend one’s limited resources accordingly.

In short, the respective motivations of compassion and consistency seem to each have unique benefits that make them worth combining, and I would say that they are both core pillars in my own motivations for helping others.

Common sense: A trivial sacrifice compared to what others might gain

Another motivation that appeals to me might be described as a commonsense motivation. That is, there is a vast number of sentient beings in the world, of which I am just one, and hence the beneficial impact that I can have on other sentient beings is vastly greater than the beneficial impact I can have on my own life. After all, once my own basic needs are met, there is probably little I can do to improve my wellbeing much further. Indeed, I will likely find it more meaningful and fulfilling to try to help others than to try to improve my own happiness (cf. the paradox of hedonism and the psychological benefits of having a prosocial purpose).

Of course, it is difficult to quantify just how much greater our impact on others might be compared to our impact on ourselves. Yet given the enormous number of sentient beings who exist around us, and given that our impact potentially reaches far into the future, it is not unreasonable to think that it could be greater by at least a factor of a million (e.g. we may, in expectation, prevent at least a million times as many instances of similarly bad suffering for others as for ourselves).

In light of this massive difference in potential impact, it feels like a no-brainer to dedicate a significant amount of resources toward helping others, especially when my own basic needs are already met. Not doing so would amount to giving several orders of magnitude greater importance to my own wellbeing than to the wellbeing of others, and I see no justification for that. Indeed, one need not endorse anything close to perfect consistency and impartiality to believe that such a massively skewed valuation is implausible. It is arguably just common sense.

The horror of extreme suffering: The “game over” motivation

A particularly strong motivation for me is the sheer horror of extreme suffering. I refer to this as the “game over” motivation because that is my reaction when I witness cases of extreme suffering: a clear sense that nothing is more important than the prevention of such extreme horrors. Game over.

One might argue that this motivation is not distinct from compassion and empathic concern in the broadest sense. And I would agree that it is a species of that broad category of motivations. But I also think there is something distinctive about this “game over” motivation compared to generic empathic concern. For example, the “game over” motivation seems meaningfully different from the motivation to help someone who is struggling in more ordinary ways. In fact, I think there is a sense in which our common circuitry of sympathetic relating practically breaks down when it comes to extreme suffering. The suffering becomes so extreme and unthinkable that our “sympathometer” crashes, and we in effect check out. This is another reason it seems accurate to describe it as a “game over” motivation.

Whereas the motivations listed above all serve to motivate efforts to help others in general, the motivation described in this section is more of a driver of what, specifically, I consider the highest priority when it comes to helping others, namely to alleviate and prevent extreme suffering.

Personal identity: I am them

Another motivation derives from what may be called a universal view of personal identity, also known as open individualism. This view entails that all sentient beings are essentially different versions of you, and that there is no deep sense in which the future consciousness-moments of your future self (in the usual narrow sense) are more ‘you’ than the future consciousness-moments of other beings.

Again, I will not try to defend this view here, as opposed to just describing how it can motivate efforts to help others (for a defense, see e.g. Kolak, 2004; Leighton, 2011, ch. 7; Vinding, 2017).

I happen to accept this view of personal identity, and in my opinion it ultimately leaves no alternative but to work for the benefit of all sentient beings. In light of open individualism, it makes no more sense to endorse narrow egoism than to, say, only care about one’s own suffering on Tuesdays. From an open individualist perspective, both equally amount to an arbitrary disregard of what is, on this view, my own suffering.

This is one of the ways in which my motivations for helping others are not necessarily all that flattering: on a psychological level, I often feel that I am selfishly trying to prevent future versions of myself from being tortured, virtually none of whom will share my name.

I would say that the “I am them” motivation is generally a strong driver for me, not in a way that changes any of the basic upshots derived from the other motivations, but in a way that reinforces them.

Fairness

Considerations and intuitions related to fairness are also motivating to me. For example, I am lucky to have been born in a relatively wealthy country, and not least to have been born as a human rather than as a tightly confined chicken in a factory farm or a preyed-upon mouse in the wild. There is no sense in which I personally deserve this luck over those who are born in conditions of extreme misery and destitution. Consequently, it is only fair that I “pay back” my relative luck by working to help those beings who were or will be much less lucky in terms of their birth conditions and the like.

I should note that this is not among my stronger or more salient motivations, but I still think it has significant appeal and that it plays some role for me.

Status and recognition

Lastly, I want to highlight the motivation that any cynic would rightly emphasize, namely to gain status and recognition. Helping others can be a way to gain recognition and esteem among our peers, and I am obviously also motivated by that.

There is quite a taboo around acknowledging this motive, but I think that is a mistake. It is simply a fact about the human mind that we want recognition, and this is not necessarily a problem in and of itself. It only becomes a problem if we allow our drive for status to corrupt our efforts to help others, which is no doubt a real risk. Yet we hardly reduce that risk by pretending that we are unaffected by these drives. On the contrary, openly admitting our status motives probably gives us a better chance of mitigating their potentially corrupting influence.

Moreover, while our status drives can impede our altruistic efforts, we should not overlook the possibility that they might sometimes do the opposite, namely improve our efforts to help others.

How could that realistically happen? One way it might happen is by forcing us to seek out the assessments of informed people. That is, if our altruistic efforts are partly driven by a motive to impress relevant experts and evaluators of our work, we might be more motivated to consider and integrate a wider range of informed perspectives (compared to if we were not motivated to impress such evaluators).

Of course, this only works if we are indeed motivated to impress an informed audience, as opposed to just any audience that may be eager to shower us with recognition. Seeking the right audience to impress — those who are impressed by genuinely helpful contributions — might thus be key to making our status drives work in favor of our altruistic efforts rather than against them (cf. Hanson, 2010; 2018).

Another reason to believe that status drives can be helpful is that they have proven to be psychologically potent for human beings. Hence, if we could hypothetically rob a human brain of its status drives, we might well reduce its altruistic drives overall, even if other sources of altruistic motivation were kept intact. It might be tantamount to removing a critical part of an engine, or at least a part that adds a significant boost.

In terms of my own motivations, I would say that drives for status probably often do help motivate my altruistic efforts, whether I endorse my status drives or not. Yet it is difficult to estimate the strength and influence of these drives. After all, the status motive is regarded as unflattering, and hence there are reasons to think that my mind systematically downplays its influence. Moreover, like all of the motivations listed here, the status motive likely varies in strength depending on contextual factors, such as whether I am around other people or not; I suspect that it becomes weaker when I am more isolated, which in effect suggests a way to reduce my status drives when needed.

I should also note that I aspire to view my status drives with extreme suspicion. Despite my claims about how status drives could potentially be helpful, I think the default — if we do not make an intense effort to hone and properly direct our status drives — is that they distort our efforts to help others. And I think the endeavor of questioning our status drives tends to be extremely difficult, not least since status-seeking behavior can take myriad forms that do not look or feel anything like status-seeking behavior. It might just look like “conforming to the obviously reasonable views of my peers”, or like “pursuing this obscure and interesting idea that somehow feels very important”.

So a key question I try to ask myself is: am I really trying to help sentient beings, or am I mostly trying to raise my personal status? And I strive to look at my professed answers with skepticism. Fortunately, I feel that the “I am them” motivation can be a powerful tool in this regard. It essentially forces the selfish parts of my mind to ask: do I really want to gain status more than I want to prevent my future self from being tortured? If not, then I have strong reasons to try to reduce any torture-increasing inefficiencies that might be introduced by my status motives, and to try, if possible, to harness my status motives in the direction of reducing my future torment.

Final reflections

The motivations described above make up quite a complicated mix, from other-oriented compassion and fairness to what feels more like a self-oriented motivation aimed at sparing myself (in an expansive sense) from extreme suffering. I find it striking just how diverse these motivations are, and how they nonetheless — from so seemingly different starting points — can end up converging toward roughly the same goal: to reduce suffering for all sentient beings.

For me, this convergence makes the motivation to help others feel akin to a rope woven from many complementary strands: even if one of the strands is occasionally weakened, the others can usually still hold the rope together.

But again, it is worth stressing that the drive for status is somewhat of an exception, in that it takes serious effort to make this drive converge toward aims that truly help other sentient beings. More generally, I think it is important to never be complacent about the potential for our status drives to corrupt our motivations to help others, even if we feel like we are driven by a strong and diverse set of altruistic motivations. Status drives are like the One Ring: powerful yet easily corrupting, and they are probably best viewed as such.

Suffering-Focused Ethics: Defense and Implications

The reduction of suffering deserves special priority. Many ethical views support this claim, yet so far these have not been presented in a single place. Suffering-Focused Ethics provides the most comprehensive presentation of suffering-focused arguments and views to date, including a moral realist case for minimizing extreme suffering. The book then explores the all-important issue of how we can best reduce suffering in practice, and outlines a coherent and pragmatic path forward.




“An inspiring book on the world’s most important issue. Magnus Vinding makes a compelling case for suffering-focused ethics. Highly recommended.”
— David Pearce, author of The Hedonistic Imperative and Can Biotechnology Abolish Suffering?

“We live in a haze, oblivious to the tremendous moral reality around us. I know of no philosopher who makes the case more resoundingly than Magnus Vinding. In radiantly clear and honest prose, he demonstrates the overwhelming ethical priority of preventing suffering. Among the book’s many powerful arguments, I would call attention to its examination of the overlapping biases that perpetuate moral unawareness. Suffering-Focused Ethics will change its readers, opening new moral and intellectual vistas. This could be the most important book you will ever read.”
— Jamie Mayerfeld, professor of political science at the University of Washington, author of Suffering and Moral Responsibility and The Promise of Human Rights

“In this important undertaking, Magnus Vinding methodically and convincingly argues for the overwhelming ethical importance of preventing and reducing suffering, especially of the most intense kind, and also shows the compatibility of this view with various mainstream ethical philosophies that don’t uniquely focus on suffering. His careful analytical style and comprehensive review of existing arguments make this book valuable reading for anyone who cares about what matters, or who wishes to better understand the strong rational underpinning of suffering-focused ethics.”
— Jonathan Leighton, founder of the Organisation for the Prevention of Intense Suffering, author of The Battle for Compassion: Ethics in an Apathetic Universe

“Magnus Vinding breaks the taboo: Today, the problem of suffering is the elephant in the room, because it is at the same time the most relevant and the most neglected topic at the logical interface between applied ethics, cognitive science, and the current philosophy of mind and consciousness. Nobody wants to go there. It is not good for your academic career. Only few of us have the intellectual honesty, the mental stamina, the philosophical sincerity, and the ethical earnestness to gaze into the abyss. After all, it might also gaze back into us. Magnus Vinding has what it takes. If you are looking for an entry point into the ethical landscape, if you are ready to face the philosophical relevance of extreme suffering, then this book is for you. It gives you all the information and the conceptual tools you need to develop your own approach. But are you ready?”
— Thomas Metzinger, professor of philosophy at the Johannes Gutenberg University of Mainz, author of Being No One and The Ego Tunnel
