A virtue-based approach to reducing suffering given long-term cluelessness

This post is a follow-up to my previous essay on reducing suffering given long-term cluelessness. Long-term cluelessness is the idea that we have no clue which actions are likely to create better or worse consequences across the long-term future. In my previous post, I argued that even if we grant long-term cluelessness (a premise I remain skeptical of), we can still steer by purely consequentialist views that do not entail cluelessness and that can ground a focus on effective suffering reduction.

In this post, I will outline an alternative approach centered on virtues. I argue that even if we reject or find no guidance in any consequentialist view, we can still plausibly adopt a virtue-based approach to reducing suffering, including effective suffering reduction. Such an approach can help guide us independently of consequentialist uncertainty.


Contents

  1. What would a virtue-based approach entail?
  2. Justifications for a virtue-based approach
  3. A virtue-based approach to effective suffering reduction
  4. Conclusion

What would a virtue-based approach entail?

It can be difficult to say exactly what a virtue-based approach to reducing suffering would entail. Indeed, an absence of clear and simple rules, and an emphasis on responding wisely in conditions of ambiguity based on good practical judgment, are typical features of virtue-based approaches in ethics.

That said, in the broadest terms, a virtue-based approach to suffering involves having morally appropriate attitudes, sentiments, thoughts, and behaviors toward suffering. It involves relating to suffering in the way that a morally virtuous person would relate to it.

Perhaps more straightforwardly, we can say what a virtue-based approach would definitely not involve. For example, it would obviously not involve extreme vices like sadism or cruelty, nor would it involve more common yet still serious vices like being indifferent or passive in the face of suffering.

However, a virtue-based approach would not merely involve the morally unambitious aim of avoiding serious vices. It would usually be much more ambitious than that, encouraging us to aim for moral excellence across all aspects of our character — having deep sympathy and compassion, striving to be proactively helpful, having high integrity, and so on.

In this way, a virtue-based approach may invert an intuitive assumption about the implications of cluelessness. That is, rather than seeing cluelessness as a devastating consideration that potentially opens the floodgates to immoral or insensitive behavior, we can instead see it as paving the way for a focus on moral excellence. After all, if no consequentialist reasons count against a strong focus on moral excellence under assumed cluelessness, then arguably the strongest objections against such a focus fall away. As a result, we might no longer have any plausible reason not to pursue moral excellence in our character and conduct. At a minimum, we would no longer have any convenient consequentialist-framed rationalizations for our vices.

Sure, we could retreat to simply being insensitive and disengaged in the face of suffering — or even retreat to much worse vices — but I will argue that those options are less plausible.

Justifications for a virtue-based approach

There are various possible justifications for the approach outlined above. For example, one justification might be that having excellent moral character simply reflects the kind of person we ideally want to be. For some of us, such a personal desire might in itself be a sufficient reason for adopting a virtue-based approach in some form.

Complementary justifications may derive from our moral intuitions. For instance, all else equal, we might find it intuitive that it is morally preferable to embody excellent moral character than to embody serious vices, or that it is more ethical to display basic moral virtues than to lack such virtues (see also Knutsson, 2023, sec. 7.4). (Note that this differs from the justification above in that we need not personally want to be virtuous in order to have the intuition that it is more ethical to be that way.)

We may also find some justification in contractualist considerations or considerations about what kind of society we would like to live in. For example, we may ideally want to live in a society in which people adhere to virtues of compassion and care for suffering, as well as virtues of effectiveness in reducing suffering (more on this in the next section). Under contractualist-style moral frameworks, favoring such a society would in turn give us moral reason to adhere to those virtues ourselves.

A virtue-based approach might likewise find support if we consider specific cases. For example, imagine that you are a powerful war general whose soldiers are committing heinous atrocities that you have the power to stop — with senseless torture occurring on a large scale that you can halt immediately. And imagine that, given your subjective beliefs, your otherwise favored moral views all fail to give any guidance in this situation (e.g. due to uncertainty about long-term consequences). In contrast, ending the torture would obviously be endorsed by any commonsense virtue-based stance, since that is simply what a virtuous, compassionate person would do regardless of long-term uncertainty. If we agree that ending the torture is the morally right response in a case like this, then this arguably lends some support to such a virtue-based stance (as well as to other moral stances that imply the same response).

In general terms, we may endorse a virtue-based approach partly because it provides an additional moral safety net that we can fall back on when other approaches fail. That is, even if we find it most plausible to rely on other views when these provide practical recommendations, we might still find it reasonable to rely on virtue-based approaches in case those other views fall silent. Having virtue ethics as such a supportive layer can help strengthen our foundation and robustness as moral agents.

(One could also attempt to justify a virtue-based approach by appealing to consequentialist reasoning. Indeed, it could be that promoting a non-consequentialist virtue-based stance would ultimately create better consequences than not doing so. For example, the absence of such a virtue-based stance might increase the risk of extremely harmful behavior among moral agents. However, such arguments would involve premises that are not the focus of this post.)

A virtue-based approach to effective suffering reduction

One might wonder whether a virtue-based approach can ground effective suffering reduction of any kind. That is, can a virtue-based approach ground systematic efforts to reduce suffering effectively with our limited resources? In short, yes. If one deems it virtuous to try to reduce suffering in systematic and effective ways (at least in certain decisions or domains), then a virtue-based approach could provide a moral foundation for such efforts.

For instance, if given a choice between saving 10 versus 1,000 chickens from being boiled alive, we may consider it more virtuous — more compassionate and principled — to save the 1,000, even if we had no idea whether that choice ultimately reduces more suffering across all time or across all consequences that we could potentially assess.

To take a more realistic example: in a choice between donating either to a random charity or to a charity with a strong track record of preventing suffering, we might consider it more virtuous to support the latter, even if we do not know the ultimate consequences.

How would such a virtue-based approach be different from a consequentialist approach? Broadly speaking, there can be two kinds of differences. First, a virtue-based approach might differ from a consequentialist one in terms of its practical implications. For instance, in the donation example above, a virtue-based approach might recommend that we donate to the charity with a track record of suffering prevention even if we are unable to say whether it reduces suffering across all time or across all consequences that we could potentially assess.

Second, even if a virtue-based view had all the same practical implications as some consequentialist view, there would still be a difference in the underlying normative grounding or basis of these respective views. The consequentialist view would be grounded purely in the value of consequences, whereas the virtue-based view would not be grounded purely in that (even if the disvalue of suffering may generally be regarded as the most important consideration). Instead, the virtue-based approach would (also) be grounded at least partly in the kind of person it is morally appropriate to be — the kind of person who embodies a principled and judicious compassion, among other virtues (see e.g. the opening summary in Hursthouse & Pettigrove, 2003).

In short, virtue-based views represent a distinctive way in which some version of effective suffering reduction can be grounded.

Conclusion

There are many possible moral foundations for reducing suffering (see e.g. Vinding, 2020, ch. 6; Knutsson & Vinding, 2024, sec. 2). Even if we find one particular foundation to be most plausible by far, we are not forced to rest absolutely everything on such a singular and potentially brittle basis. Instead, we can adopt many complementary foundations and approaches, including an approach centered on excellent moral character that can guide us when other frameworks might fail. I think that is a wiser approach.

Reducing suffering given long-term cluelessness

An objection against trying to reduce suffering is that we cannot predict whether our actions will reduce or increase suffering in the long term. Relatedly, some have argued that we are clueless about the effects that any realistic action would have on total welfare, and this cluelessness, it has been claimed, undermines our reason to help others in effective ways. For example, DiGiovanni (2025) writes: “if my arguments [about cluelessness] hold up, our reason to work on EA causes is undermined.”

There is a grain of truth in these claims: we face enormous uncertainty when trying to reduce suffering on a large scale. Of course, whether we are bound to be completely clueless about the net effects of any action is a much stronger and more controversial claim (and one that I am not convinced of). Yet my goal here is not to discuss the plausibility of this claim. Rather, my goal is to explore the implications if we assume that we are bound to be clueless about whether any given action overall reduces or increases suffering.

In other words, without taking a position on the conditional premise, what would be the practical implications if such cluelessness were unavoidable? Specifically, would this undermine the project of reducing suffering in effective ways? I will argue not. Even if we grant complete cluelessness and thus grant that certain moral views provide no practical recommendations, we can still reasonably give non-zero weight to other moral views that do provide practical recommendations. Indeed, we can find meaningful practical recommendations even if we hold a purely consequentialist view that is exclusively concerned with reducing suffering.


Contents

  1. A potential approach: Giving weight to scope-adjusted views
  2. Asymmetry in practical recommendations
  3. Toy models
  4. Justifications and motivations
    1. Why give weight to multiple views?
    2. Why give weight to a scope-adjusted view?
  5. Arguments I have not made
  6. Conclusion
  7. Acknowledgments

A potential approach: Giving weight to scope-adjusted views

There might be many ways to ground a reasonable focus on effective suffering reduction even if we assume complete cluelessness about long-term consequences. Here, I will merely outline one candidate option, or class of options, that strikes me as fairly reasonable.

As a way to introduce this approach, say that we fully accept consequentialism in some form (notwithstanding various arguments against being a pure consequentialist, e.g. Knutsson, 2023; Vinding, 2023). Yet despite being fully convinced of consequentialism, we are uncertain or divided about which version of consequentialism is most plausible.

In particular, while we give most weight to forms of consequentialism that entail no restrictions or discounts in their scope, we also give some weight to views that entail a more focused scope. (Note that this kind of approach need not be framed in terms of moral uncertainty, which is just one possible way to frame it. An alternative is to think in terms of degrees of acceptance or levels of agreement with these respective views, cf. Knutsson, 2023, sec. 6.6.)

To illustrate with some specific numbers, say that we give 95 percent credence to consequentialism without scope limitations or adjustments of any kind, and 5 percent credence to some form of scope-adjusted consequentialism. The latter view may be construed such that its scope roughly includes those consequences we can realistically estimate and influence without being clueless. This view is similar to what has been called “reasonable consequentialism”, the view that “an action is morally right if and only if it has the best reasonably expected consequences.” It is also similar to versions of consequentialism that are framed in terms of foreseeable or reasonably foreseeable consequences (Sinnott-Armstrong, 2003, sec. 4).

To be clear, the approach I am exploring here is not committed to any particular scope-adjusted view. The deeper point is simply that we can give non-zero weight to one or more scope-adjusted versions of consequentialism, or to scope-adjusted consequentialist components of a broader moral view. Exploring which scope-adjusted view or views might be most plausible is beyond the aims of this essay, and that question arguably warrants deeper exploration.

That being said, I will mostly focus on views centered on (something like) consequences we can realistically assess and be guided by, since something in this ballpark seems like a relatively plausible candidate for scope-adjustment. I acknowledge that there are significant challenges in clarifying the exact nature of this scope, which is likely to remain an open problem subject to continual refinement. After all, the scope of assessable consequences may grow as our knowledge and predictive power grow.

Asymmetry in practical recommendations

The relevance of the approach outlined above becomes apparent when we evaluate the practical recommendations of the clueless versus non-clueless views incorporated in this approach. A completely clueless consequentialist view would give us no recommendations about how to act, whereas a non-clueless scope-adjusted view would give us practical recommendations. (It would do so by construction if its scope includes those consequences we can realistically estimate and influence without being clueless.)

In other words, the resulting matrix of recommendations from those respective views is that the non-clueless view gives us substantive guidance, while the clueless view suggests no alternative and hence has nothing to add to those recommendations. Thus, if we hold something like the 95/5 combined consequentialist view described above — or indeed any non-zero split between these component views — it seems that we have reason to follow the non-clueless view, all things considered.
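
To make this asymmetry concrete, here is a minimal sketch in code. The weights and view functions are purely hypothetical illustrations of the 95/5 split described above; the structural point is simply that a silent view recommends no alternative, so any non-zero weight on a view that does give a recommendation settles the choice.

```python
def combined_choice(weighted_views, options):
    """Aggregate recommendations from weighted moral views.

    A clueless view (one that returns None) abstains: it recommends no
    alternative, so it cannot outweigh a view that does recommend something.
    """
    scores = {option: 0.0 for option in options}
    for weight, view in weighted_views:
        recommendation = view(options)
        if recommendation is None:  # the clueless view falls silent
            continue
        scores[recommendation] += weight
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None  # None if every view is silent

# Hypothetical component views, mirroring the 95/5 split described above.
clueless_view = lambda options: None  # no guidance, regardless of its weight
scope_adjusted_view = lambda options: "charity with a track record"

views = [(0.95, clueless_view), (0.05, scope_adjusted_view)]
print(combined_choice(views, ["charity with a track record", "random charity"]))
# -> "charity with a track record"
```

Replacing 0.05 with any positive weight leaves the output unchanged, which is the sense in which the argument only requires non-zero weight on the non-clueless view.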

Toy models

To give a sense of what a scope-adjusted view might look like, we can consider a toy model with an exponential discount factor and an (otherwise) expected linear increase in population size. In this model, 99 percent of the total expected value we can influence falls within the next 700 years, implying that almost all the value we can meaningfully influence is found within that horizon.

We can also consider a model with a different discount factor and with cubic growth, reflecting the possibility of space expansion radiating from Earth. On this model, virtually all the expected value we can meaningfully influence is found within the next 10,000 years. In both of the models above, we end up with a sort of de facto “medium-termism”.
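
To make these toy models concrete, here is a minimal numerical sketch. The discount rates and growth parameters are illustrative assumptions on my part, not the parameter values behind the plots in the original post; the point is only that, for parameters in this ballpark, nearly all of the discounted expected value falls within a few hundred to a few thousand years.

```python
import numpy as np

def value_share_within(horizon, discount_rate, population, t_max=100_000.0):
    """Fraction of total discounted expected value lying within `horizon` years.

    The weight placed on consequences at time t is e^(-r*t) * population(t).
    The specific parameter values used below are illustrative assumptions.
    """
    t = np.arange(0.0, t_max, 1.0)  # one-year time steps
    density = np.exp(-discount_rate * t) * population(t)
    return density[t <= horizon].sum() / density.sum()

# Model 1: exponential discounting with linear population growth.
linear_growth = lambda t: 1.0 + 0.01 * t
print(value_share_within(700, discount_rate=0.01, population=linear_growth))
# ~0.99: almost all influenceable value falls within the next ~700 years

# Model 2: weaker discounting with cubic growth (space expansion from Earth).
cubic_growth = lambda t: 1.0 + (t / 1_000.0) ** 3
print(value_share_within(10_000, discount_rate=0.002, population=cubic_growth))
# ~1.0: virtually all influenceable value falls within ~10,000 years
```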

Of course, one can vary the parameters in numerous ways and combine multiple models in ways that reflect more sophisticated views of, for example, expected future populations and discount factors. Views that involve temporal discounting allow for much greater variation than what is captured by the toy models above, including views that focus on much shorter or much longer timescales. Moreover, views that involve discounting need not be limited to temporal discounting in particular, or even be phrased in terms of temporal discounting at all. It is one way to incorporate discounting or scope-adjustments, but by no means the only one. 

Furthermore, if we give some plausibility to views that involve discounting of some kind, we need not be committed to a single view for every single domain. We may hold that the best view, or the view we give the greatest weight, will vary depending on the issue at hand (cf. Dancy, 2001; Knutsson, 2023, sec. 3). A reason for such variability may be that the scope of outcomes we can meaningfully predict often differs significantly across domains. For example, there is a stark difference in the predictability of weather systems versus planetary orbits, and similar differences in predictability might be found across various practical and policy-relevant domains.

Note also that a non-clueless scope-adjusted view need not be rigorously formalized; it could, for example, be phrased in terms of our all-things-considered assessments, which might be informed by myriad formal models, intuitions, considerations, and so on.

Justifications and motivations

What might justify or motivate the basic approach outlined above? This question can be broken into two sub-questions. First, why give weight to more than just a single moral view? Second, provided we give some weight to more than a single view, why give any weight to a scope-adjusted view concerned with consequences?

Why give weight to multiple views?

Reasons for giving weight to more than a single moral view or theory have been explored elsewhere (see e.g. Dancy, 2001; MacAskill et al., 2020, ch. 1; Knutsson, 2023; Vinding, 2023).

One of the reasons that have been given is that no single moral theory seems able to give satisfying answers to all moral questions (Dancy, 2001; Knutsson, 2023). And even if our preferred moral theory appears to be a plausible candidate for answering all moral questions, it is arguably still appropriate to have less than perfect confidence or acceptance in that theory (MacAskill et al., 2020, ch. 1; Vinding, 2023). Such moderation might be grounded in epistemic modesty and humility, a general skepticism toward fanaticism, and the prudence of diversifying one’s bets. It might also be grounded partly in the observation that other thoughtful people hold different moral views and that there is something to be said in favor of those views.

Likewise, giving exclusive weight to a single moral view might make us practically indifferent or paralyzed, whether it be due to cluelessness or due to underspecification as to what our preferred moral theory implies in some real-world situation. Critically, such practical indifference and paralysis may arise even in the face of the most extreme atrocities. If we find this to be an unreasonable practical implication, we arguably have reason not to give exclusive weight to a moral view that potentially implies such paralysis.

Finally, from a perspective that involves degrees of acceptance or agreement with moral views, a reason for giving weight to multiple views might simply be that those moral views each seem intuitively plausible or that we intuitively agree with them to some extent (cf. Knutsson, 2023, sec. 6.6).

Why give weight to a scope-adjusted view?

What reasons could be given for assigning weight to a scope-adjusted view in particular? One reason may be that it seems reasonable to be concerned with consequences to the extent that we can realistically estimate and be guided by them. That is arguably a sensible and intuitive scope for concern about consequences — or at least it appears sensible to some non-zero degree. If we hold this intuition, even if just to a small degree, it seems reasonable to have a final view in which we give some weight to a view focused on realistically assessable consequences (whatever the scope of those consequences ultimately turns out to be).

Some support may also be found in our moral assessments and stances toward local cases of suffering. For example, if we were confronted with an emergency situation in which some individuals were experiencing intense suffering in our immediate vicinity, and if we were readily able to alleviate this suffering, it would seem morally right to help these beings even if we cannot foresee the long-run consequences. (All theoretical and abstract talk aside, I suspect the vast majority of consequentialists would agree with that position in practice.)

Presumably, at least part of what would make such an intervention morally right is the badness of the suffering that we prevent by intervening. And if we hold that it is morally appropriate to intervene to reduce suffering in cases where we can immediately predict the consequences of doing so — namely that we alleviate the suffering right in front of us — it seems plausible to hold that this stance also generalizes to consequences that are less immediate. In other words, if this stance is sound in cases of immediate suffering prevention — or even if it just has some degree of soundness in such cases — it plausibly also has some degree of soundness when it comes to suffering prevention within a broader range of consequences that we can meaningfully estimate and influence.

This is also in line with the view that we have (at least somewhat) greater moral responsibility toward that which occurs within our local sphere of assessable influence. This view is related to, and may be justified in terms of, the “ought implies can” principle. After all, if we are bound to be clueless and unable to deliberately influence very long-run consequences, then, if we accept some version of the “ought implies can” principle, it seems that we cannot have any moral responsibility or moral duties to deliberately shape those long-run consequences — or at least such moral responsibility is plausibly diminished. In contrast, the “ought implies can” principle is perfectly consistent with moral responsibility within the scope of consequences that we realistically can estimate and deliberately influence in a meaningful way.

Thus, if we give some weight to an “ought implies can” conception of moral responsibility, this would seem to support the idea that we have (at least somewhat) greater moral responsibility toward that which occurs within our sphere of assessable influence. An alternative way to phrase it might be to say that our sphere of assessable influence is a special part of the universe for us, in that we are uniquely positioned to predict and steer events in that part compared to elsewhere, and this arguably gives us a (somewhat) special moral responsibility toward that part of the universe.

Another potential reason to give some weight to views centered on realistically assessable consequences, or more generally to views that entail discounting in some form, is that other sensible people endorse such views based on reasons that seem defensible to some degree. For example, it is common for economists to endorse temporal discounting, not just in descriptive models but also in prescriptive or normative ones (see e.g. Arrow et al., 1996). The justifications for such discounting might be that our level of moral concern should be adjusted for uncertainty about whether there will be any future, uncertainty about our ability to deliberately influence the future, and the possibility that the future will be better able to take care of itself and its problems (relative to earlier problems that we could prioritize instead).

One might object that such reasons for discounting should be incorporated at a purely empirical level, without any discounting at the moral level, and I would largely agree with that sentiment. (Note that when applied at a strictly empirical or practical level, those reasons and adjustments are candidate ways to avoid paralysis without any discounting at the moral level.)

Yet even if we think such considerations should mostly or almost exclusively be applied at the empirical level, it might still be defensible to also invoke them to justify some measure of discounting directly at the level of one’s moral view and moral concerns, or at least as a tiny sub-component within one’s broader moral view. In other words, it might be defensible to allow empirical considerations of the kind listed above to inform and influence our fundamental moral values, at least to a small degree.

To be clear, it is not just some selection of economists who endorse normative discounting or scope-adjustment in some form. As noted above, it is also found among those who endorse “reasonable consequentialism” and consequentialism framed in terms of foreseeable consequences. And similar views can be found among people who seek to reduce suffering.

For example, Brian Tomasik has long endorsed a kind of split between reducing suffering effectively in the near term versus reducing suffering effectively across all time. In particular, regarding altruistic efforts and donations, he writes that “splitting is rational if you have more than one utility function”, and he devotes at least 40 percent of his resources toward short-term efforts to reduce suffering (Tomasik, 2015). Jesse Clifton seems to partially endorse a similar approach focused on reasons that we can realistically weigh up — an approach that in his view “probably implies restricting attention to near-term consequences” (see also Clifton, 2025). The views endorsed by Tomasik and Clifton explicitly give some degree of special weight to near-term or realistically assessable consequences, and these views and the judgments underlying them seem fairly defensible.

Lastly, it is worth emphasizing just how weak a claim we are considering here. In particular, in the framework outlined above, all that is required for the simple practical asymmetry argument to go through is that we give any non-zero weight to a non-clueless view focused on realistically assessable consequences, or some other non-clueless view centered on consequences.

That is, we are not talking about accepting this as the most plausible view, or even as a moderately plausible view. Its role in the practical framework above is more that of a humble tiebreaker — a view that we can consult as an nth-best option if other views fail to give us guidance and if we give this kind of view just the slightest weight. And the totality of reasons listed here arguably justifies granting it at least a tiny degree of plausibility or acceptance.

Arguments I have not made

One could argue that something akin to the approach outlined here would also be optimal for reducing suffering in expectation across all space and time. In particular, one could argue that such an unrestricted moral aim would in practice imply a focus on realistically assessable consequences. I am open to that argument — after all, it is difficult to see what else the recommended focus could be, to the extent there is one.

For similar reasons, one could argue that a practical focus on realistically assessable consequences represents a uniquely safe and reasonable bet from a consequentialist perspective: it is arguably the most plausible candidate for what a consequentialist view would recommend as a practical focus in any case, whether scope-adjusted or not. Thus, from our position of deep uncertainty — including uncertainty about whether we are bound to be clueless — it arguably makes convergent sense to try to estimate the furthest depths of assessable consequences and to seek to act on those estimates, at least to the extent that we are concerned with consequences.

Yet it is worth being clear that the argument I have made here does not rely on any of these claims or arguments. Indeed, it does not rely on any claims about what is optimal for reducing suffering across all space and time.

As suggested above, the conditional claim I have argued for here is ultimately a very weak one about giving minimal weight to what seems like a fairly moderate and in some ways commonsensical moral view or idea (e.g. it seems fairly commonsensical to be concerned with consequences to the extent that we can realistically estimate and be guided by them). The core argument presented in this essay does not require us to accept any controversial empirical positions.

Conclusion

For some of our problems, perhaps the best we can do is to find “second-best solutions” — that is, solutions that do not satisfy all our preferred criteria, yet which are nevertheless better than any other realistic solution. This may also be true when it comes to reducing suffering in a potentially infinite universe. We might be in an unpredictable sea of infinite consequences that ripple outward forever (Schwitzgebel, 2024). But even if we are, this need not prevent us from trying to reduce suffering in effective and sensible ways within a realistic scope. After all, compared to simply giving up on trying to reduce suffering, it seems less arbitrary and more plausible to at least try to reduce suffering within the domain of consequences we can realistically assess and be guided by.

Acknowledgments

Thanks to Tobias Baumann, Jesse Clifton, and Simon Knutsson for helpful comments.

Thoughts on AI pause

Whether to push for an AI pause is a hotly debated question. This post contains some of my thoughts on the issue and on the discourse that surrounds it.


Contents

  1. The motivation for an AI pause
  2. My thoughts on AI pause, in brief
  3. My thoughts on AI pause discourse
  4. Massive moral urgency: Yes, in both categories of worst-case risks

The motivation for an AI pause

Generally speaking, it seems that the primary motivation behind pushing for an AI pause is that work on AI safety is far from where it needs to be for humanity to maintain control of future AI progress. Therefore, a pause is needed so that work on AI safety — and other related work, such as AI governance — can catch up with the pace of progress in AI capabilities.

My thoughts on AI pause, in brief

Whether it is worth pushing for an AI pause obviously depends on various factors. For one, it depends on the opportunity cost: what could we be doing otherwise? After all, even if one thinks that an AI pause is desirable, one might still have reservations about its tractability compared to other aims. And even if one thinks that an AI pause is both desirable and tractable, there might still be other aims and activities that are even more beneficial (in expectation), such as working on worst-case AI safety (Gloor, 2016; Yudkowsky, 2017; Baumann, 2018), or increasing the priority that people devote to reducing risks of astronomical suffering (s-risks) (Althaus & Gloor, 2016; Baumann, 2017, 2022; DiGiovanni, 2021).

Furthermore, there is the question of whether an AI pause would even be beneficial in the first place. This is a complicated question, and I will not explore it in detail here. (For a critical take, see “AI Pause Will Likely Backfire” by Nora Belrose.) Suffice it to say that, in my view, it seems highly uncertain whether any realistic AI pause would be beneficial overall — not just from a suffering-focused perspective, but from the perspective of virtually all impartial value systems. It seems to me that most advocates for AI pause are quite overconfident on this issue.

But to clarify, I am by no means opposed to advocating for an AI pause. It strikes me as something that one can reasonably conclude is helpful and worth doing (depending on one’s values and empirical judgement calls). But my current assessment is just that it is unlikely to be among the best ways to reduce future suffering, mainly because I view the alternative activities outlined above as being more promising, and because I suspect that most realistic AI pauses are unlikely to be clearly beneficial overall.

My thoughts on AI pause discourse

A related critical observation about much of the discourse around AI pause is that it tends toward a simplistic “doom vs. non-doom” dichotomy. That is, the picture that is conveyed seems to be that either humanity loses control of AI and goes extinct, which is bad; or humanity maintains control, which is good. And your probability of the former is your “p(doom)”.

Of course, one may argue that for strategic and communication purposes, it makes sense to simplify things and speak in such dichotomous terms. Yet the problem, in my view, is that this kind of picture is not accurate even to a first approximation. From an altruistic perspective, it is not remotely the case that “loss of control to AI” = “bad”, while “humans maintaining control” = “good”.

For example, if we are concerned with the reduction of s-risks (which is important by the lights of virtually all impartial value systems), we must compare the relative risks of “loss of control to AI” with the risks of “humans maintaining control” — however we define these rough categories. And sadly, it is not the case that “humans maintaining control” is associated with a negligible or trivial risk of worst-case outcomes. Indeed, it is not clear whether “humans maintaining control” is generally associated with better or worse prospects than “loss of control to AI” when it comes to s-risks.

In general, the question of whether a “human-controlled future” is better or worse with respect to reducing future suffering is a difficult one that has been discussed and debated at some length, and no clear consensus has emerged. As a case in point, Brian Tomasik places a 52 percent subjective probability on the claim that “Human-controlled AGI in expectation would result in less suffering than uncontrolled”.

This near-50/50 view stands in stark contrast to what often seems assumed as a core premise in much of the discourse surrounding AI pause, namely that a human-controlled future would obviously be far better (in expectation).

(Some reasons why one might be pessimistic regarding human-controlled futures can be found in the literature on human moral failings; see e.g. Cooper, 2018; Huemer, 2019; Kidd, 2020; Svoboda, 2022. Other reasons include basic competitive aims and dynamics that are likely to be found in a wide range of futures, including human-controlled ones; see e.g. Tomasik, 2013; Knutsson, 2022, sec. 3. See also Vinding, 2022.)

Massive moral urgency: Yes, in both categories of worst-case risks

There is a key point on which I agree strongly with advocates for an AI pause: there is a massive moral urgency in ensuring that we do not end up with horrific AI-controlled outcomes. Too few people appreciate this insight, and even fewer seem to be deeply moved by it.

At the same time, I think there is a similarly massive urgency in ensuring that we do not end up with horrific human-controlled outcomes. And humanity’s current trajectory is unfortunately not all that reassuring with respect to either of these broad classes of risks. (To be clear, this is not to say that an s-risk outcome is the most likely outcome in any of these two classes of future scenarios, but merely that the current trajectory looks highly suboptimal and concerning with respect to both of them.)

The upshot for me is that there is a roughly equal moral urgency in avoiding each of these categories of worst-case risks, and as hinted earlier, it seems doubtful to me that pushing for an AI pause is the best way to reduce these risks overall.

Reasons to doubt that suffering is ontologically prevalent

It is sometimes claimed that we cannot know whether suffering is ontologically prevalent — for example, we cannot rule out that suffering might exist in microorganisms such as bacteria, or even in the simplest physical processes. Relatedly, it has been argued that we cannot trust common-sense views and intuitions regarding the physical basis of suffering.

I agree with the spirit of these arguments, in that I think it is true that we cannot definitively rule out that suffering might exist in bacteria or fundamental physics, and I agree that we have good reasons to doubt common-sense intuitions about the nature of suffering. Nevertheless, I think discussions of expansive views of the ontological prevalence of suffering often present a somewhat unbalanced and, in my view, overly agnostic view of the physical basis of suffering. (By “expansive views”, I do not refer to views that hold that, say, insects are sentient, but rather views that hold that suffering exists in considerably simpler systems, such as in bacteria or fundamental physics.)

While we cannot definitively rule out that suffering might be ontologically prevalent, I do think that we have strong reasons to doubt it, as well as to doubt the practical importance of this possibility. My goal in this post is to present some of these reasons.


Contents

  1. Counterexamples: People who do not experience pain or suffering
  2. Our emerging understanding of pain and suffering
  3. Practical relevance

Counterexamples: People who do not experience pain or suffering

One argument against the notion that suffering is ontologically prevalent is that we seem to have counterexamples in people who do not experience pain or suffering. For example, various genetic conditions seemingly lead to a complete absence of pain and/or suffering. This, I submit, has significant implications for our views of the ontological prevalence (or non-prevalence) of suffering.

After all, the brains of these individuals include countless subatomic particles, basic biological processes, diverse instances of information processing, and so on, suggesting that none of these are in themselves sufficient to generate pain or suffering.

One might object that the brains of such people could be experiencing suffering — perhaps even intense suffering — that these people are just not able to consciously access. Yet even if we were to grant this claim, it does not change the basic argument that generic processes at the level of subatomic particles, basic biology, etc. do not seem sufficient to create suffering. For the processes that these people do consciously access presumably still entail at least some (indeed probably countless) subatomic particles, basic biological processes, electrochemical signals, different types of biological cells, diverse instances of information processing, and so on. This gives us reason to doubt all views that see suffering as an inherent or generic feature of processes at any of these (quite many) respective levels.

Of course, this argument is not limited to people who are congenitally unable to experience suffering; it applies to anyone who is just momentarily free from noticeable — let alone significant — pain or suffering. Any experiential moment that is free from significant suffering is meaningful evidence against highly expansive views of the ontological prevalence of significant suffering.

Our emerging understanding of pain and suffering

Another argument against expansive views of the prevalence of suffering is that our modern understanding of the biology of suffering gives us reason to doubt such views. That is, we have gained an increasingly refined understanding of the evolutionary, genetic, and neurobiological bases of pain and suffering, and the picture that emerges is that suffering is a complex phenomenon associated with specific genes and neural structures (as exemplified by the above-mentioned genetic conditions that knock out pain and/or suffering).

To be sure, the fact that suffering is associated with specific genes and neural structures in animals does not imply that suffering cannot be created in other ways in other systems. It does, however, suggest that suffering is unlikely to be found in simple systems that do not have remote analogues of these specific structures (since we should otherwise expect suffering to be associated with a much wider range of structures and processes, not such an intricate and narrowly delineated set).

By analogy, consider the experience of wanting to go to a Taylor Swift concert so as to share the event with your Instagram followers. Do we have reason to believe that fundamental particles such as electrons, or microorganisms such as bacteria, might have such experiences? To go a step further, do we have reason to be agnostic as to whether electrons or bacteria might have such experiences?

These questions may seem too silly to merit contemplation. After all, we know that having a conscious desire to go to a concert for the purpose of online sharing requires rather advanced cognitive abilities that, at least in our case, are associated with extremely complex structures in the brain — not to mention that it requires an understanding of a larger cultural context that is far removed from the everyday concerns of electrons and bacteria. But the question is why we would see the case of suffering as being so different.

Of course, one might object that this is a bad analogy, since the experience described above is far more narrowly specified than is suffering as a general class of experience. I would agree that the experience described above is far more specific and unusual, but I still think the basic point of the analogy holds, in that our understanding is that suffering likewise rests on rather complex and specific structures (when it occurs in animal brains) — we might just not intuitively appreciate how complex and distinctive these structures are in the case of suffering, as opposed to in the Swift experience.

It seems inconsistent to allow ourselves to apply our deeper understanding of the Swift experience to strongly downgrade our credence in electron- or bacteria-level Swift experiences, while not allowing our deeper understanding of pain and suffering to strongly downgrade our credence in electron- or bacteria-level pain and suffering, even if the latter downgrade should be comparatively weaker (given the lower level of specificity of this broader class of experiences).

Practical relevance

It is worth stressing that, in the context of our priorities, the question is not whether we can rule out suffering in simple systems like electrons or bacteria. Rather, the question is whether the all-things-considered probability and weight of such hypothetical suffering is sufficiently large for it to merit any meaningful priority relative to other forms of suffering.

For example, one may hold a lexical view according to which no amount of putative “micro-discomfort” that we might ascribe to electrons or bacteria can ever be collectively worse than a single instance of extreme suffering. Likewise, even if one does not hold a strictly lexical view in theory, one might still hold that the probability of suffering in simple systems is so low that, relative to the expected prevalence of other kinds of suffering, it is so strongly dominated so as to merit practically no priority by comparison (cf. “Lexical priority to extreme suffering — in practice”).

After all, the risk of suffering in simple systems would not only have to be held up against the suffering of all currently existing animals on Earth, but also against the risk of worst-case outcomes that involve astronomical numbers of overtly tormented beings. In this broader perspective, it seems reasonable to believe that the risk of suffering in simple systems is massively dwarfed by the risk of such astronomical worst-case outcomes, partly because the latter risk seems considerably less speculative, and because it seems far more likely to involve the worst instances of suffering.

Relatedly, just as we should be open to considering the possibility of suffering in simple systems such as bacteria, it seems that we should also be open to the possibility that spending a lot of time contemplating this issue — and not least trying to raise concern for it — might be an enormous opportunity cost that will overall increase extreme suffering in the future (e.g. because it distracts people from more important issues, or because it pushes people toward dismissing suffering reducers as absurd or crazy).

To be clear, I am not saying that contemplating this issue in fact is such an opportunity cost. My point is simply that it is important not to treat highly speculative possibilities in a manner that is too one-sided, such that we make one speculative possibility disproportionately salient (e.g. there might be a lot of suffering in microorganisms or in fundamental physics), while neglecting to consider other speculative possibilities that may in some sense “balance out” the former (e.g. that prioritizing the risk of suffering in simple systems significantly increases extreme suffering).

In more general terms, it can be misleading to consider Pascalian wagers if we do not also consider their respective “counter-Pascalian” wagers. For example, what if believing in God actually increases the overall probability of you experiencing eternal suffering, such as by marginally increasing the probability that future people will create infinite universes that contain infinitely many versions of you that get tortured for life?

In this way, our view of Pascal’s wager may change drastically when we go beyond its original one-sided framing and consider a broader range of possibilities, and the same applies to Pascalian wagers relating to the purported suffering of simple entities like bacteria or electrons. When we consider a broader range of speculative hypotheses, it is hardly clear whether we should overall give more or less consideration to such simple entities than we currently do, at least when compared to how much consideration and priority we give to other forms of suffering.

Does digital or “traditional” sentience dominate in expectation?

My aim in this post is to critique two opposite positions that I think are both mistaken, or which at least tend to be endorsed with too much confidence.

The first position is that the vast majority of future sentient beings will, in expectation, be digital, meaning that they will be “implemented” in digital computers.

The second position is in some sense a rejection of the first one. Based on a skepticism of the possibility of digital sentience, this position holds that future sentience will not be artificial, but instead be “traditionally” biological — that is, most future sentient beings will, in expectation, be biological beings roughly as we know them today.

I think the main problem with this dichotomy of positions is that it leaves out a reasonable third option, which is that most future beings will be artificial but not necessarily digital.


Contents

  1. Reasons to doubt that digital sentience dominates in expectation
  2. Reasons to doubt that “traditional” biological sentience dominates in expectation
  3. Why does this matter?

Reasons to doubt that digital sentience dominates in expectation

One can roughly identify two classes of reasons to doubt that most future sentient beings will be digital.

First, there are object-level arguments against the possibility of digital sentience. For example, based on his physicalist view of consciousness, David Pearce argues that the discrete and disconnected bits of a digital computer cannot, if they remain discrete and disconnected, join together into a unified state of sentience. They can at most, Pearce argues, be “micro-experiential pixels”.

Second, regardless of whether one believes in the possibility of digital sentience, the future dominance of digital sentience can be doubted on the grounds that it is a fairly strong and specific claim. After all, even if digital sentience is perfectly possible, it by no means follows that future sentient beings will necessarily converge toward being digital.

In other words, the digital dominance position makes strong assumptions about the most prevalent forms of sentient computation in the future, and it seems that there is a fairly large space of possibilities that does not imply digital dominance, such as (a future predominance of) non-digital neuron-based computers, non-digital neuron-inspired computers, and various kinds of quantum computers that have yet to be invented.

When one takes these arguments into account, it at least seems quite uncertain whether digital sentience dominates in expectation, even if we grant that artificial sentience does.

Reasons to doubt that “traditional” biological sentience dominates in expectation

A reason to doubt that “traditional” sentience dominates is that, whatever one’s theory of sentience, it seems likely that sentience can be created artificially — i.e. in a way that we would deem artificial. (An example might be further developed and engineered versions of brain organoids.) Specifically, regardless of which physical processes or mechanisms we take to be critical to sentience, those processes or mechanisms can most likely be replicated in systems other than just live biological animals as we know them.

If we combine this premise with an assumption of continued technological evolution (which likely holds true in the future scenarios that contain the largest numbers of sentient beings), it overall seems doubtful that the majority of future beings will, in expectation, be “traditional” biological organisms — especially when we consider the prospect of large futures that involve space colonization.

More broadly, we have reason to doubt the “traditional” biological dominance position for the same reason that we have reason to doubt the digital dominance position, namely that the position entails a rather strong and specific claim along the lines of: “this particular class of sentient being is most numerous in expectation”. And, as in the case of digital dominance, it seems that there are many plausible ways in which this could turn out to be wrong, such as due to neuron-inspired or other yet-to-be-invented artificial systems that could become both sentient and prevalent.

Why does this matter?

Whether artificial sentience dominates in expectation plausibly matters for our priorities (though it is unclear how much exactly, since some of our most robust strategies for reducing suffering are probably worth pursuing in roughly the same form regardless). Yet those who take artificial sentience seriously might adopt suboptimal priorities and communication strategies if they primarily focus on digital sentience in particular.

At the level of priorities, they might restrict their focus to an overly narrow set of potentially sentient systems, and perhaps neglect the great majority of future suffering as a result. At the level of communication, they might needlessly hamper their efforts to raise concern for artificial sentience by mostly framing the issue in terms of digital sentience. This framing might lead people who are skeptical of digital sentience to mistakenly dismiss the broader issue of artificial sentience.

Similar points apply to those who believe that “traditional” biological sentience dominates in expectation: they, too, might restrict their focus to an overly narrow set of systems, and thereby neglect to consider a wide range of scenarios that may intuitively seem like science fiction, yet which nevertheless deserve serious consideration on reflection (e.g. scenarios that involve a large-scale spread of suffering due to space colonization).

In summary, there are reasons to doubt both the digital dominance position and the “traditional” biological dominance position. Moreover, it seems that there is something to be gained by not using the narrow term “digital sentience” to refer to the broader category of “artificial sentience”, and by being clear about just how much broader this latter category is.

A convergence of moral motivations

My aim in this post is to outline a variety of motivations that all point me in broadly the same direction: toward helping others in general and prioritizing the reduction of suffering in particular.


Contents

  1. Why list these motivations?
  2. Clarification
  3. Compassion
  4. Consistency
  5. Common sense: A trivial sacrifice compared to what others might gain
  6. The horror of extreme suffering: The “game over” motivation
  7. Personal identity: I am them
  8. Fairness
  9. Status and recognition
  10. Final reflections

Why list these motivations?

There are a few reasons why I consider it worthwhile to list this variety of moral motivations. For one, I happen to find it interesting to notice that my motivations for helping others are so diverse in their nature. (That might sound like a brag, but note that I am not saying that my motivations are necessarily all that flattering or unselfish.) This diversity in motivations is not obvious a priori, and it also seems different from how moral motivations are often described. For example, reasons to help others are frequently described in terms of a singular motivation, such as compassion.

Beyond mere interest, there may also be some psychological and altruistic benefits to identifying these motivations. For instance, if we realize that our commitment to helping others rests on a wide variety of motivations, this might in turn give us a greater sense that it is a robust commitment that we can be confident in, as opposed to being some brittle commitment that rests on just a single wobbly motivation.

Relatedly, if we have a sense of confidence in our altruistic commitment, and if we are aware that it rests on a broad set of motivations, this might also help strengthen and maintain this commitment. For example, one can speculate that it may be possible to tap into extra reserves of altruistic motivation by skillfully shifting between different sources of such motivation.

Another potential benefit of becoming more aware of, and drawing on, a greater variety of altruistic motivations is that they may each trigger different cognitive styles with their own unique benefits. For example, the patterns of thought and attention that are induced by compassion are likely different from those that are induced by a sense of rigorous impartiality, and these respective patterns might well complement each other.

Lastly, being aware of our altruistic motivations could help give us greater insight into our biases. For example, if we are strongly motivated by empathic concern, we might be biased toward mostly helping cute-looking beings who appeal to our empathy circuits, like kittens and squirrels, and toward downplaying the interests of beings who may look less cute, such as lizards and cockroaches. And note that such a bias can persist even if we are also motivated by impartiality at some level. Indeed, it is a recipe for bias to think that a mere cerebral endorsement of impartiality means that we will thereby adhere to impartiality at every level of our cognition. A better awareness of our moral motivations may help us avoid such naive mistakes.

Clarification

I should clarify that this post is not meant to capture everyone’s moral motivations, nor is my aim to convince people to embrace all the motivations I outline below. Rather, my intention is first and foremost to present the moral motivations that I myself am compelled by, and which all to some extent drive me to try to reduce suffering. That being said, I do suspect that many of these motivations will tend to resonate with others as well.

Compassion

Compassion has been defined as “sympathetic consciousness of others’ distress together with a desire to alleviate it”. This is similar to having empathic concern for others (compassion is often regarded as a component of empathic concern).

In contrast to some of the other motivations listed below, compassion is less cerebral and more directly felt as a motivation for helping others. For example, when we experience sympathy for someone’s misery, we hardly need to go through a sequence of inferences in order to be motivated to alleviate that misery. The motivation to help is almost baked into the sympathy itself. Indeed, studies suggest that empathic concern is a significant driver of costly altruism.

In my own case, I think compassion tends to play an important role, though I would not claim that it is sufficient or even necessary for motivating the general approach that I would endorse when it comes to helping others. One reason it is not sufficient is that it needs to be coupled with a more systematic component, which I would broadly refer to as ‘consistency’.

Consistency

As a motivation for helping others, consistency is rather different from compassion. For example, unlike compassion, consistency is cerebral in nature, to the degree that it almost has a logical or deductive character. That is, unlike compassion, consistency per se does not highlight others’ suffering or welfare from the outset. Instead, efforts to help others are more a consequence of applying consistency to our knowledge about our own direct experience: I know that intense suffering feels bad and is worth avoiding for me (all else equal), and hence, by consistency, I conclude that intense suffering feels bad and is worth avoiding for everyone (all else equal).

One might object that it is not inconsistent to treat one’s own suffering differently from the suffering of others, on the grounds that there are morally relevant differences between the two. I think there are several points to discuss back and forth on this issue. However, I will not engage in such arguments here, since my aim in this section is not to defend consistency as a moral motivation, but simply to present a rough outline of how consistency can motivate efforts to help others.

As noted above, a consistency-based motivation for helping others does not strictly require compassion. However, in psychological terms, since none of us are natural consistency-maximizers, it seems likely that compassion will usually be helpful for getting altruistic motivations off the ground in practice. Conversely, as hinted in the previous section, compassion alone is not sufficient for motivating the most effective actions for helping others. After all, one can have a strong desire to reduce suffering without having the consistency-based motivation to treat equal suffering equally and to spend one’s limited resources accordingly.

In short, the respective motivations of compassion and consistency seem to each have unique benefits that make them worth combining, and I would say that they are both core pillars in my own motivations for helping others.

Common sense: A trivial sacrifice compared to what others might gain

Another motivation that appeals to me might be described as a commonsense motivation. That is, there is a vast number of sentient beings in the world, of which I am just one, and hence the beneficial impact that I can have on other sentient beings is vastly greater than the beneficial impact I can have on my own life. After all, once my own basic needs are met, there is probably little I can do to improve my wellbeing much further. Indeed, I will likely find it more meaningful and fulfilling to try to help others than to try to improve my own happiness (cf. the paradox of hedonism and the psychological benefits of having a prosocial purpose).

Of course, it is difficult to quantify just how much greater our impact on others might be compared to our impact on ourselves. Yet given the enormous number of sentient beings who exist around us, and given that our impact potentially reaches far into the future, it is not unreasonable to think that it could be greater by at least a factor of a million (e.g. we may prevent at least a million times as many instances of similarly bad suffering in expectation for others as for ourselves).

In light of this massive difference in potential impact, it feels like a no-brainer to dedicate a significant amount of resources toward helping others, especially when my own basic needs are already met. Not doing so would amount to giving several orders of magnitude greater importance to my own wellbeing than to the wellbeing of others, and I see no justification for that. Indeed, one need not endorse anything close to perfect consistency and impartiality to believe that such a massively skewed valuation is implausible. It is arguably just common sense.

The horror of extreme suffering: The “game over” motivation

A particularly strong motivation for me is the sheer horror of extreme suffering. I refer to this as the “game over” motivation because that is my reaction when I witness cases of extreme suffering: a clear sense that nothing is more important than the prevention of such extreme horrors. Game over.

One might argue that this motivation is not distinct from compassion and empathic concern in the broadest sense. And I would agree that it is a species of that broad category of motivations. But I also think there is something distinctive about this “game over” motivation compared to generic empathic concern. For example, the “game over” motivation seems meaningfully different from the motivation to help someone who is struggling in more ordinary ways. In fact, I think there is a sense in which our common circuitry of sympathetic relating practically breaks down when it comes to extreme suffering. The suffering becomes so extreme and unthinkable that our “sympathometer” crashes, and we in effect check out. This is another reason it seems accurate to describe it as a “game over” motivation.

Whereas the motivations listed above all serve to motivate efforts to help others in general, the motivation described in this section is more of a driver of what, specifically, I consider the highest priority when it comes to helping others, namely to alleviate and prevent extreme suffering.

Personal identity: I am them

Another motivation derives from what may be called a universal view of personal identity, also known as open individualism. This view entails that all sentient beings are essentially different versions of you, and that there is no deep sense in which the future consciousness-moments of your future self (in the usual narrow sense) are more ‘you’ than the future consciousness-moments of other beings.

Again, I will not try to defend this view here; I will simply describe how it can motivate efforts to help others (for a defense, see e.g. Kolak, 2004; Leighton, 2011, ch. 7; Vinding, 2017).

I happen to accept this view of personal identity, and in my opinion it ultimately leaves no alternative but to work for the benefit of all sentient beings. In light of open individualism, it makes no more sense to endorse narrow egoism than to, say, only care about one’s own suffering on Tuesdays. Both equally amount to an arbitrary disregard of my own suffering from an open individualist perspective.

This is one of the ways in which my motivations for helping others are not necessarily all that flattering: on a psychological level, I often feel that I am selfishly trying to prevent future versions of myself from being tortured, virtually none of whom will share my name.

I would say that the “I am them” motivation is generally a strong driver for me, not in a way that changes any of the basic upshots derived from the other motivations, but in a way that reinforces them.

Fairness

Considerations and intuitions related to fairness are also motivating to me. For example, I am lucky to have been born in a relatively wealthy country, and not least to have been born as a human rather than as a tightly confined chicken in a factory farm or a preyed-upon mouse in the wild. There is no sense in which I personally deserve this luck over those who are born in conditions of extreme misery and destitution. Consequently, it is only fair that I “pay back” my relative luck by working to help those beings who were or will be much less lucky in terms of their birth conditions and the like.

I should note that this is not among my stronger or more salient motivations, but I still think it has significant appeal and that it plays some role for me.

Status and recognition

Lastly, I want to highlight the motivation that any cynic would rightly emphasize, namely to gain status and recognition. Helping others can be a way to gain recognition and esteem among our peers, and I am obviously also motivated by that.

There is quite a taboo around acknowledging this motive, but I think that is a mistake. It is simply a fact about the human mind that we want recognition, and this is not necessarily a problem in and of itself. It only becomes a problem if we allow our drive for status to corrupt our efforts to help others, which is no doubt a real risk. Yet we hardly reduce that risk by pretending that we are unaffected by these drives. On the contrary, openly admitting our status motives probably gives us a better chance of mitigating their potentially corrupting influence.

Moreover, while our status drives can impede our altruistic efforts, we should not overlook the possibility that they might sometimes do the opposite, namely improve our efforts to help others.

How could that realistically happen? One way it might happen is by forcing us to seek out the assessments of informed people. That is, if our altruistic efforts are partly driven by a motive to impress relevant experts and evaluators of our work, we might be more motivated to consider and integrate a wider range of informed perspectives (compared to if we were not motivated to impress such evaluators).

Of course, this only works if we are indeed motivated to impress an informed audience, as opposed to just any audience that may be eager to throw recognition our way. Seeking the right audience to impress — those who are impressed by genuinely helpful contributions — might thus be key to making our status drives work in favor of our altruistic efforts rather than against them (cf. Hanson, 2010; 2018).

Another reason to believe that status drives can be helpful is that they have proven to be psychologically potent for human beings. Hence, if we could hypothetically rob a human brain of its status drives, we might well reduce its altruistic drives overall, even if other sources of altruistic motivation were kept intact. It might be tantamount to removing a critical part of an engine, or at least a part that adds a significant boost.

In terms of my own motivations, I would say that drives for status probably often do help motivate my altruistic efforts, whether I endorse my status drives or not. Yet it is difficult to estimate the strength and influence of these drives. After all, the status motive is regarded as unflattering, and hence there are reasons to think that my mind systematically downplays its influence. Moreover, like all of the motivations listed here, the status motive likely varies in strength depending on contextual factors, such as whether I am around other people or not; I suspect that it becomes weaker when I am more isolated, which in effect suggests a way to reduce my status drives when needed.

I should also note that I aspire to view my status drives with extreme suspicion. Despite my claims about how status drives could potentially be helpful, I think the default — if we do not make an intense effort to hone and properly direct our status drives — is that they distort our efforts to help others. And I think the endeavor of questioning our status drives tends to be extremely difficult, not least since status-seeking behavior can take myriad forms that do not look or feel anything like status-seeking behavior. It might just look like “conforming to the obviously reasonable views of my peers”, or like “pursuing this obscure and interesting idea that somehow feels very important”.

So a key question I try to ask myself is: am I really trying to help sentient beings, or am I mostly trying to raise my personal status? And I strive to look at my professed answers with skepticism. Fortunately, I feel that the “I am them” motivation can be a powerful tool in this regard. It essentially forces the selfish parts of my mind to ask: do I really want to gain status more than I want to prevent my future self from being tortured? If not, then I have strong reasons to try to reduce any torture-increasing inefficiencies that might be introduced by my status motives, and to try, if possible, to harness my status motives in the direction of reducing my future torment.

Final reflections

The motivations described above make up quite a complicated mix, from other-oriented compassion and fairness to what feels more like a self-oriented motivation aimed at sparing myself (in an expansive sense) from extreme suffering. I find it striking just how diverse these motivations are, and how they nonetheless — from so seemingly different starting points — can end up converging toward roughly the same goal: to reduce suffering for all sentient beings.

For me, this convergence makes the motivation to help others feel akin to a rope woven from many complementary strands: even if one of the strands is occasionally weakened, the others can usually still hold the rope together.

But again, it is worth stressing that the drive for status is somewhat of an exception, in that it takes serious effort to make this drive converge toward aims that truly help other sentient beings. More generally, I think it is important to never be complacent about the potential for our status drives to corrupt our motivations to help others, even if we feel like we are driven by a strong and diverse set of altruistic motivations. Status drives are like the One Ring: powerful yet easily corrupting, and they are probably best viewed as such.

Minimalist versions of objective list theories of wellbeing

My colleague Teo Ajantaival is currently writing an essay on minimalist views of wellbeing, i.e. views according to which wellbeing ultimately consists in the minimization of one or more sources of illbeing. My aim in this post is to sketch out a couple of related points about objective list theories of wellbeing.

I should note that objective list theories of wellbeing are not necessarily the ones that I myself consider most plausible, but I still think it is worth highlighting how one can endorse minimalist versions of objective list views, which are arguably the most plausible versions of these views.


Contents

  1. Objective list theories of wellbeing
  2. Harms of premature death
  3. A possible foundation for a negative utilitarian view
  4. Concluding remarks

Objective list theories of wellbeing

In their typical formulations, objective list theories say that wellbeing consists in having a variety of objective goods in one’s life. These purported objective goods could include knowledge, health, virtuous conduct, personal achievements, and autonomy. Note that a key claim of objective list views is that these purported goods contribute independently to a person’s wellbeing, and not merely by means of satisfying our desires or improving our hedonic states.

Minimalist versions of objective list theories can be largely equivalent to standard versions of these theories, in the sense that they may include essentially the same list of objective goods, except that these “goods” are construed in terms of the absence of bads. That is, minimalist versions of objective list theories understand wellbeing as consisting in the absence of objective bads, rather than consisting in the presence of objective goods (which do not exist on the minimalist conception of wellbeing).

For example, rather than seeing autonomy as an objective good that can bring our wellbeing above some neutral level, the absence of autonomy is seen as an objective bad that detracts from our wellbeing, placing us below a neutral or unproblematic state of wellbeing; and having full autonomy can at most bring us to an untroubled or unproblematic level of wellbeing.

Similarly, rather than seeing health as an objective good that takes us above a neutral or unproblematic state, the lack of health is seen as an objective bad, and complete health can at most bring us to an untroubled level of wellbeing. Rather than seeing virtue as an objective good that contributes positively to wellbeing, vice is seen as an objective bad that contributes negatively, and virtue may be understood as the mere absence of vice (cf. Kupfer, 2011; Knutsson, 2022, sec. 4). And so on for any other purported objective good.

Harms of premature death

It is worth noting that minimalist versions of objective list views can support the view that premature death is bad, and they can do so in many ways. For not only may these views consider premature death to be bad because it entails many other objective bads (e.g. death would prevent us from completing our life projects), but these views may also see premature death itself as an objective bad. Minimalist objective list views may thus see a far greater harm in death than do more optimistic views of wellbeing.

A possible foundation for a negative utilitarian view

Note also how these minimalist views could be incorporated into a version of utilitarianism that might be more intuitive than most other forms of utilitarianism. That is, minimalist objective list views could form the basis of a negative utilitarian view that says that we ought to minimize illbeing, understood as the minimization of the independent bads that contribute to illbeing.

Such views can avoid many of the counterintuitive implications of classical utilitarianism — e.g. that we should force people to bring about new happy beings in hypothetical worlds where nobody wants to create such beings, even at the price of increasing extreme suffering — while also avoiding the conclusion that early death is always morally best for any individual’s own sake in isolation, as implied by some other forms of negative utilitarianism.

Of course, minimalist theories of wellbeing are not tied to any particular view of ethics, but this ethics-related point seems worth stressing since discussions of negative utilitarianism often overlook the possibility of basing utilitarianism on the theories of wellbeing outlined above.

Concluding remarks

My aim in this post has not been to provide arguments in favor of minimalist objective list theories over competing “objective goods” theories of wellbeing. Such arguments could seek to establish that it is more plausible that the purported objective goods found in objective list theories are in fact objective bads to be avoided, or they could seek to establish that purported objective goods only contribute instrumentally to wellbeing by reducing objective bads. Yet such arguments are beyond the scope of this brief post, whose aim has been of a more modest nature, namely to draw attention to a group of minimalist views that is often overlooked.

Minimalist views can be construed in many different ways and can accommodate a wide range of intuitions, which makes them a far richer and more flexible class of views than is commonly acknowledged. Consequently, it is worth avoiding the common mistake of dismissing all minimalist views with reference to arguments that only apply to a relatively narrow subset of these views.

Distrusting salience: Keeping unseen urgencies in mind

The psychological appeal of salient events and risks can be a major hurdle to optimal altruistic priorities and impact. My aim in this post is to outline a few reasons to approach our intuitive fascination with salient events and risks with a fair bit of skepticism, and to actively focus on that which is important yet unseen, hiding in the shadows of the salient.


Contents

  1. General reasons for caution: Availability bias and related biases
  2. The news: A common driver of salience-related distortions
  3. The narrow urgency delusion
  4. Massive problems that always face us: Ongoing moral disasters and future risks
  5. Salience-driven distortions in efforts to reduce s-risks
  6. Reducing salience-driven distortions

General reasons for caution: Availability bias and related biases

The human mind is subject to various biases that involve an overemphasis on the salient, i.e. that which readily stands out and captures our attention.

In general terms, there is the availability bias, also known as the availability heuristic, namely the common tendency to base our beliefs and judgments on information that we can readily recall. For example, we tend to overestimate the frequency of events when examples of these events easily come to mind.

Closely related is what is known as the salience bias, which is the tendency to overweight salient features and events when making decisions. For instance, when deciding to buy a given product, the salience bias may lead us to give undue importance to a particularly salient feature of that product — e.g. some fancy packaging — while neglecting less salient yet perhaps more relevant features.

A similar bias is the recency bias: our tendency to give disproportionate weight to recent events in our belief-formation and decision-making. This bias is in some sense predicted by the availability bias, since recent events tend to be more readily available to our memory. Indeed, the availability bias and the recency bias are sometimes considered equivalent, even though it seems more accurate to view the recency bias as a consequence or a subset of the availability bias; after all, readily remembered information does not always pertain to recent events.

Finally, there is the phenomenon of belief digitization, which is the tendency to give undue weight to (what we consider) the single most plausible hypothesis in our inferences and decisions, even when other hypotheses also deserve significant weight. For example, if we are considering hypotheses A, B, and C, and we assign them the probabilities 50 percent, 30 percent, and 20 percent, respectively, belief digitization will push us toward simply accepting A as though it were true. In other words, belief digitization pushes us toward altogether discarding B and C, even though B and C collectively have the same probability as A. (See also related studies on Salience Theory and on the overestimation of salient causes and hypotheses in predictive reasoning.)
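To make the cost of this tendency concrete, here is a minimal sketch in Python (with hypothetical payoff numbers of my own, not figures from the studies cited) of how digitized beliefs can distort a decision:

    # Compare deciding on the full probability distribution with deciding
    # as if the single most plausible hypothesis (A) were simply true.
    # All payoff numbers below are hypothetical illustrations.

    probs = {"A": 0.5, "B": 0.3, "C": 0.2}       # full distribution
    digitized = {"A": 1.0, "B": 0.0, "C": 0.0}   # hypothesis A treated as certain

    payoffs = {
        "narrow_bet":  {"A": 10, "B": 0, "C": 0},  # pays off only if A is true
        "broad_hedge": {"A": 5,  "B": 6, "C": 6},  # pays off under all hypotheses
    }

    def expected_value(action, beliefs):
        return sum(p * payoffs[action][h] for h, p in beliefs.items())

    for label, beliefs in [("full distribution", probs), ("digitized", digitized)]:
        best = max(payoffs, key=lambda a: expected_value(a, beliefs))
        print(f"{label}: best action = {best}")

    # The full distribution favors broad_hedge (expected value 5.5 vs 5.0),
    # but digitized beliefs favor narrow_bet (10 vs 5): hypotheses B and C
    # are simply discarded, despite jointly being as likely as A.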

All of the biases mentioned above can be considered different instances of a broader cluster of availability/salience biases, and they each give us reason to be cautious of the influence that salient information has on our beliefs and our priorities.

The news: A common driver of salience-related distortions

One way in which our attention can become preoccupied with salient (though not necessarily crucial) information is through the news. Much has been written against spending a lot of time on the news, and the reasons against it are probably even stronger for those who are trying to spend their time and resources in ways that help sentient beings most effectively.

For even if we grant that there is substantial value in following the news, it seems plausible that the opportunity costs are generally too high, in terms of what one could instead spend one’s limited time learning about or advocating for. Moreover, there is a real risk that a preoccupation with the news has outright harmful effects overall, such as by gradually pulling one’s focus away from the most important problems and toward less important and less neglected problems. After all, the prevailing news criteria or news values decidedly do not reflect the problems that are most important from an impartial perspective concerned with the suffering of all sentient beings.

I believe the same problem exists in academia: a certain topic becomes fashionable, there are calls for abstracts, and there is a strong pull to write and talk about that given topic. And while it may indeed be important to talk and write about fashionable topics for the purpose of getting ahead — or not falling behind — in academia, it seems more doubtful whether such topical talk is well-suited for the purpose of making a difference in the world. In other words, the “news values” of academia are not necessarily much better than the news values of mainstream journalism.

The narrow urgency delusion

A salience-related pitfall that we can easily succumb to when following the news is what we may call the “narrow urgency delusion”. This is when the news covers some specific tragedy and we come to feel, at a visceral level, that this tragedy is the most urgent problem that is currently taking place. Such a perception is, in a very important sense, an illusion.

The reality is that tragedy on an unfathomable scale is always occurring, and the tragedies conveyed by the news are sadly but a tiny fraction of the horrors that are constantly taking place around us. Yet the tragedies that are always occurring, such as children who suffer and die from undernutrition and chickens who are boiled alive, are so common and so underreported that they all too readily fade from our moral perception. To our intuitions, these horrors seemingly register as mere baseline horror — as unsalient abstractions that carry little felt urgency — even though the horrors in question are every bit as urgent as the narrow sliver of salient horrors conveyed in the news (Vinding, 2020, sec. 7.6).

We should thus be clear that the delusion involved in the narrow urgency delusion is not the “urgency” part — there is indeed unspeakable horror and urgency involved in the tragedies reported by the news. The delusion rather lies in the “narrow” part; we find ourselves in a condition that contains extensive horror and torment, all of which merits compassion and concern.

So it is not that the salient victims are less important than what we intuitively feel, but rather that the countless victims whom we effectively overlook are far more important than what we (do not) feel.

Massive problems that always face us: Ongoing moral disasters and future risks

The following are some of the urgent problems that always face us, yet which are often less salient to us than the individual tragedies that are reported in the news:

  • The extreme suffering of humans afflicted by poverty, disease, and undernutrition
  • The suffering of animals exploited by humans, such as chickens boiled alive
  • The suffering of animals in nature
  • Risks of future atrocities, including risks of astronomical suffering (s-risks)

These common and ever-present problems are, by definition, not news, which hints at the inherent ineffectiveness of news when it comes to giving us a clear picture of the reality we inhabit and the problems that confront us.

As the final entry on the list above suggests, the problems that face us are not limited to ongoing moral disasters. We also face risks of future atrocities, potentially involving horrors on an unprecedented scale. Such risks will plausibly tend to feel even less salient and less urgent than do the ongoing moral disasters we are facing, even though our influence on these future risks — and future suffering in general — could well be more consequential given the vast scope of the long-term future.

So while salience-driven biases may blind us to ongoing large-scale atrocities, they probably blind us even more to future suffering and risks of future atrocities.

Salience-driven distortions in efforts to reduce s-risks

There are many salience-related hurdles that may prevent us from giving significant priority to the reduction of future suffering. Yet even if we do grant a strong priority to the reduction of future suffering, including s-risks in particular, there are reasons to think that salience-driven distortions still pose a serious challenge in our prioritization efforts.

Our general availability bias gives us some reason to believe that we will overemphasize salient ideas and hypotheses in efforts to reduce future suffering. Yet perhaps more compelling are the studies on how we tend to greatly overestimate salient hypotheses when we engage in predictive and multi-stage reasoning in particular. (Multi-stage reasoning is when we make inferences in successive steps, such that the output of one step provides the input for the next one.)

After all, when we are trying to predict the main sources of future suffering, including specific scenarios in which s-risks materialize, we are very much engaging in predictive and multi-stage reasoning. Therefore, we should arguably expect our reasoning about future causes of suffering to be too narrow by default, with a tendency to give too much weight to a relatively small set of salient risks at the expense of a broader class of less salient (yet still significant) risks that we are prone to dismiss in our multi-stage inferences and predictions.

This effect can be further reinforced through other mechanisms. For example, if we have described and explored — or even just imagined — a certain class of risks in greater detail than other risks, then this alone may lead us to regard those more elaborately described risks as being more likely than less elaborately explored scenarios. Moreover, if we find ourselves in a group of people who focus disproportionately on a certain class of future scenarios, this may further increase the salience and perceived likelihood of these scenarios, compared to alternative scenarios that may be more salient in other groups and communities.

Reducing salience-driven distortions

The pitfalls mentioned above seem to suggest some concrete ways in which we might reduce salience-driven distortions in efforts to reduce future suffering.

First, they warn us against neglecting less salient hypotheses when engaging in predictive and multi-stage reasoning. Specifically, when thinking about future risks, we should be careful not to simply focus on what appears to be the single greatest risk while effectively neglecting all others. After all, even if the risk we regard as the single greatest risk really is the greatest one, that risk might still be fairly modest compared to the totality of future risks, and we might still do better by deliberately working to reduce a relatively broad class of risks.

Second, the tendency to judge scenarios to be more likely when we have thought about them in detail would seem to recommend that we avoid exploring future risks in starkly unbalanced ways. For instance, if we have explored one class of risks in elaborate detail while largely neglecting another, it seems worth trying to outline concrete scenarios that exemplify the more neglected class of risks, so as to correct any potentially unjustified disregard of their importance and likelihood.

Third, the possibility that certain ideas can become highly salient in part for sociological reasons may recommend a strategy of exchanging ideas with, and actively seeking critiques from, people who do not fully share the outlook that has come to prevail in one’s own group.

In general, it seems that we are likely to underestimate our empirical uncertainty (Vinding, 2020, sec. 9.1-9.2). The space of possible future outcomes is vast, and any specific risk that we may envision is but a tiny subset of the risks we are facing. Hence, our most salient ideas regarding future risks should ideally be held up against a big question mark that represents the many (currently) unsalient risks that confront us.

Put briefly, we need to cultivate a firm awareness of the limited reliability of salience, and a corresponding awareness of the immense importance of the unsalient. We need to make an active effort to keep unseen urgencies in mind.

Four reasons to help nearby insects even if your main priority is to reduce s-risks

When trying to reduce suffering in effective ways, one can easily get pulled toward an abstract focus that pertains almost exclusively to speculative far future scenarios. There are, to be sure, good reasons to work to reduce risks of astronomical future suffering, or s-risks. Yet even if the reduction of s-risks is our main priority, there are still compelling reasons to also focus on helping beings in our immediate surroundings, such as insects and other small beings. My aim in this post is to list a few of these reasons.

I should note that most of the points I make in this essay pertain to all sentient beings who may cross our paths — not just insects — but I still want to emphasize insects because we encounter them so often, and because they tend to be uniquely at our mercy.


Contents

  1. Helping nearby insects is often trivially cheap and worthwhile
  2. Helping nearby insects can reinforce our dedication and commitment to suffering reduction
  3. Helping nearby insects can help foster greater concern for neglected beings
  4. Helping nearby insects can prevent suffering reduction from becoming unduly abstract
  5. Further reading

Helping nearby insects is often trivially cheap and worthwhile

Perhaps the main argument against helping beings in our vicinity, from an altruistic perspective, is that the opportunity costs are too high in terms of what we could be doing to help future beings.

I think this is an important argument, as the opportunity costs are surely worth keeping in mind, and they can indeed be high. But that being said, it is also true that many efforts to help insects in our vicinity are extremely cheap, and thus carry practically no costs to our efforts to reduce future suffering. Indeed, as I will argue below, efforts to help beings in our vicinity may generally have beneficial effects on our efforts to reduce future suffering.

Helping nearby insects can reinforce our dedication and commitment to suffering reduction

Small-scale acts of beneficence toward insects can plausibly help to reinforce a sense of commitment to the reduction of suffering, including the reduction of s-risks. Specifically, such small-scale acts may help strengthen a sense of self-identity that is centered on suffering reduction as a core purpose, and pursuing these compassionate acts may be seen as a uniquely tangible step in line with that purpose — a concrete step toward a world with less suffering.

Helping nearby insects can help foster greater concern for neglected beings

In addition to fostering greater concern for suffering, efforts to help insects may likewise foster greater concern for small beings in particular. This is important since small beings such as insects are extremely numerous and neglected, and also since a large fraction of future suffering is likely to occur in similarly exotic sentient minds.

Thus, when we perform small-scale actions that are aligned with a concern for tiny creatures with foreign minds, we plausibly make ourselves better able to take the interests of such minds seriously as a general matter. Conversely, if we act in ways that disregard these beings, we may be more inclined to rationalize the harms that befall them (by analogy to how eating certain animals can lead us to deny and disregard their sentience and their interests).

Helping nearby insects can prevent suffering reduction from becoming unduly abstract

A danger of working to reduce far future suffering is that it can end up resembling a game of speculative abstractions that have little to do with real-world suffering and real-world efforts to help others. To be clear, I think theoretical work on how we can best reduce future suffering is extremely important, and I have argued elsewhere that research is often more important than direct action in efforts to improve the world. Yet there is nevertheless a risk that such research-related work ends up being overly abstract, and that the reduction of suffering ends up being a problem that we mostly think and talk about, as opposed to it being a problem that we are above all striving to do something about.

Efforts to prevent harm to nearby insects may help reinforce this action-centered approach. In particular, they can help firmly establish the reduction of suffering as something that we do, and not something that we can only achieve in meaningful ways by thinking about risks of future suffering.

This argument in effect turns on its head a common objection against a focus on helping insects, since it is sometimes objected that a focus on helping insects is unduly abstract and detached. Yet there is nothing inherently abstract or detached about helping insects. It can be quite the opposite.

Further reading

Some suggestions on how to reduce the suffering of insects, including the suffering of insects in our vicinity, can be found here. As the author stresses toward the end, it is important not to make the mistake of spending too much effort on these suggestions; it is indeed critical to keep competing priorities and opportunity costs in mind.

For some considerations on prioritizing short-term versus long-term suffering, see:

Reasons to include insects in animal advocacy

I have seen some people claim that animal activists should primarily be concerned with certain groups of numerous vertebrates, such as chickens and fish, whereas we should not be concerned much, if at all, with insects and other small invertebrates. (See e.g. here.) I think there are indeed good arguments in favor of emphasizing chickens and fish in animal advocacy, yet I think those same arguments tend to support a strong emphasis on helping insects as well. My aim in this post is to argue that we have compelling reasons to include insects and other small invertebrates in animal advocacy.


Contents

  1. A simplistic sequence argument: Smaller beings in increasingly large numbers
    1. The sequence
    2. Why stop at chickens or fish?
  2. Invertebrate vs. vertebrate nervous systems
    1. Phylogenetic distance
    2. Behavioral and neurological evidence
    3. Nematodes and extended sequences
  3. Objection based on appalling treatment
  4. Potential biases
    1. Inconvenience bias
    2. Smallness bias
    3. Disgust and fear reflexes
    4. Momentum/status quo bias
  5. Other reasons to focus more on small invertebrates
    1. Neglectedness
    2. Opening people’s eyes to the extent of suffering and harmful decisions
    3. Risks of spreading invertebrates to space: Beings at uniquely high risk of suffering due to human space expansion
    4. Qualifications and counter-considerations
  6. My own view on strategy in brief
  7. Final clarification: Numbers-based arguments need not assume that large amounts of mild suffering can be worse than extreme suffering
  8. Acknowledgments

A simplistic sequence argument: Smaller beings in increasingly large numbers

As a preliminary motivation for the discussion, it may be helpful to consider the sequence below.

I should first of all clarify what I am not claiming in light of the following sequence. I am not making any claims about the moral relevance of the neuron counts of individual beings or groups of beings (that is a complicated issue that defies simple answers). Nor am I claiming that we should focus mostly on helping beings such as land arthropods and nematodes. The claim I want to advance is a much weaker one, namely that, in light of the sequence below, it is hardly obvious that we should focus mostly on helping chickens or fish.

The sequence

At any given time, there are roughly:

  • 780 million farmed pigs, with an estimated average neuron count of 2.2 billion. Total neuron count: ~1.7 * 10^18.
  • 33 billion farmed chickens, with an estimated average neuron count of 200 million. Total neuron count: ~6.6 * 10^18.
  • 10^15 fish (the vast majority of whom are wild fish), with an estimated average neuron count of 1 million neurons (this number lies between the estimated neuron count of a larval zebrafish and an adult zebrafish; note that there is great uncertainty in all these estimates). Total neuron count: ~10^21. It is estimated that humanity kills more than a trillion fish a year, and if we assume that they likewise have an average neuron count of around 1 million neurons, the total neuron count of these beings is ~10^18.
  • 10^19 land arthropods, with an estimated average neuron count of 15,000 neurons (some insects have brains with more than a million neurons, but most arthropods appear to have considerably fewer neurons). Total neuron count: ~1.5 * 10^23. If humanity kills roughly the same proportion of land arthropods as the proportion of fish that we kill (e.g. through insecticides and insect farming), then the total neuron count of the land arthropods we kill is ~10^20.
  • 10^21 nematodes, with an estimated average neuron count of 300 neurons. Total neuron count: ~3 * 10^23.
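
For those who want to sanity-check these rough figures, here is a minimal sketch in Python that reproduces the total neuron counts from the population and neuron-count estimates above (all of which, as noted, are highly uncertain):

    # Reproduce the back-of-the-envelope totals above. The inputs are the
    # same rough, highly uncertain estimates given in the list.

    populations = {
        "farmed pigs":     (7.8e8,  2.2e9),   # (number of beings, avg neurons)
        "farmed chickens": (3.3e10, 2e8),
        "fish":            (1e15,   1e6),
        "land arthropods": (1e19,   1.5e4),
        "nematodes":       (1e21,   3e2),
    }

    for group, (count, avg_neurons) in populations.items():
        print(f"{group}: total neurons ~ {count * avg_neurons:.1e}")

    # Prints ~1.7e+18, ~6.6e+18, ~1.0e+21, ~1.5e+23, and ~3.0e+23
    # respectively, matching the totals listed above.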

Why stop at chickens or fish?

The main argument that supports a strong emphasis on chickens or fish is presumably their large numbers (as well as their poor treatment, which I discuss below). Yet the numbers-based argument that supports a strong emphasis on chickens and fish could potentially also support a strong emphasis on small invertebrates such as insects. It is thus not clear why we should place a strict boundary right below chickens or fish beyond which this numbers-based argument no longer applies. After all, each step of this sequence entails a similar pattern in terms of crude numbers: we have individual beings who on average have 1-3 orders of magnitude fewer neurons yet who are 1-5 orders of magnitude more numerous than the beings in the previous step.

Invertebrate vs. vertebrate nervous systems

A defense that one could give in favor of placing a relatively strict boundary below fish is that we here go from vertebrates to invertebrates, and we can be significantly less sure that invertebrates suffer compared to vertebrates.

Perhaps this defense has some force. But how much? Our confidence that the beings in this sequence have the capacity to suffer should arguably decrease at least somewhat in each successive step, yet should the decrease in confidence from fish to insects really be that much bigger than in the previous steps?

Phylogenetic distance

Based on the knowledge that we ourselves can suffer, one might think that a group of beings’ phylogenetic distance from us (i.e. how distantly related they are to us) can provide a tentative prior as to whether those beings can suffer, and regarding how big a jump in confidence we should make for different kinds of beings. Yet phylogenetic distance per se arguably does not support a substantially greater decrease in confidence in the step from fish to insects compared to the previous steps in the sequence above. 

The last common ancestor of humans and insects appears to have lived around 575 million years ago, whereas the last common ancestor of humans and fish lived around 400-485 million years ago (depending on the species of fish; around 420-460 million years for the most numerous fish). By comparison, the last common ancestor of humans and chickens lived around 300 million years ago, while the last common ancestor of humans and pigs lived around 100-125 million years ago.

Thus, when we look at different beings’ phylogenetic distance from humans in these temporal terms, it does not seem that the step between fish and insects (in the sequence above) is much larger than the step between fish and chickens or between chickens and pigs. In each case, the increase in the “distance” appears to be something like 100-200 million years.

Behavioral and neurological evidence

Of course, “phylogenetic distance from humans” does not represent strong evidence as to whether a group of beings has the capacity to suffer. After all, humans are more closely related to starfish (~100 neurons) than to octopuses (~500 million neurons), and we have much stronger reasons to think that the latter can suffer, based on behavioral and neurological evidence (cf. the Cambridge Declaration on Consciousness).

Does such behavioral and neurological evidence support a uniquely sharp drop in confidence regarding insect sentience compared to fish sentience? Arguably not, as there is mounting behavioral and neuroscientific evidence of pain in (small) invertebrates. Additionally, there are various commonalities in the respective structures and developments of arthropod and vertebrate brains.

In light of this evidence, it seems that a sharp drop in confidence regarding pain in insects (versus pain in fish) requires a justification.

Nematodes and extended sequences

I believe that a stronger decrease in confidence is warranted when comparing arthropods and nematodes, for a variety of reasons: the nematode nervous system consists primarily of a so-called nerve ring, which is quite distinct from the brains of arthropods; and unlike the neurons of arthropods (and other animals), nematode neurons do not have action potentials or orthologs of sodium channels (e.g. Nav1 and Nav2), which appear to play critical roles in pain signaling in other animals.

However, the evidence of pain in nematodes should not be understated either. The probability of pain in nematodes still seems non-negligible, and it arguably justifies substantial concern for (the risk of) nematode pain, even if it does not warrant as strong a concern and priority as does the suffering of chickens, fish, and arthropods.

This discussion also hints at why the sequence argument above need not imply that we should primarily focus on risks of suffering in bacteria or atoms, as one may reasonably hold that the probability of such suffering decreases at a greater rate than the number of purported sufferers increases in such extended sequences.

Objection based on appalling treatment

Another reason one could give in favor of focusing on chickens and fish is that they are treated in particularly appalling ways, e.g. they are often crammed in extremely small spaces and killed in horrific ways. I agree that humanity’s abhorrent treatment of chickens and fish is a strong additional reason to prioritize helping them. Yet it seems that this same argument also favors a focus on insects.

After all, humanity poisons vast numbers of insects with insecticides that may cause intensely painful deaths, and in various insect farming practices — which are sadly growing — insects are commonly boiled, fried, or roasted alive. These practices seem no less cruel and appalling than the ways in which we treat and kill chickens and fish.

Potential biases

There are many reasons to expect that we are biased against giving adequate moral consideration to small invertebrates such as insects (in addition to our general speciesist bias). The four plausible biases listed below are by no means an exhaustive list.

Inconvenience bias

It is highly inconvenient if insects can feel pain, as it would imply that 1) we should be concerned about far more beings, which greatly complicates our ethical and strategic considerations (compared to if we just focused on vertebrates); 2) the extent of pain and suffering in the world is far greater than we would otherwise have thought, which may be a painful conclusion to accept; and 3) we should take far greater care not to harm insects in our everyday lives. All these inconveniences likely motivate us to conclude that insects are not sentient or that they are not that important in the bigger picture.

Smallness bias

Insects tend to be rather small, even compared to fish, which might make us reluctant to grant them moral consideration. In other words, our intuitions plausibly display a general sizeist bias. As a case in point, ants have more than twice as many neurons as lobsters, and there does not seem to be any clear reason to think that ants are less able to feel pain than are lobsters. Yet ants are obviously much smaller than lobsters, which may explain why people seem to show considerably more concern for lobsters than for ants, and why the number of people who believe that lobsters can feel pain (more than 80 percent in a UK survey) is significantly larger than the number of people who believe that ants can feel pain (around 56 percent). Of course, this pattern may also be partially explained by the inconvenience bias, since the acceptance of pain in lobsters seems less inconvenient than does the acceptance of pain in ants; but size likely still plays a significant role. (See also Vinding, 2015, “A Short Note on Insects”.)

Disgust and fear reflexes

It seems that many people have strong disgust reactions to (at least many) small invertebrates, such as cockroaches, maggots, and spiders. Some people may also feel fear toward these animals, or at least feel that they are a nuisance. Gut reactions of this kind may well influence our moral evaluations of small invertebrates in general, even though they ideally should not.

Momentum/status quo bias

The animal movement has not historically focused on invertebrates, and hence there is little momentum in favor of focusing on their plight. That is, our status quo bias seems to favor a focus on helping the vertebrates whom the animal movement has traditionally focused on. To be sure, status quo bias also works against concern for fish and chickens to some degree (which is worth controlling for as well), yet chickens and fish have still received considerably more focus from the animal movement, and hence status quo bias likely works against concern for insects to an even greater extent.

These biases should give us pause when we are tempted to reflexively dismiss the suffering of small invertebrates.

Other reasons to focus more on small invertebrates

In addition to the large number of arthropods and the evidence for arthropod pain, what other reasons might support a greater focus on small invertebrates?

Neglectedness

An obvious reason is the neglect of these beings. As hinted in the previous section, a focus on helping small invertebrates has little historical momentum, and it is still extremely neglected in the broader animal movement today. This seems to me a fairly strong reason to focus more on invertebrates on the margin, or at the very least to firmly include invertebrates in one’s advocacy.

Opening people’s eyes to the extent of suffering and harmful decisions

Another, perhaps less obvious reason is that concern for smaller beings such as insects might help reduce risks of astronomical suffering. This claim should immediately raise some concerns about suspicious convergence, and as I have argued elsewhere, there is indeed a real risk that expanding the moral circle could increase rather than reduce future suffering. Partly for this reason, it might be better to promote a deeper concern for suffering than to promote wider moral circles (see also Vinding, 2020, ch. 12).

Yet that being said, I also think there is a sense in which wider moral circles can help promote a deeper concern for suffering, and not least give people a more realistic picture of the extent of suffering in the world. Simply put, a moral outlook that includes other vertebrates besides humans will see far more severe suffering and struggle in the world, and a perspective that also includes invertebrates will see even more suffering still. Indeed, not only does such an outlook open one’s eyes to more existing suffering, but it may also open one’s eyes (more fully) to humanity’s capacity to ignore suffering and to make decisions that actively increase it, even today.

Risks of spreading invertebrates to space: Beings at uniquely high risk of suffering due to human space expansion

Another way in which greater concern for invertebrate suffering might reduce risks of astronomical suffering is that small invertebrates seem to be among the animals most likely to be sent into space on a large scale in the future (e.g. because they may survive better in extreme environments). Indeed, some invertebrates — including fruit flies, crickets, and wasps — have already been sent into space, and some tardigrades were even sent to the moon (though the spacecraft crashed and probably none survived). Hence, the risk of spreading animals to space plausibly gives us additional reason to include insects in animal advocacy.

Qualifications and counter-considerations

To be clear, the considerations reviewed above merely push toward increasing the emphasis that we place on small beings such as insects — they are not necessarily decisive reasons to give primary focus to those beings. In particular, these arguments do not make a case for focusing on helping insects over, say, new kinds of beings who might be created in the future in even larger numbers.

It is also worth noting that there may be countervailing reasons not to emphasize insects more. One is that a stronger emphasis on insects could risk turning people away from the plight of non-human animals and the horror of suffering altogether, since many people might find animal advocacy difficult to relate to if insect suffering constitutes its main focus at a practical level. This may be a reason to favor a greater focus on the suffering of larger and (for most people) more relatable animals.

I think the considerations on both sides need to be taken into account, including considerations about future beings who may become even more numerous and more neglected than insects. The upshot, to my mind, is that while focusing primarily on helping insects is probably not the best way to reduce suffering (for most of us), it still seems likely that 1) promoting greater concern for insects, as well as 2) promoting concrete policies that help insects, both constitute a significant part of the optimal portfolio of aims to push for.

My own view on strategy in brief

While questions about which beings seem most worth helping (on the margin) can be highly relevant for many of our decisions, there are also many strategic decisions that do not depend critically on how we answer these questions.

Indeed, my own view on strategies for reducing animal suffering is that we generally do best by pursuing robust and broad strategies that help many beings simultaneously, without focusing too narrowly on any single group of beings. (Though as hinted above, I think there are many situations where it makes sense to focus on interventions that help specific groups of beings.)

This is one of the reasons why I tend to favor an antispeciesist approach to animal advocacy, with a particular emphasis on the importance of suffering. Such an approach is still compatible with highlighting the scale and neglectedness of the suffering of chickens, fish, and insects, as well as the scale and neglectedness of wild-animal suffering. That is, it can be a general approach that is thoroughly “scope-informed” about the realities on the ground.

And such a comprehensive approach seems further supported when we consider risks of astronomical suffering (despite the potential drawbacks alluded to earlier). In particular, when trying to help other animals today, it is worth asking how our efforts might be able to help future beings as well, since failing to do so could be a lost opportunity to spare large numbers of beings from suffering. (For elaboration, see “How the animal movement could do even more good” and Vinding, 2022, sec. 10.8-10.9.)

Final clarification: Numbers-based arguments need not assume that large amounts of mild suffering can be worse than extreme suffering

An objection against numbers-based arguments for focusing more on insects is that small pains, or a high probability of small pains, cannot be aggregated to be worse than extreme suffering.

I agree with the view that small pains do not add up to be worse than extreme suffering, yet I think it is mistaken to think that this view undermines any numbers-based argument for emphasizing insects more in animal advocacy. The reason, in short, is that we should also assign some non-negligible probability to the possibility that insects experience extreme suffering (e.g. in light of the evidence for pain in insects cited above). And this probability, combined with the very large number of insects, implies that there are many instances of extreme suffering occurring among insects in expectation. After all, the vast number of insects should lead us to believe that there are many beings who have experiences at the (expected) tail-end of the very worst experiences that insects can have.

As a concluding thought experiment that may challenge comfortable notions regarding the impossibility of intense pain among insects, imagine that you were given the choice between A) living as a chicken inside a tiny battery cage for a full day, or B) being continually born and reborn as an insect who has the experience of being burned or crushed alive, for a total of a million days (for concreteness, you may imagine that you will be reborn as a butterfly like the one pictured at the top of this post).

If we were really given this choice, I doubt that we would consider it an easy choice in favor of B. I doubt that we would dismiss the seriousness of the worst insect suffering.

Acknowledgments

For their helpful comments, I am grateful to Tobias Baumann, Simon Knutsson, and Winston Oswald-Drummond.
