Reasons to doubt that suffering is ontologically prevalent

It is sometimes claimed that we cannot know whether suffering is ontologically prevalent — for example, we cannot rule out that suffering might exist in microorganisms such as bacteria, or even in the simplest physical processes. Relatedly, it has been argued that we cannot trust common-sense views and intuitions regarding the physical basis of suffering.

I agree with the spirit of these arguments, in that I think it is true that we cannot definitively rule out that suffering might exist in bacteria or fundamental physics, and I agree that we have good reasons to doubt common-sense intuitions about the nature of suffering. Nevertheless, I think discussions of expansive views of the ontological prevalence of suffering often present a somewhat unbalanced and, in my view, overly agnostic picture of the physical basis of suffering. (By “expansive views”, I do not refer to views that hold that, say, insects are sentient, but rather views that hold that suffering exists in considerably simpler systems, such as in bacteria or fundamental physics.)

While we cannot definitively rule out that suffering might be ontologically prevalent, I do think that we have strong reasons to doubt it, as well as to doubt the practical importance of this possibility. My goal in this post is to present some of these reasons.


Contents

  1. Counterexamples: People who do not experience pain or suffering
  2. Our emerging understanding of pain and suffering
  3. Practical relevance

Counterexamples: People who do not experience pain or suffering

One argument against the notion that suffering is ontologically prevalent is that we seem to have counterexamples in people who do not experience pain or suffering. For example, various genetic conditions seemingly lead to a complete absence of pain and/or suffering. This, I submit, has significant implications for our views of the ontological prevalence (or non-prevalence) of suffering.

After all, the brains of these individuals include countless subatomic particles, basic biological processes, diverse instances of information processing, and so on, suggesting that none of these are in themselves sufficient to generate pain or suffering.

One might object that the brains of such people could be experiencing suffering — perhaps even intense suffering — that these people are just not able to consciously access. Yet even if we were to grant this claim, it does not change the basic argument that generic processes at the level of subatomic particles, basic biology, etc. do not seem sufficient to create suffering. For the processes that these people do consciously access presumably still involve at least some (indeed probably countless) subatomic particles, basic biological processes, electrochemical signals, different types of biological cells, diverse instances of information processing, and so on. This gives us reason to doubt all views that see suffering as an inherent or generic feature of processes at any of these many respective levels.

Of course, this argument is not limited to people who are congenitally unable to experience suffering; it applies to anyone who is just momentarily free from noticeable — let alone significant — pain or suffering. Any experiential moment that is free from significant suffering is meaningful evidence against highly expansive views of the ontological prevalence of significant suffering.

Our emerging understanding of pain and suffering

Another argument against expansive views of the prevalence of suffering is that our modern understanding of the biology of suffering gives us reason to doubt such views. That is, we have gained an increasingly refined understanding of the evolutionary, genetic, and neurobiological bases of pain and suffering, and the picture that emerges is that suffering is a complex phenomenon associated with specific genes and neural structures (as exemplified by the above-mentioned genetic conditions that knock out pain and/or suffering).

To be sure, the fact that suffering is associated with specific genes and neural structures in animals does not imply that suffering cannot be created in other ways in other systems. It does, however, suggest that suffering is unlikely to be found in simple systems that do not have remote analogues of these specific structures (since we otherwise should expect suffering to be associated with a much wider range of structures and processes, not such an intricate and narrowly delineated set).

By analogy, consider the experience of wanting to go to a Taylor Swift concert so as to share the event with your Instagram followers. Do we have reason to believe that fundamental particles such as electrons, or microorganisms such as bacteria, might have such experiences? To go a step further, do we have reason to be agnostic as to whether electrons or bacteria might have such experiences?

These questions may seem too silly to merit contemplation. After all, we know that having a conscious desire to go to a concert for the purpose of online sharing requires rather advanced cognitive abilities that, at least in our case, are associated with extremely complex structures in the brain — not to mention that it requires an understanding of a larger cultural context that is far removed from the everyday concerns of electrons and bacteria. But the question is why we would see the case of suffering as being so different.

Of course, one might object that this is a bad analogy, since the experience described above is far more narrowly specified than is suffering as a general class of experience. I would agree that the experience described above is far more specific and unusual, but I still think the basic point of the analogy holds, in that our understanding is that suffering likewise rests on rather complex and specific structures (when it occurs in animal brains) — we might just not intuitively appreciate how complex and distinctive these structures are in the case of suffering, as opposed to in the Swift experience.

It seems inconsistent to allow ourselves to apply our deeper understanding of the Swift experience to strongly downgrade our credence in electron- or bacteria-level Swift experiences, while not allowing our deeper understanding of pain and suffering to strongly downgrade our credence in electron- or bacteria-level pain and suffering, even if the latter downgrade should be comparatively weaker (given the lower level of specificity of this broader class of experiences).

Practical relevance

It is worth stressing that, in the context of our priorities, the question is not whether we can rule out suffering in simple systems like electrons or bacteria. Rather, the question is whether the all-things-considered probability and weight of such hypothetical suffering is sufficiently large for it to merit any meaningful priority relative to other forms of suffering.

For example, one may hold a lexical view according to which no amount of putative “micro-discomfort” that we might ascribe to electrons or bacteria can ever be collectively worse than a single instance of extreme suffering. Likewise, even if one does not hold a strictly lexical view in theory, one might still hold that the probability of suffering in simple systems is so low that, relative to the expected prevalence of other kinds of suffering, it is so strongly dominated as to merit practically no priority by comparison (cf. “Lexical priority to extreme suffering — in practice”).

After all, the risk of suffering in simple systems would not only have to be held up against the suffering of all currently existing animals on Earth, but also against the risk of worst-case outcomes that involve astronomical numbers of overtly tormented beings. In this broader perspective, it seems reasonable to believe that the risk of suffering in simple systems is massively dwarfed by the risk of such astronomical worst-case outcomes, partly because the latter risk seems considerably less speculative, and because it seems far more likely to involve the worst instances of suffering.

Relatedly, just as we should be open to considering the possibility of suffering in simple systems such as bacteria, it seems that we should also be open to the possibility that spending a lot of time contemplating this issue — and not least trying to raise concern for it — might be an enormous opportunity cost that will overall increase extreme suffering in the future (e.g. because it distracts people from more important issues, or because it pushes people toward dismissing suffering reducers as absurd or crazy).

To be clear, I am not saying that contemplating this issue in fact is such an opportunity cost. My point is simply that it is important not to treat highly speculative possibilities in a manner that is too one-sided, such that we make one speculative possibility disproportionately salient (e.g. there might be a lot of suffering in microorganisms or in fundamental physics), while neglecting to consider other speculative possibilities that may in some sense “balance out” the former (e.g. that prioritizing the risk of suffering in simple systems significantly increases extreme suffering).

In more general terms, it can be misleading to consider Pascalian wagers if we do not also consider their respective “counter-Pascalian” wagers. For example, what if believing in God actually overall increases the probability of you experiencing eternal suffering, such as by marginally increasing the probability that future people will create infinite universes that contain infinitely many versions of you that get tortured for life?

In this way, our view of Pascal’s wager may change drastically when we go beyond its original one-sided framing and consider a broader range of possibilities, and the same applies to Pascalian wagers relating to the purported suffering of simple entities like bacteria or electrons. When we consider a broader range of speculative hypotheses, it is hardly clear whether we should overall give more or less consideration to such simple entities than we currently do, at least when compared to how much consideration and priority we give to other forms of suffering.

Does digital or “traditional” sentience dominate in expectation?

My aim in this post is to critique two opposite positions that I think are both mistaken, or which at least tend to be endorsed with too much confidence.

The first position is that the vast majority of future sentient beings will, in expectation, be digital, meaning that they will be “implemented” in digital computers.

The second position is in some sense a rejection of the first one. Based on a skepticism of the possibility of digital sentience, this position holds that future sentience will not be artificial, but instead be “traditionally” biological — that is, most future sentient beings will, in expectation, be biological beings roughly as we know them today.

I think the main problem with this dichotomy of positions is that it leaves out a reasonable third option, which is that most future beings will be artificial but not necessarily digital.


Contents

  1. Reasons to doubt that digital sentience dominates in expectation
  2. Reasons to doubt that “traditional” biological sentience dominates in expectation
  3. Why does this matter?

Reasons to doubt that digital sentience dominates in expectation

One can roughly identify two classes of reasons to doubt that most future sentient beings will be digital.

First, there are object-level arguments against the possibility of digital sentience. For example, based on his physicalist view of consciousness, David Pearce argues that the discrete and disconnected bits of a digital computer cannot, if they remain discrete and disconnected, join together into a unified state of sentience. They can at most, Pearce argues, be “micro-experiential pixels”.

Second, regardless of whether one believes in the possibility of digital sentience, the future dominance of digital sentience can be doubted on the grounds that it is a fairly strong and specific claim. After all, even if digital sentience is perfectly possible, it by no means follows that future sentient beings will necessarily converge toward being digital.

In other words, the digital dominance position makes strong assumptions about the most prevalent forms of sentient computation in the future, and it seems that there is a fairly large space of possibilities that does not imply digital dominance, such as (a future predominance of) non-digital neuron-based computers, non-digital neuron-inspired computers, and various kinds of quantum computers that have yet to be invented.

When one takes these arguments into account, it at least seems quite uncertain whether digital sentience dominates in expectation, even if we grant that artificial sentience does.

Reasons to doubt that “traditional” biological sentience dominates in expectation

A reason to doubt that “traditional” sentience dominates is that, whatever one’s theory of sentience, it seems likely that sentience can be created artificially — i.e. in a way that we would deem artificial. (An example might be further developed and engineered versions of brain organoids.) Specifically, regardless of which physical processes or mechanisms we take to be critical to sentience, those processes or mechanisms can most likely be replicated in systems other than living biological animals as we know them.

If we combine this premise with an assumption of continued technological evolution (which likely holds true in the future scenarios that contain the largest numbers of sentient beings), it overall seems doubtful that the majority of future beings will, in expectation, be “traditional” biological organisms — especially when we consider the prospect of large futures that involve space colonization.

More broadly, we have reason to doubt the “traditional” biological dominance position for the same reason that we have reason to doubt the digital dominance position, namely that the position entails a rather strong and specific claim along the lines of: “this particular class of sentient being is most numerous in expectation”. And, as in the case of digital dominance, it seems that there are many plausible ways in which this could turn out to be wrong, such as due to neuron-inspired or other yet-to-be-invented artificial systems that could become both sentient and prevalent.

Why does this matter?

Whether artificial sentience dominates in expectation plausibly matters for our priorities (though it is unclear how much exactly, since some of our most robust strategies for reducing suffering are probably worth pursuing in roughly the same form regardless). Yet those who take artificial sentience seriously might adopt suboptimal priorities and communication strategies if they primarily focus on digital sentience in particular.

At the level of priorities, they might restrict their focus to an overly narrow set of potentially sentient systems, and perhaps neglect the great majority of future suffering as a result. At the level of communication, they might needlessly hamper their efforts to raise concern for artificial sentience by mostly framing the issue in terms of digital sentience. This framing might lead people who are skeptical of digital sentience to mistakenly dismiss the broader issue of artificial sentience.

Similar points apply to those who believe that “traditional” biological sentience dominates in expectation: they, too, might restrict their focus to an overly narrow set of systems, and thereby neglect to consider a wide range of scenarios that may intuitively seem like science fiction, yet which nevertheless deserve serious consideration on reflection (e.g. scenarios that involve a large-scale spread of suffering due to space colonization).

In summary, there are reasons to doubt both the digital dominance position and the “traditional” biological dominance position. Moreover, it seems that there is something to be gained by not using the narrow term “digital sentience” to refer to the broader category of “artificial sentience”, and by being clear about just how much broader this latter category is.

Distrusting salience: Keeping unseen urgencies in mind

The psychological appeal of salient events and risks can be a major hurdle to optimal altruistic priorities and impact. My aim in this post is to outline a few reasons to approach our intuitive fascination with salient events and risks with a fair bit of skepticism, and to actively focus on that which is important yet unseen, hiding in the shadows of the salient.


Contents

  1. General reasons for caution: Availability bias and related biases
  2. The news: A common driver of salience-related distortions
  3. The narrow urgency delusion
  4. Massive problems that always face us: Ongoing moral disasters and future risks
  5. Salience-driven distortions in efforts to reduce s-risks
  6. Reducing salience-driven distortions

General reasons for caution: Availability bias and related biases

The human mind is subject to various biases that involve an overemphasis on the salient, i.e. that which readily stands out and captures our attention.

In general terms, there is the availability bias, also known as the availability heuristic, namely the common tendency to base our beliefs and judgments on information that we can readily recall. For example, we tend to overestimate the frequency of events when examples of these events easily come to mind.

Closely related is what is known as the salience bias, which is the tendency to place disproportionate weight on salient features and events when making decisions. For instance, when deciding whether to buy a given product, the salience bias may lead us to give undue importance to a particularly salient feature of that product — e.g. some fancy packaging — while neglecting less salient yet perhaps more relevant features.

A similar bias is the recency bias: our tendency to give disproportionate weight to recent events in our belief-formation and decision-making. This bias is in some sense predicted by the availability bias, since recent events tend to be more readily available to our memory. Indeed, the availability bias and the recency bias are sometimes considered equivalent, even though it seems more accurate to view the recency bias as a consequence or a subset of the availability bias; after all, readily remembered information does not always pertain to recent events.

Finally, there is the phenomenon of belief digitization, which is the tendency to give undue weight to (what we consider) the single most plausible hypothesis in our inferences and decisions, even when other hypotheses also deserve significant weight. For example, if we are considering hypotheses A, B, and C, and we assign them the probabilities 50 percent, 30 percent, and 20 percent, respectively, belief digitization will push us toward simply accepting A as though it were true. In other words, belief digitization pushes us toward altogether discarding B and C, even though B and C collectively have the same probability as A. (See also related studies on Salience Theory and on the overestimation of salient causes and hypotheses in predictive reasoning.)
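To make the distortion concrete, here is a minimal sketch with made-up numbers (the payoffs below are purely illustrative assumptions, not anything from the studies cited): an action that looks attractive once we digitize our beliefs down to hypothesis A can have barely positive expected value once B and C are given their due weight.

```python
# Illustrative sketch: made-up probabilities and payoffs showing how
# "digitizing" a belief distribution can distort a decision.

probs = {"A": 0.5, "B": 0.3, "C": 0.2}

# Hypothetical payoffs of some action under each hypothesis:
# the action pays off if A is true, but backfires under B and C.
payoff = {"A": 10, "B": -8, "C": -12}

# Expected value using the full distribution:
ev_full = sum(probs[h] * payoff[h] for h in probs)  # ~0.2, barely positive

# "Digitized" evaluation: treat the single most plausible hypothesis
# as simply true, discarding B and C altogether.
most_plausible = max(probs, key=probs.get)  # "A"
ev_digitized = payoff[most_plausible]       # 10

print(ev_full, ev_digitized)
```

On these made-up numbers, the digitized evaluation makes the action look roughly fifty times better than the full distribution warrants; B and C jointly carry as much probability as A, and discarding them does real work.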

All of the biases mentioned above can be considered different instances of a broader cluster of availability/salience biases, and they each give us reason to be cautious of the influence that salient information has on our beliefs and our priorities.

The news: A common driver of salience-related distortions

One way in which our attention can become preoccupied with salient (though not necessarily crucial) information is through the news. Much has been written against spending a lot of time on the news, and the reasons against it are probably even stronger for those who are trying to spend their time and resources in ways that help sentient beings most effectively.

For even if we grant that there is substantial value in following the news, it seems plausible that the opportunity costs are generally too high, in terms of what one could instead spend one’s limited time learning about or advocating for. Moreover, there is a real risk that a preoccupation with the news has outright harmful effects overall, such as by gradually pulling one’s focus away from the most important problems and toward less important and less neglected problems. After all, the prevailing news criteria or news values decidedly do not reflect the problems that are most important from an impartial perspective concerned with the suffering of all sentient beings.

I believe the same issue exists in academia: a certain issue becomes fashionable, there are calls for abstracts, and there is a strong pull to write and talk about that given issue. And while it may indeed be important to talk and write about those topics for the purpose of getting ahead — or not falling behind — in academia, it seems more doubtful whether such topical talk is well-suited to making a difference in the world. In other words, the “news values” of academia are not necessarily much better than the news values of mainstream journalism.

The narrow urgency delusion

A salience-related pitfall that we can easily succumb to when following the news is what we may call the “narrow urgency delusion”. This is when the news covers some specific tragedy and we come to feel, at a visceral level, that this tragedy is the most urgent problem that is currently taking place. Such a perception is, in a very important sense, an illusion.

The reality is that tragedy on an unfathomable scale is always occurring, and the tragedies conveyed by the news are sadly but a tiny fraction of the horrors that are constantly taking place around us. Yet the tragedies that are always occurring, such as children who suffer and die from undernutrition and chickens who are boiled alive, are so common and so underreported that they all too readily fade from our moral perception. To our intuitions, these horrors seemingly register as mere baseline horror — as unsalient abstractions that carry little felt urgency — even though the horrors in question are every bit as urgent as the narrow sliver of salient horrors conveyed in the news (Vinding, 2020, sec. 7.6).

We should thus be clear that the delusion involved in the narrow urgency delusion is not the “urgency” part — there is indeed unspeakable horror and urgency involved in the tragedies reported by the news. The delusion rather lies in the “narrow” part; we find ourselves in a condition that contains extensive horror and torment, all of which merits compassion and concern.

So it is not that the salient victims are less important than what we intuitively feel, but rather that the countless victims whom we effectively overlook are far more important than what we (do not) feel.

Massive problems that always face us: Ongoing moral disasters and future risks

The following are some of the urgent problems that always face us, yet which are often less salient to us than the individual tragedies that are reported in the news:

These common and ever-present problems are, by definition, not news, which hints at the inherent ineffectiveness of news when it comes to giving us a clear picture of the reality we inhabit and the problems that confront us.

As the final entry on the list above suggests, the problems that face us are not limited to ongoing moral disasters. We also face risks of future atrocities, potentially involving horrors on an unprecedented scale. Such risks will plausibly tend to feel even less salient and less urgent than do the ongoing moral disasters we are facing, even though our influence on these future risks — and future suffering in general — could well be more consequential given the vast scope of the long-term future.

So while salience-driven biases may blind us to ongoing large-scale atrocities, they probably blind us even more to future suffering and risks of future atrocities.

Salience-driven distortions in efforts to reduce s-risks

There are many salience-related hurdles that may prevent us from giving significant priority to the reduction of future suffering. Yet even if we do grant a strong priority to the reduction of future suffering, including s-risks in particular, there are reasons to think that salience-driven distortions still pose a serious challenge in our prioritization efforts.

Our general availability bias gives us some reason to believe that we will overemphasize salient ideas and hypotheses in efforts to reduce future suffering. Yet perhaps more compelling are the studies on how we tend to greatly overestimate salient hypotheses when we engage in predictive and multi-stage reasoning in particular. (Multi-stage reasoning is when we make inferences in successive steps, such that the output of one step provides the input for the next one.)

After all, when we are trying to predict the main sources of future suffering, including specific scenarios in which s-risks materialize, we are very much engaging in predictive and multi-stage reasoning. Therefore, we should arguably expect our reasoning about future causes of suffering to be too narrow by default, with a tendency to give too much weight to a relatively small set of salient risks at the expense of a broader class of less salient (yet still significant) risks that we are prone to dismiss in our multi-stage inferences and predictions.
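The narrowing effect can be sketched with purely hypothetical numbers (the probabilities below are my illustrative assumptions): suppose that at each of three predictive steps, the outcome of concern can arise via a salient route or via a less salient one, and that we only ever carry the salient route forward.

```python
# Illustrative sketch with made-up numbers: tracking only the most
# salient branch at each stage of a multi-stage prediction can
# greatly understate the total probability of the outcome of concern.

p_salient = 0.6    # probability of the salient route at each stage
p_unsalient = 0.3  # probability of a less salient route at each stage

# Probability of the single most salient three-stage scenario:
p_salient_path = p_salient ** 3              # ~0.216

# Probability that some route (salient or not) obtains at every stage:
p_any_path = (p_salient + p_unsalient) ** 3  # ~0.729

print(p_salient_path, p_any_path)
```

On these numbers, the most salient scenario accounts for less than a third of the total probability, so a forecast that discards the unsalient branches at each step understates the overall risk by a factor of more than three.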

This effect can be further reinforced through other mechanisms. For example, if we have described and explored — or even just imagined — a certain class of risks in greater detail than other risks, then this alone may lead us to regard those more elaborately described risks as being more likely than less elaborately explored scenarios. Moreover, if we find ourselves in a group of people who focus disproportionately on a certain class of future scenarios, this may further increase the salience and perceived likelihood of these scenarios, compared to alternative scenarios that may be more salient in other groups and communities.

Reducing salience-driven distortions

The pitfalls mentioned above seem to suggest some concrete ways in which we might reduce salience-driven distortions in efforts to reduce future suffering.

First, they caution us against neglecting less salient hypotheses when engaging in predictive and multi-stage reasoning. Specifically, when thinking about future risks, we should be careful not to focus solely on what appears to be the single greatest risk while effectively neglecting all others. After all, even if the risk we regard as the single greatest risk indeed is the single greatest risk, it might still be fairly modest compared to the totality of future risks, and we might still do better by deliberately working to reduce a relatively broad class of risks.

Second, the tendency to judge scenarios to be more likely when we have thought about them in detail would seem to recommend that we avoid exploring future risks in starkly unbalanced ways. For instance, if we have explored one class of risks in elaborate detail while largely neglecting another, it seems worth trying to outline concrete scenarios that exemplify the more neglected class of risks, so as to correct any potentially unjustified disregard of their importance and likelihood.

Third, the possibility that certain ideas can become highly salient in part for sociological reasons may recommend a strategy of exchanging ideas with, and actively seeking critiques from, people who do not fully share the outlook that has come to prevail in one’s own group.

In general, it seems that we are likely to underestimate our empirical uncertainty (Vinding, 2020, sec. 9.1-9.2). The space of possible future outcomes is vast, and any specific risk that we may envision is but a tiny subset of the risks we are facing. Hence, our most salient ideas regarding future risks should ideally be held up against a big question mark that represents the many (currently) unsalient risks that confront us.

Put briefly, we need to cultivate a firm awareness of the limited reliability of salience, and a corresponding awareness of the immense importance of the unsalient. We need to make an active effort to keep unseen urgencies in mind.

Some pitfalls of utilitarianism

My aim in this post is to highlight and discuss what I consider to be some potential pitfalls of utilitarianism. These are not necessarily pitfalls that undermine utilitarianism at a theoretical level (although some of them might also pose a serious challenge at that level). As I see them, they are more pitfalls at the practical level, relating to how utilitarianism is sometimes talked about, thought about, and acted on in ways that may be suboptimal by the standards of utilitarianism itself.

I should note from the outset that this post is not inspired by recent events involving dishonest and ruinous behavior by utilitarian actors; I have been planning to write this post for a long time. But recent events arguably serve to highlight the importance of some of the points I raise below.


Contents

  1. Restrictive formalisms and “formalism first”
  2. Risky and harmful decision procedures
    1. Allowing speculative expected value calculations to determine our actions
    2. Underestimating the importance of emotions, virtues, and other traits of moral actors
    3. Uncertainty-induced moral permissiveness
    4. Uncertainty-induced lack of moral drive
    5. A more plausible approach
  3. The link between utilitarian judgments and Dark Triad traits: A cause for reflection
  4. Acknowledgments

Restrictive formalisms and “formalism first”

A potential pitfall of utilitarianism, in terms of how it is commonly approached, is that it can make us quick to embrace certain formalisms and conclusions, as though we have to accept them on pain of mathematical inconsistency.

Consider the following example: Alice is a utilitarian who thinks that a certain mildly enjoyable experience, x, has positive value. On Alice’s view, it is clear that no number of instances of x would be worse than a state of extreme suffering, since a state of extreme suffering and a mildly enjoyable experience are completely different categories of experience. Over time, Alice reads about different views of wellbeing and axiology, and she eventually changes her position such that she finds it more plausible that no experiential states are above a neutral state, and that no states have intrinsic positive value (i.e. she comes to embrace a minimalist axiology).

Alice thus no longer considers it plausible to assign positive value to experience x, and instead now assigns mildly negative value to the experience (e.g. because the experience is not entirely flawless; it contains some bothersome disturbances). Having changed her mind about the value of experience x, Alice now feels mathematically compelled to say that sufficiently many instances of that experience are worse than any experience of extreme suffering, even though she finds this implausible on its face — she still thinks state x and states of extreme suffering belong to wholly different categories of experience.
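Alice’s predicament can be sketched with entirely made-up numbers (the disvalues, counts, and the lexical comparison below are my illustrative assumptions, not anything from the post): under simple additive aggregation, the conclusion she resists follows mechanically, whereas a lexical ordering of the two categories remains perfectly coherent.

```python
# Illustrative sketch: made-up disvalues contrasting additive
# aggregation with a lexical (category-first) comparison.

mild_disvalue = -0.001         # hypothetical disvalue of one instance of x
extreme_disvalue = -1_000_000  # hypothetical disvalue of extreme suffering
n = 2_000_000_000              # number of instances of x

# Additive aggregation: enough instances of x must eventually sum to
# something worse than any fixed amount of extreme suffering.
additive_worse = n * mild_disvalue < extreme_disvalue  # True

# Lexical comparison: extreme suffering is compared first, and sums of
# mild disvalue only break ties between equal amounts of it.
def lexically_worse(a_extreme, a_mild_sum, b_extreme, b_mild_sum):
    if a_extreme != b_extreme:
        return a_extreme > b_extreme  # more extreme suffering is worse
    return a_mild_sum < b_mild_sum    # otherwise, more mild disvalue is worse

# n instances of x (no extreme suffering) vs. one episode of extreme suffering:
mild_side_worse = lexically_worse(0, n * mild_disvalue, 1, extreme_disvalue)  # False

print(additive_worse, mild_side_worse)
```

The point is not that either formalization is correct, but that both are coherent: Alice is only “mathematically compelled” if the additive scheme is presupposed from the start.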

To be clear, the point I am trying to make here is not that the final conclusion that Alice draws is implausible. My point is rather that certain prevalent ways of formalizing value can make people feel needlessly compelled to draw particular conclusions, as though there are no coherent alternatives, when in fact there are. More generally, there may be a tendency to “put formalism first”, as it were, rather than to consider substantive plausibility first, and to then identify a coherent formalism that fits our views of substantive plausibility.

Note that the pitfall I am gesturing at here is not one that is strictly implied by utilitarianism, as one can be a utilitarian yet still reject standard formalizations of utilitarianism. But being bound to a restrictive formalization scheme nevertheless seems common, in my experience, among those who endorse or sympathize with utilitarianism.

Risky and harmful decision procedures

A standard distinction in consequentialist moral theory is that between ‘consequentialist criteria of rightness’ and ‘consequentialist decision procedures’. One might endorse a consequentialist criterion of rightness — meaning that consequences determine whether a given action is right or wrong — without necessarily endorsing consequentialist decision procedures, i.e. decision procedures in which one decides how to act based on case-by-case calculations of the expected outcomes.

Yet while this distinction is often emphasized, utilitarianism still seems prone to inspire suboptimal decision procedures, even by its own standards (as a criterion of rightness). The following are a few of the ways in which utilitarianism can inspire suboptimal decision procedures, attitudes, and actions by its own standards.

Allowing speculative expected value calculations to determine our actions

A particular pitfall is to let our actions be strongly determined by speculative expected value calculations. There are various reasons why this may be suboptimal by utilitarian standards, but an important one is simply that the probabilities that go into such calculations are likely to be inaccurate. If our probability estimates on a given matter are highly uncertain and likely to change a lot as we learn more, there is a large risk that it is suboptimal to make any strong bets on our current estimates.

The robustness of a given probability estimate is thus a key factor to consider when deciding whether to act on that estimate, yet it can be easy to neglect this factor in real-world decisions.

Underestimating the importance of emotions, virtues, and other traits of moral actors

A related pitfall is to underestimate the significance of emotions, attitudes, and virtues. Specifically, if we place a strong emphasis on the consequences of actions, we might in turn be inclined to underemphasize the traits and dispositions of the moral actors themselves. Yet the traits and dispositions of moral actors are often critical to emphasize and to actively develop if we are to create better outcomes. Our cerebral faculties and our intuitive attitudinal faculties can both be seen as tools that enable us to navigate the world, and the latter are often more helpful for creating desired outcomes than the former (cf. Gigerenzer, 2001).

A specific context in which I and others have tried to argue for the importance of underlying attitudes and traits, in contrast to mere cerebral beliefs, is when it comes to animal ethics. In particular, engaging in practices that are transparently harmful and exploitative toward non-human beings is harmful not only in terms of how it directly contributes to those specific exploitative practices, but also in terms of how it shapes our emotions, attitudes, and traits — and thus ultimately our behavior.

More generally, to emphasize outcomes while placing relatively little emphasis on the traits of humans, as moral actors, seems to overlook the largely habitual and disposition-based nature of human behavior. After all, our emotions and attitudes not only play important roles in our individual motivations and actions, but also in the social incentives that influence the behavior of others (cf. Haidt, 2001).

In short, if one embraces a consequentialist criterion of rightness, it seems that there are good reasons to cultivate the temperament of a virtue ethicist and the felt attitudes of a non-consequentialist who finds certain actions unacceptable in practically all situations.

Uncertainty-induced moral permissiveness

Another pitfall is to practically surrender one’s capacity for moral judgment due to uncertainty about long-term outcomes. In its most extreme manifestations, this might amount to declaring that we do not know whether people who committed large-scale atrocities in the past acted wrongly, since we do not know the ultimate consequences of those actions. But perhaps a more typical manifestation is to fail to judge, let alone oppose, ongoing harmful actions and intolerant values (e.g. clear cases of discrimination), again with reference to uncertainty about the long-term consequences of those actions and values.

This pitfall relates to the point about dispositions and attitudes made above, in that the disposition to be willing to judge and oppose harmful actions and views plausibly has better overall consequences than a disposition to be meek and unwilling to take a strong stance against such things.

After all, while there is significant uncertainty about the long-term future, one can still make reasonable inferences about which broad directions we should ideally steer our civilization toward over the long term (e.g. toward showing concern for suffering in prudent yet morally serious ways). Utilitarians have reason to help steer the future in those directions, and to develop traits and attitudes that are commensurate with such directional changes. (See also “Radical uncertainty about outcomes need not imply (similarly) radical uncertainty about strategies”.)

Uncertainty-induced lack of moral drive

A related pitfall is uncertainty-induced lack of moral drive, whereby empirical uncertainty serves as a stumbling block to dedicated efforts to help others. This is probably also starkly suboptimal, for reasons similar to those outlined above: all things considered, it is likely ideal to develop a burning drive to help other sentient beings, despite uncertainty about long-term outcomes.

Perhaps the main difficulty in this respect is to know which particular project or aim is most important to work on. Yet a potential remedy to this problem (here conveyed in a short and crude fashion) might be to first make a dedicated effort toward the concrete goal of figuring out which projects or aims seem most worth pursuing — i.e. a broad and systematic search, informed by copious reading. And when one has eventually identified an aim or project that seems promising, it might be helpful to somewhat relax the “doubting modules” of our minds and to stick to that project for a while, pursuing the chosen aim with dedication (unless something clearly better comes up).

A more plausible approach

The previous sections have mostly pointed to suboptimal ways to approach utilitarian decision procedures. In this section, I want to briefly outline what I would consider a more defensible way to approach decision-making from a utilitarian perspective (whether one is a pure utilitarian or whether one merely includes a utilitarian component in one’s moral view).

I think two key facts must inform any plausible approach to utilitarian decision procedures:

  1. We have massive empirical uncertainty.
  2. We humans have a strong proclivity to deceive ourselves in self-serving ways.

These two observations carry significant implications. In short, they suggest that we should generally approach moral decisions with considerable humility, and with a strong sense of skepticism toward conclusions that are conveniently self-serving or low on integrity.

Given our massive uncertainty and our endlessly rationalizing minds, the ideal approach to utilitarian decision procedures is probably one that has a rather large distance between the initial question of “how to act” and the final decision to pursue a given action (at least when one is trying to calculate one’s way to an optimal decision). And this distance should probably be especially large if the decision that at first seems most recommendable is one that other moral views, along with common-sense intuitions, would deem profoundly wrong.

In other words, it seems that utilitarian decision procedures are best approached by assigning a fairly high prior to the judgments of other ethical views and common-sense moral intuitions (in terms of how plausible those judgments are from a utilitarian perspective), at least when these other views and intuitions converge strongly on a given conclusion. And it seems warranted to then be quite cautious and slow to update away from that prior, in part because of our massive uncertainty and our self-deceived minds. This is not to say that one could not end up with significant divergences relative to other widely endorsed moral views, but merely that such strong divergences probably need to be supported by a level of evidence that exceeds a rather high bar.

Likewise, it seems worth approaching utilitarian decision procedures with a prior that strongly favors actions of high integrity, not least because we should expect our rationalizing minds to be heavily biased toward low integrity — especially when nobody is looking.

Put briefly, it seems that a more defensible approach to utilitarian decision procedures would be animated by significant humility and would embody a strong inclination toward key virtues of integrity, kindness, honesty, etc., partly due to our strong tendency to excuse and rationalize deficiencies in these regards.

There are many studies that find a modest but significant association between proto-utilitarian judgments and the personality traits of psychopathy (impaired empathy) and Machiavellianism (manipulativeness and deceitfulness). (See Bartels & Pizarro, 2011; Koenigs et al., 2012; Gao & Tang, 2013; Djeriouat & Trémolière, 2014; Amiri & Behnezhad, 2017; Balash & Falkenbach, 2018; Karandikar et al., 2019; Halm & Möhring, 2019; Dinić et al., 2020; Bolelli, 2021; Luke & Gawronski, 2021; Schönegger, 2022.)

Specifically, the aspect of utilitarian judgment that seems most associated with psychopathy is the willingness to commit harm for the sake of the greater good, whereas endorsement of impartial beneficence — a core feature of utilitarianism and many other moral views — is associated with empathic concern, and is thus negatively associated with psychopathy (Kahane et al., 2018; Paruzel-Czachura & Farny, 2022). Another study likewise found that the connection between psychopathy and utilitarian moral judgments is in part explained by a reduced aversion to carrying out harmful acts (Patil, 2015).

Of course, whether a particular moral view, or a given feature of a moral view, is associated with certain undesirable personality traits by no means refutes that moral view. But the findings reviewed above might still be a cause for self-reflection among those of us who endorse or sympathize with some form of utilitarianism.

For example, maybe utilitarians are generally inclined to have fewer moral inhibitions compared to most people — e.g. because utilitarian reasoning might override intuitive judgments and norms, or because utilitarians are (perhaps) above average in trait Machiavellianism, in which case they might have fewer strongly felt moral inhibitions to overcome in the first place. And if utilitarians do tend to have fewer or weaker moral restraints of certain kinds, this could in turn dispose them to be less ethical in some respects, also by their own standards.

To be clear, this is all somewhat speculative. Yet, at the same time, these speculations are not wholly unmotivated. In terms of potential upshots, it seems that a utilitarian proneness to reduced moral restraint, if real, would give utilitarian actors additional reason to be skeptical of inclinations to disregard common moral inhibitions against harmful acts and low-integrity behavior. In short, it would give utilitarians even more reason to err on the side of integrity.

Acknowledgments

For helpful comments, I am grateful to Tobias Baumann, Simon Knutsson, and Winston Oswald-Drummond.

Reasons to include insects in animal advocacy

I have seen some people claim that animal activists should primarily be concerned with certain groups of numerous vertebrates, such as chickens and fish, whereas we should not be concerned much, if at all, with insects and other small invertebrates. (See e.g. here.) I think there are indeed good arguments in favor of emphasizing chickens and fish in animal advocacy, yet I think those same arguments tend to support a strong emphasis on helping insects as well. My aim in this post is to argue that we have compelling reasons to include insects and other small invertebrates in animal advocacy.


Contents

  1. A simplistic sequence argument: Smaller beings in increasingly large numbers
    1. The sequence
    2. Why stop at chickens or fish?
  2. Invertebrate vs. vertebrate nervous systems
    1. Phylogenetic distance
    2. Behavioral and neurological evidence
    3. Nematodes and extended sequences
  3. Objection based on appalling treatment
  4. Potential biases
    1. Inconvenience bias
    2. Smallness bias
    3. Disgust and fear reflexes
    4. Momentum/status quo bias
  5. Other reasons to focus more on small invertebrates
    1. Neglectedness
    2. Opening people’s eyes to the extent of suffering and harmful decisions
    3. Risks of spreading invertebrates to space: Beings at uniquely high risk of suffering due to human space expansion
    4. Qualifications and counter-considerations
  6. My own view on strategy in brief
  7. Final clarification: Numbers-based arguments need not assume that large amounts of mild suffering can be worse than extreme suffering
  8. Acknowledgments

A simplistic sequence argument: Smaller beings in increasingly large numbers

As a preliminary motivation for the discussion, it may be helpful to consider the sequence below.

I should first of all clarify what I am not claiming in light of the following sequence. I am not making any claims about the moral relevance of the neuron counts of individual beings or groups of beings (that is a complicated issue that defies simple answers). Nor am I claiming that we should focus mostly on helping beings such as land arthropods and nematodes. The claim I want to advance is a much weaker one, namely that, in light of the sequence below, it is hardly obvious that we should focus mostly on helping chickens or fish.

The sequence

At any given time, there are roughly:

  • 780 million farmed pigs, with an estimated average neuron count of 2.2 billion. Total neuron count: ~1.7 * 10^18.
  • 33 billion farmed chickens, with an estimated average neuron count of 200 million. Total neuron count: ~6.6 * 10^18.
  • 10^15 fish (the vast majority of whom are wild fish), with an estimated average neuron count of 1 million (this number lies between the estimated neuron counts of a larval zebrafish and an adult zebrafish; note that there is great uncertainty in all of these estimates). Total neuron count: ~10^21. It is estimated that humanity kills more than a trillion fish a year, and if we assume that these fish likewise have an average neuron count of around 1 million, the total neuron count of these beings is ~10^18.
  • 10^19 land arthropods, with an estimated average neuron count of 15,000 (some insects have brains with more than a million neurons, but most arthropods appear to have considerably fewer). Total neuron count: ~1.5 * 10^23. If humanity kills roughly the same proportion of land arthropods as the proportion of fish that we kill (e.g. through insecticides and insect farming), then the total neuron count of the land arthropods we kill is ~10^20.
  • 10^21 nematodes, with an estimated average neuron count of 300 neurons. Total neuron count: ~3 * 10^23.
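The rough totals in the sequence above follow directly from multiplying each population estimate by its average neuron count. As a quick sanity check of the quoted figures (using only the estimates already listed, not new data):

```python
# Sanity check of the approximate totals in the sequence above.
# Each entry: (population estimate, estimated average neuron count),
# as quoted in the text.
estimates = {
    "farmed pigs": (780e6, 2.2e9),
    "farmed chickens": (33e9, 200e6),
    "fish": (1e15, 1e6),
    "land arthropods": (1e19, 15e3),
    "nematodes": (1e21, 300),
}

for group, (population, avg_neurons) in estimates.items():
    total = population * avg_neurons
    print(f"{group}: ~{total:.1e} neurons in total")
```

Running this reproduces the totals above (~1.7 * 10^18 for pigs up to ~3 * 10^23 for nematodes), so any disagreement with the sequence would trace back to the population or per-individual estimates rather than the arithmetic.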

Why stop at chickens or fish?

The main argument that supports a strong emphasis on chickens or fish is presumably their large numbers (as well as their poor treatment, which I discuss below). Yet the numbers-based argument that supports a strong emphasis on chickens and fish could potentially also support a strong emphasis on small invertebrates such as insects. It is thus not clear why we should place a strict boundary right below chickens or fish beyond which this numbers-based argument no longer applies. After all, each step of this sequence entails a similar pattern in terms of crude numbers: we have individual beings who on average have 1-3 orders of magnitude fewer neurons yet who are 1-5 orders of magnitude more numerous than the beings in the previous step.

Invertebrate vs. vertebrate nervous systems

A defense that one could give in favor of placing a relatively strict boundary below fish is that we here go from vertebrates to invertebrates, and we can be significantly less sure that invertebrates suffer compared to vertebrates.

Perhaps this defense has some force. But how much? Our confidence that the beings in this sequence have the capacity to suffer should arguably decrease at least somewhat in each successive step, yet should the decrease in confidence from fish to insects really be that much bigger than in the previous steps?

Phylogenetic distance

Based on the knowledge that we ourselves can suffer, one might think that a group of beings’ phylogenetic distance from us (i.e. how distantly related they are to us) can provide a tentative prior as to whether those beings can suffer, and regarding how big a jump in confidence we should make for different kinds of beings. Yet phylogenetic distance per se arguably does not support a substantially greater decrease in confidence in the step from fish to insects compared to the previous steps in the sequence above. 

The last common ancestor of humans and insects appears to have lived around 575 million years ago, whereas the last common ancestor of humans and fish lived around 400-485 million years ago (depending on the species of fish; around 420-460 million years for the most numerous fish). By comparison, the last common ancestor of humans and chickens lived around 300 million years ago, while the last common ancestor of humans and pigs lived around 100-125 million years ago.

Thus, when we look at different beings’ phylogenetic distance from humans in these temporal terms, it does not seem that the step between fish and insects (in the sequence above) is much larger than the step between fish and chickens or between chickens and pigs. In each case, the increase in the “distance” appears to be something like 100-200 million years.
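The claim that each step adds roughly 100-200 million years of "distance" can be checked against the figures just quoted. The sketch below uses midpoints of the ranges given above (the exact midpoints are my own simplification for illustration):

```python
# Approximate last common ancestor with humans, in millions of years ago,
# using midpoints of the ranges quoted in the text (a simplification).
lca_mya = {"pigs": 112, "chickens": 300, "fish": 440, "insects": 575}

groups = ["pigs", "chickens", "fish", "insects"]
for nearer, farther in zip(groups, groups[1:]):
    step = lca_mya[farther] - lca_mya[nearer]
    print(f"{nearer} -> {farther}: +{step} million years")
```

Each step lands in the 100-200 million year range, which is the point being made: the jump from fish to insects is not dramatically larger, in these terms, than the earlier jumps.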

Behavioral and neurological evidence

Of course, “phylogenetic distance from humans” does not represent strong evidence as to whether a group of beings has the capacity to suffer. After all, humans are more closely related to starfish (~100 neurons) than to octopuses (~500 million neurons), and we have much stronger reasons to think that the latter can suffer, based on behavioral and neurological evidence (cf. the Cambridge Declaration on Consciousness).

Does such behavioral and neurological evidence support a uniquely sharp drop in confidence regarding insect sentience compared to fish sentience? Arguably not, as there is mounting behavioral and neuroscientific evidence of pain in (small) invertebrates. Additionally, there are various commonalities in the respective structures and developments of arthropod and vertebrate brains.

In light of this evidence, it seems that a sharp drop in confidence regarding pain in insects (versus pain in fish) requires a justification.

Nematodes and extended sequences

I believe that a stronger decrease in confidence is warranted when comparing arthropods and nematodes, for a variety of reasons: the nematode nervous system consists primarily of a so-called nerve ring, which is quite distinct from the brains of arthropods, and unlike the neurons of arthropods (and other animals), nematode neurons do not have action potentials or orthologs of the sodium channels (e.g. Nav1 and Nav2) that appear to play critical roles in pain signaling in other animals.

However, the evidence of pain in nematodes should not be understated either. The probability of pain in nematodes still seems non-negligible, and it arguably justifies substantial concern for (the risk of) nematode pain, even if it does not warrant as strong a concern and priority as does the suffering of chickens, fish, and arthropods.

This discussion also hints at why the sequence argument above need not imply that we should primarily focus on risks of suffering in bacteria or atoms: one may reasonably hold that, in such extended sequences, the probability of suffering decreases at a greater rate than the number of purported sufferers increases.

Objection based on appalling treatment

Another reason one could give in favor of focusing on chickens and fish is that they are treated in particularly appalling ways, e.g. they are often crammed in extremely small spaces and killed in horrific ways. I agree that humanity’s abhorrent treatment of chickens and fish is a strong additional reason to prioritize helping them. Yet it seems that this same argument also favors a focus on insects.

After all, humanity poisons vast numbers of insects with insecticides that may cause intensely painful deaths, and in various insect farming practices — which are sadly growing — insects are commonly boiled, fried, or roasted alive. These practices seem no less cruel and appalling than the ways in which we treat and kill chickens and fish.

Potential biases

There are many reasons to expect that we are biased against giving adequate moral consideration to small invertebrates such as insects (in addition to our general speciesist bias). The four plausible biases outlined below are by no means an exhaustive list.

Inconvenience bias

It is highly inconvenient if insects can feel pain, as it would imply that 1) we should be concerned about far more beings, which greatly complicates our ethical and strategic considerations (compared to a focus on vertebrates alone); 2) the extent of pain and suffering in the world is far greater than we would otherwise have thought, which may be a painful conclusion to accept; and 3) we should take far greater care not to harm insects in our everyday lives. All of these inconveniences likely motivate us to conclude that insects are not sentient, or that they are not that important in the bigger picture.

Smallness bias

Insects tend to be rather small, even compared to fish, which might make us reluctant to grant them moral consideration. In other words, our intuitions plausibly display a general sizeist bias. As a case in point, ants have more than twice as many neurons as lobsters, and there does not seem to be any clear reason to think that ants are less able to feel pain than are lobsters. Yet ants are obviously much smaller than lobsters, which may explain why people seem to show considerably more concern for lobsters than for ants, and why the number of people who believe that lobsters can feel pain (more than 80 percent in a UK survey) is significantly larger than the number of people who believe that ants can feel pain (around 56 percent). Of course, this pattern may also be partially explained by the inconvenience bias, since the acceptance of pain in lobsters seems less inconvenient than does the acceptance of pain in ants; but size likely still plays a significant role. (See also Vinding, 2015, “A Short Note on Insects”.)

Disgust and fear reflexes

It seems that many people have strong disgust reactions to (at least many) small invertebrates, such as cockroaches, maggots, and spiders. Some people may also feel fear toward these animals, or at least feel that they are a nuisance. Gut reactions of this kind may well influence our moral evaluations of small invertebrates in general, even though they ideally should not.

Momentum/status quo bias

The animal movement has not historically focused on invertebrates, and hence there is little momentum in favor of focusing on their plight. That is, our status quo bias seems to favor a focus on helping the vertebrates whom the animal movement has traditionally emphasized. To be sure, status quo bias also works against concern for fish and chickens to some degree (which is worth controlling for as well), yet chickens and fish have still received considerably more attention from the animal movement, and hence status quo bias likely works against concern for insects to an even greater extent.

These biases should give us pause when we are tempted to reflexively dismiss the suffering of small invertebrates.

Other reasons to focus more on small invertebrates

In addition to the large number of arthropods and the evidence for arthropod pain, what other reasons might support a greater focus on small invertebrates?

Neglectedness

An obvious reason is the neglect of these beings. As hinted in the previous section, a focus on helping small invertebrates has little historical momentum, and it is still extremely neglected in the broader animal movement today. This seems to me a fairly strong reason to focus more on invertebrates on the margin, or at the very least to firmly include invertebrates in one’s advocacy.

Opening people’s eyes to the extent of suffering and harmful decisions

Another, perhaps less obvious reason is that concern for smaller beings such as insects might help reduce risks of astronomical suffering. This claim should immediately raise some concerns about suspicious convergence, and as I have argued elsewhere, there is indeed a real risk that expanding the moral circle could increase rather than reduce future suffering. Partly for this reason, it might be better to promote a deeper concern for suffering than to promote wider moral circles (see also Vinding, 2020, ch. 12).

Yet that being said, I also think there is a sense in which wider moral circles can help promote a deeper concern for suffering, and not least give people a more realistic picture of the extent of suffering in the world. Simply put, a moral outlook that includes other vertebrates besides humans will see far more severe suffering and struggle in the world, and a perspective that also includes invertebrates will see even more suffering still. Indeed, not only does such an outlook open one’s eyes to more existing suffering, but it may also open one’s eyes (more fully) to humanity’s capacity to ignore suffering and to make decisions that actively increase it, even today.

Risks of spreading invertebrates to space: Beings at uniquely high risk of suffering due to human space expansion

Another way in which greater concern for invertebrate suffering might reduce risks of astronomical suffering is that small invertebrates seem to be among the animals who are most likely to be sent into space on a large scale in the future (e.g. because they may survive better in extreme environments). Indeed, some invertebrates — including fruit flies, crickets, and wasps — have already been sent into space, and some tardigrades were even sent to the moon (though the spacecraft crashed and probably none survived). Hence, the risk of spreading animals to space plausibly gives us additional reason to include insects in animal advocacy.

Qualifications and counter-considerations

To be clear, the considerations reviewed above merely push toward increasing the emphasis that we place on small beings such as insects — they are not necessarily decisive reasons to give primary focus to those beings. In particular, these arguments do not make a case for focusing on helping insects over, say, new kinds of beings who might be created in the future in even larger numbers.

It is also worth noting that there may be countervailing reasons not to emphasize insects more. One is that a greater emphasis on insects could risk turning people away from the plight of non-human animals and the horror of suffering more broadly, since many people might find insect suffering difficult to relate to if it constitutes the main focus at a practical level. This may be a reason to favor a greater focus on the suffering of larger and (for most people) more relatable animals.

I think the considerations on both sides need to be taken into account, including considerations about future beings who may become even more numerous and more neglected than insects. The upshot, to my mind, is that while focusing primarily on helping insects is probably not the best way to reduce suffering (for most of us), it still seems likely that 1) promoting greater concern for insects, as well as 2) promoting concrete policies that help insects, both constitute a significant part of the optimal portfolio of aims to push for.

My own view on strategy in brief

While questions about which beings seem most worth helping (on the margin) can be highly relevant for many of our decisions, there are also many strategic decisions that do not depend critically on how we answer these questions.

Indeed, my own view on strategies for reducing animal suffering is that we generally do best by pursuing robust and broad strategies that help many beings simultaneously, without focusing too narrowly on any single group of beings. (Though as hinted above, I think there are many situations where it makes sense to focus on interventions that help specific groups of beings.)

This is one of the reasons why I tend to favor an antispeciesist approach to animal advocacy, with a particular emphasis on the importance of suffering. Such an approach is still compatible with highlighting the scale and neglectedness of the suffering of chickens, fish, and insects, as well as the scale and neglectedness of wild-animal suffering. That is, it can be a general approach that is thoroughly “scope-informed” about the realities on the ground.

And such a comprehensive approach seems further supported when we consider risks of astronomical suffering (despite the potential drawbacks alluded to earlier). In particular, when trying to help other animals today, it is worth asking how our efforts might be able to help future beings as well, since failing to do so could be a lost opportunity to spare large numbers of beings from suffering. (For elaboration, see “How the animal movement could do even more good” and Vinding, 2022, sec. 10.8-10.9.)

Final clarification: Numbers-based arguments need not assume that large amounts of mild suffering can be worse than extreme suffering

An objection against numbers-based arguments for focusing more on insects is that small pains, or a high probability of small pains, cannot be aggregated to be worse than extreme suffering.

I agree with the view that small pains do not add up to be worse than extreme suffering, yet I think it is mistaken to think that this view undermines any numbers-based argument for emphasizing insects more in animal advocacy. The reason, in short, is that we should also assign some non-negligible probability to the possibility that insects experience extreme suffering (e.g. in light of the evidence for pain in insects cited above). And this probability, combined with the very large number of insects, implies that there are many instances of extreme suffering occurring among insects in expectation. After all, the vast number of insects should lead us to believe that there are many beings who have experiences at the (expected) tail-end of the very worst experiences that insects can have.
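This expected-value point can be made concrete with a short sketch. The numbers below are deliberately hypothetical illustrations (the probability in particular is an arbitrary placeholder, not an estimate from the text):

```python
# Illustration with hypothetical numbers: the expected number of insects
# in extreme suffering at a given time is the probability that a given
# insect is in a state of extreme suffering times the insect population.
n_insects = 1e19   # rough land-arthropod count from the sequence above
p_extreme = 1e-9   # hypothetical, deliberately tiny probability

expected_cases = p_extreme * n_insects
print(f"Expected instances of extreme suffering: {expected_cases:.0e}")
```

Even a one-in-a-billion probability per insect yields around 10^10 expected instances, which is the sense in which a very large population can imply many instances of extreme suffering in expectation without any aggregation of mild pains.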

As a concluding thought experiment that may challenge comfortable notions regarding the impossibility of intense pain among insects, imagine that you are given the choice between A) living as a chicken inside a tiny battery cage for a full day, or B) being continually born and reborn as an insect who has the experience of being burned or crushed alive, for a total of a million days (for concreteness, you may imagine that you will be reborn as a butterfly like the one pictured at the top of this post).

If we were really given this choice, I doubt that we would consider it an easy choice in favor of B. I doubt that we would dismiss the seriousness of the worst insect suffering.

Acknowledgments

For their helpful comments, I am grateful to Tobias Baumann, Simon Knutsson, and Winston Oswald-Drummond.

The dismal dismissal of suffering-focused views

Ethical views that give a foremost priority to the reduction of suffering are often dismissed out of hand. More than that, it is quite common to see such views discussed in highly uncharitable ways, and to even see them described with pejorative terms.

My aim in this post is to call attention to this phenomenon, as I believe it can distort public discourse and individual thinking about the issue. That is, if certain influential people consistently dismiss certain views without proper argumentation, and in some cases even use disparaging terms to describe such views, then this is likely to bias people’s evaluations of these views. After all, most people will likely feel some social pressure not to endorse views that their intellectual peers call “crazy” or “monstrously toxic”. (See also what Simon Knutsson writes about social mechanisms that may suppress talk about, and endorsements of, suffering-focused views.)

Many of the examples I present below are not necessarily that significant on their own, but I think the general pattern that I describe is quite problematic. Some of the examples involve derogatory descriptions, while others involve strawman arguments and uncharitable rejections of suffering-focused views that fail to engage with the most basic arguments in favor of such views.

My overall recommendation is simply to meet suffering-focused views with charitable arguments rather than with strawman argumentation or insults — i.e. to live up to the standards that are commonly accepted in other realms of intellectual discourse.


Contents

  1. “Crazy” and “transparently silly” views
  2. Lazari-Radek and Singer’s cursory rejection
  3. “Arguably too nihilistic and divorced from humane values to be worth taking seriously”
  4. “Anti-natalism is neurotic self-hatred”
  5. More examples
  6. Conclusion

“Crazy” and “transparently silly” views

In his essay “Why I’m Not a Negative Utilitarian” (2013), Toby Ord writes that “you would have to be crazy” to choose a world with beings who experience unproblematic states over a world with beings who experience pure happiness (strict negative utilitarianism would be indifferent between the two, and according to some versions of negative utilitarianism, unproblematic mental states and pure happiness are the same thing, cf. Sherman, 2017; Knutsson, 2022).

Ord also writes that the view that happiness does not contribute to a person’s wellbeing independently of its effects on reducing problematic states is a “crazy view”, without engaging with any of the arguments that have been made in favor of the class of views that he is thereby dismissing — i.e. views according to which wellbeing consists in the absence of problematic states or frustrated desires (see e.g. Schopenhauer, 1819; 1851; Fehige, 1998; O’Keefe, 2009, ch. 12).

These may not seem like particularly problematic claims, yet I believe that Ord would consider it poor form if similar claims were made about his preferred view — for example, if someone claimed that “you would have to be crazy to choose to create arbitrarily large amounts of extreme suffering in order to create a ‘sufficient’ amount of pleasure” (cf. the Very Repugnant Conclusion; Creating Hell to Please the Blissful; and Intense Bliss with Hellish Cessation). 

Similarly, Rob Bensinger writes that negative utilitarianism is “transparently false/silly”. Bensinger provides a brief justification for this claim, one that I and others find unconvincing, and it is in any case not a justification that warrants calling negative utilitarianism “transparently false/silly”.

Lazari-Radek and Singer’s cursory rejection

In their book The Point of View of the Universe, Lazari-Radek and Singer seek to defend the classical utilitarian view of Henry Sidgwick. It would be natural, in this context, to provide an elaborate discussion of the moral symmetry between happiness and suffering that is entailed by classical utilitarianism — after all, such a moral symmetry has been rejected by various philosophers in a variety of ways, and it is arguably one of the most controversial features of classical utilitarianism (cf. Mayerfeld, 1996, p. 335).

Yet Lazari-Radek and Singer barely broach the issue at all. The only thing that comes close is a single page’s worth of commentary on the views of David Benatar, which unfortunately amounts to a misrepresentation of Benatar’s views. Lazari-Radek and Singer claim that Benatar argues that “to have a desire for something is to be in a negative state” (p. 362). To my knowledge, this is not a claim that Benatar defends, and the claim is at any rate not critical to the main procreative asymmetry that he argues for (Benatar, 2006, ch. 2).

Lazari-Radek and Singer briefly rebut the claim about desires that they (I suspect wrongly) attribute to Benatar, and thereby fail to address Benatar’s core views in any meaningful way. They then proceed to write the following, which as far as I can tell is the closest they get to a defense of a moral symmetry between happiness and suffering in their entire book: “for people who are able to satisfy the basic necessities of life and who are not suffering from depression or chronic pain, life can reasonably be judged positively” (pp. 362-363).

This is, of course, not much of a defense of a moral symmetry. First of all, no arguments are provided in defense of the claim that such lives “can reasonably be judged positively” (a claim that one can reasonably dispute). Second, even if we grant that certain lives “can be judged positively” (in terms of the intrinsic value of their contents), it still does not follow that such lives that are “judged positively” can also morally outweigh the most horrific lives. This is an all-important issue for the classical utilitarian to address, and yet Lazari-Radek and Singer proceed as though their claim that “life can reasonably be judged positively” also applies to the world as a whole, even when we factor in all of its most horrific lives. Put briefly, Lazari-Radek and Singer’s cursory rejection of asymmetric and suffering-focused views is highly unsatisfactory.

(In a vein similar to the dismissive remarks covered in the previous section, Lazari-Radek and Singer also later write that “any sane person will agree” that a scenario in which 100 percent of humanity dies is worse than a scenario in which 99 percent of humanity dies, cf. p. 375. Regardless of the plausibility of that claim — which one might agree with even from a purely suffering-focused perspective — it is bad form to imply that people are not sane if they disagree with it, not least since the latter scenario could well involve far more suffering overall. Likewise, in a response to a question on Reddit, Singer dismisses negative utilitarianism as “hopeless” without providing any reasons as to why.)

“Arguably too nihilistic and divorced from humane values to be worth taking seriously”

The website utilitarianism.net is co-authored by William MacAskill, Richard Yetter Chappell, and Darius Meissner. The aim of the website is to provide “a textbook introduction to utilitarianism at the undergraduate level”, and it is endorsed by Peter Singer (among others), who blurbs it as “the place to go for clear, full and fair accounts of what utilitarianism is, the arguments for it, the main objections to it, special issues like population ethics, and what living as a utilitarian involves.”

Yet the discussion found on the website is sorely lacking when it comes to fundamental questions and objections concerning the relative importance of suffering versus happiness. In particular, like Lazari-Radek and Singer’s Point of View of the Universe, the website contains no discussion of the moral symmetry between suffering and happiness that is entailed by classical utilitarianism, despite it being among the most disputed features of that view (see e.g. Popper, 1945; Mayerfeld, 1996; 1999; Wolf, 1996; 1997; 2004; O’Keefe, 2009; Knutsson, 2016; Mathison, 2018; Vinding, 2020).

Similarly, the discussion of population ethics found on the website is extremely one-sided and uncharitable in its discussion of suffering-focused and asymmetric views in population ethics, especially for a text that is supposed to serve as an introductory textbook.

For instance, they write the following in a critique of the Asymmetry in population ethics (the Asymmetry is roughly the idea that it is bad to bring miserable lives into the world but not good to bring happy lives into the world):

But this brings us to a deeper problem with the procreative asymmetry, which is that it has trouble accounting for the idea that we should be positively glad that the world (with all its worthwhile lives) exists

There is much to take issue with in this sentence. First, it presents the idea that “we should be positively glad that the world exists” as though it is an obvious and supremely plausible idea; yet it is by no means obvious, and it has been questioned by many philosophers. A truly “full and fair” introductory textbook would have included references to such counter-perspectives. Indeed, the authors of utilitarianism.net call it a “perverse conclusion” that an empty world would be better than a populated one, without mentioning any of the sources that have defended that “perverse conclusion”, and without engaging with the arguments that have been made in its favor (e.g. Schopenhauer, 1819; 1851; Benatar, 1997; 2006; Fehige, 1998; Breyer, 2015; Gloor, 2017; St. Jules, 2019; Frick, 2020; Ajantaival, 2021/2022). Again, this falls short of what one would expect from a “full and fair” introductory textbook.

Second, the quote above may be critiqued for bringing in confounding intuitions, such as intuitions about the value of the world as a whole, which is in many ways a different issue from the question of whether it can be good to add new beings to the world for the sake of these beings themselves.

Third, the notion of “worthwhile lives” is not necessarily inconsistent with a procreative asymmetry, since lives may be deemed worthwhile in the sense that their continuation is preferable even if their creation is not (cf. Benatar, 1997; 2006; Fehige, 1998; St. Jules, 2019; Frick, 2020). Additionally, one can think that a life is worthwhile — both in terms of its continuation and creation — because it has beneficial effects for others, even if it can never be better for the created individual themself that they come into existence.

The authors go on to write:

when thinking about what makes some possible universe good, the most obvious answer is that it contains a predominance of awesome, flourishing lives. How could that not be better than a barren rock? Any view that denies this verdict is arguably too nihilistic and divorced from humane values to be worth taking seriously.

This quote effectively dismisses all of the views cited above — the views of Schopenhauer, Fehige, Benatar, and Frick, as well as the Nirodha View in the Pali Buddhist tradition — in one fell swoop by claiming that they are “arguably too nihilistic and divorced from humane values to be worth taking seriously”. That is, to put it briefly, a lazy treatment that again falls short of the minimal standards of a fair introductory textbook.

After all, classical utilitarians would probably also object if a textbook introduction were to effectively dismiss classical utilitarianism (and similar views) with the one-line claim that “views that allow the creation of lives full of extreme suffering in order to create pleasure for others are arguably too divorced from humane values to be worth taking seriously.” Yet the dismissal is just as unhelpful and uncharitable when made in the other direction. 

Finally, the authors also omit any mention of the Very Repugnant Conclusion, although one of the co-authors, William MacAskill, has stated that he considers it the strongest objection against his favored version of utilitarianism. It is arguably bad form to omit any discussion — or even a mention — of what one considers the strongest objection against one’s favored view, especially if one is trying to write a fair and balanced introductory textbook that features that view prominently.

“Anti-natalism is neurotic self-hatred”

Psychologist Geoffrey Miller has given several talks about effective altruism, including one at EA Global, and he has also taught a full university course on the psychology of effective altruism. At the time of writing, Miller has more than 120,000 followers on Twitter, which makes him one of the most widely followed people associated with effective altruism, with more followers than Peter Singer.

Having such a large audience arguably raises one’s responsibility to communicate in an intellectually honest and charitable manner. Yet Miller has repeatedly misrepresented the views of David Benatar and written highly uncharitable statements about antinatalism and negative utilitarianism, without seriously engaging with the arguments made in favor of these views.

For example, Miller has written on Twitter that “anti-natalism is neurotic self-hatred”, and he has on several occasions falsely implied that David Benatar is a negative utilitarian, such as when he writes that “[Benatar’s] negative utilitarianism assumes that only suffering counts, & pleasure can never offset it”; or when he writes that “Benatar’s view boils down to the claim that all the joy, beauty, & love in the world can’t offset even a drop of suffering in any organism anywhere. It’s a monstrously toxic & nihilistic philosophy.”

Yet the views that Miller attributes to Benatar are not views that Benatar in fact defends, and anyone familiar with Benatar’s position knows that he does not think that “only suffering counts” (cf. his rejection of the Epicurean view of death, Benatar, 2006, ch. 7).

Miller also betrays a failure to understand Benatar’s view when he writes:

The asymmetry thesis is empirically false for humans. Almost all people report net positive subjective well-being in hundreds of studies around the world. Benatar is basically patronizing everyone, saying ‘All you guys are wrong; you’re actually miserable’.

First, Benatar discusses various reasons as to why self-assessments of one’s quality of life may be unreliable (Benatar, 2006, pp. 64-69; see also Vinding, 2018). This is not fundamentally different from, say, evolutionary psychologists who argue that people’s self-reported motives may be wrong. Second, and more importantly, the main asymmetry that Benatar defends is not an empirical one, but rather an evaluative asymmetry between the presence and absence of goods versus the presence and absence of bads (Benatar, 2006, ch. 2). This evaluative asymmetry is not addressed by Miller’s claim above.

One might object that Miller’s statements have all been made on Twitter, and that tweets should generally be held to a lower standard than other forms of writing. Yet even if we grant that tweets should be held to a lower standard, we should still be clear that Miller blatantly misrepresents Benatar’s views, which is bad form on any platform and by any standard.

Moreover, one could argue that tweets should in some sense be held to a higher standard, since tweets are likely to be seen by more people compared to many other forms of writing (such as the average journal article), and perhaps also by readers who are less inclined to verify scholarly claims made by a university professor (compared to readers of other media).

More examples

Additional examples of uncharitable dismissals of suffering-focused views include statements from:

  • Writer and EA Global speaker Riva-Melissa Tez, who wrote that “anti-natalism and negative utilitarianism is true ‘hate speech’”.
  • YouTuber Robert Miles (>100k subscribers), who wrote: “Looks like it’s time for another round of ‘Principled Negative Utilitarianism or Undiagnosed Major Depressive Disorder?’” (See also here.)
  • Daniel Faggella, who wrote: “If I didn’t know so many negative utilitarians who I liked as people, I’d call it a position of literal cowardice – even vice.” (The original post was even stronger in its tone: “If I didn’t know and respect so many negative utilitarians, I would openly call it a vice, and a position of childish, seething cowardice.”)
    • I find the remark about cowardice to be quite strange, as it seems to me that it takes a lot of courage to face up to the horror of suffering, and to set out to alleviate suffering with determination. And socially, too, it can take a lot of courage to embrace strongly suffering-focused views in a social environment that often ridicules such views, and which often insinuates that there is something wrong with the adherents of these views.
  • R. N. Smart, who wrote that negative utilitarianism allows “certain absurd and even wicked moral judgments”, without providing any arguments as to whether competing moral views imply less “absurd or wicked” moral judgments, and without mentioning that classical utilitarianism — which Smart seems to express greater approval toward — has similar and arguably worse theoretical implications (cf. Knutsson, 2021; Ajantaival, 2022).

The following anecdotal example illustrates how uncharitable remarks can influence people’s motivations and make people feel unwelcome in certain communities: An acquaintance of mine who took part in an EA intro fellowship heard a fellow participant dismiss antinatalism quite uncharitably, saying something along the lines of “antinatalism is like high school atheism, but edgier”. My acquaintance thought that antinatalism is a plausible view, and the remark left them feeling unwelcome and discouraged from engaging further with effective altruism.

Conclusion

To be clear, my point is by no means that people should refrain from criticizing suffering-focused views, even in strong terms. My recommendation is simply that critics should strive to be even-handed, and to not misrepresent or unfairly malign views with which they disagree.

If we are trying to think straight about ethics, we should be keen not to let uncharitable claims and social pressures distort our thinking, especially since these factors tend to influence our views in hidden ways. After all, few people consciously think — let alone say — that social pressure exerts a strong influence on their views. Yet it is likely a potent factor all the same.

Research vs. non-research work to improve the world: In defense of more research and reflection

When trying to improve the world, we can either pursue direct interventions, such as directly helping beings in need and doing activism on their behalf, or we can pursue research on how we can best improve the world, as well as on what improving the world even means in the first place.

Of course, the distinction between direct work and research is not a sharp one. We can, after all, learn a lot about the “how” question by pursuing direct interventions, testing out what works and what does not. Conversely, research publications can effectively function as activism, and may thereby help bring about certain outcomes quite directly, even when such publications do not deliberately try to do either.

But despite these complications, we can still meaningfully distinguish more or less research-oriented efforts to improve the world. My aim here is to defend more research-oriented efforts, and to highlight certain factors that may lead us to underinvest in research and reflection. (Note that I here use the term “research” to cover more than just original research, as it also covers efforts to learn about existing research.)


Contents

  1. Some examples
    1. I. Cause Prioritization
    2. II. Effective Interventions
    3. III. Core Values
  2. The steelman case for “doing”
    1. We can learn a lot by acting
    2. Direct action can motivate people to keep working to improve the world
    3. There are obvious problems in the world that are clearly worth addressing
    4. Certain biases plausibly prevent us from pursuing direct action
  3. The case for (more) research
    1. We can learn a lot by acting — but we are arguably most limited by research insights
      1. Objections: What about “long reflection” and the division of labor?
    2. Direct action can motivate people — but so can (the importance of) research
    3. There are obvious problems in the world that are clearly worth addressing — but research is needed to best prioritize and address them
    4. Certain biases plausibly prevent us from pursuing direct action — but there are also biases pushing us toward too much or premature action
  4. The Big Neglected Question
  5. Conclusion
  6. Acknowledgments

Some examples

Perhaps the best way to give a sense of what I am talking about is by providing a few examples.

I. Cause Prioritization

Say our aim is to reduce suffering. Which concrete aims should we then pursue? Maybe our first inclination is to work to reduce human poverty. But when confronted with the horrors of factory farming, and the much larger number of non-human animals compared to humans, we may conclude that factory farming seems the more pressing issue. However, having turned our gaze to non-human animals, we may soon realize that the scale of factory farming is small compared to the scale of wild-animal suffering, which might in turn be small compared to the potentially astronomical scale of future moral catastrophes.

With so many possible causes one could pursue, it is likely suboptimal to settle on the first one that comes to mind, or to settle on any one of them without having made a significant effort to consider where one can make the greatest difference.

II. Effective Interventions

Next, say we have settled on a specific cause, such as ending factory farming. Given this aim, there is a vast range of direct interventions one could pursue, including various forms of activism, lobbying to influence legislation, or working to develop novel foods that can outcompete animal products. Yet it is likely suboptimal to pursue any of these particular interventions without first trying to figure out which of them have the best expected impact. After all, different interventions may differ greatly in terms of their cost-effectiveness, which suggests that it is reasonable to make significant investments into figuring out which interventions are best, rather than to rush into action mode (although the drive to do the latter is understandable and intuitive, given the urgency of the problem).

III. Core Values

Most fundamentally, there is the question of what matters and what is most worth prioritizing at the level of core values. Our values ultimately determine our priorities, which renders clarification of our values a uniquely important and foundational step in any systematic endeavor to improve the world.

For example, is our aim to maximize a net sum of “happiness minus suffering”, or is our aim chiefly to minimize extreme suffering? While there is significant common ground between these respective aims, there are also significant divergences between them, which can matter greatly for our priorities. The first view implies that it would be a net benefit to create a future that contains vast amounts of extreme suffering as long as that future contains a lot of happiness, while the other view would recommend the path of least extreme suffering.
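The divergence between the two views can be made concrete with a toy comparison. All numbers below are hypothetical and the scoring functions are crude stand-ins, chosen only to show how the two aims can rank the very same futures in opposite ways:

```python
# Two hypothetical futures, each described by
# (happiness, ordinary suffering, extreme suffering) in arbitrary units.
futures = {
    "vast but mixed": (1000, 50, 100),   # much happiness, much extreme suffering
    "modest and safe": (100, 20, 0),     # less happiness, no extreme suffering
}

def net_sum(h, s, xs):
    # Classical-style aim: maximize happiness minus all suffering.
    return h - s - xs

def extreme_suffering(h, s, xs):
    # Suffering-focused aim: minimize extreme suffering (lower is better).
    return xs

best_by_net = max(futures, key=lambda k: net_sum(*futures[k]))
best_by_extreme = min(futures, key=lambda k: extreme_suffering(*futures[k]))

print(best_by_net)      # "vast but mixed"
print(best_by_extreme)  # "modest and safe"
```

The first aim favors the future containing vast amounts of extreme suffering because its happiness surplus is larger; the second aim favors the path of least extreme suffering.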

In the absence of serious reflection on our values, there is a high risk that our efforts to improve the world will not only be suboptimal, but even positively harmful relative to the aims that we would endorse most strongly upon reflection. Yet efforts to clarify values are nonetheless extremely neglected — and often completely absent — in endeavors to improve the world.

The steelman case for “doing”

Before making a case for a greater focus on research, it is worth outlining some of the strongest reasons in favor of direct action (e.g. directly helping other beings and doing activism on their behalf).

We can learn a lot by acting

  • The pursuit of direct interventions is a great way to learn important lessons that may be difficult to learn by doing pure research or reflection.
  • In particular, direct action may give us practical insights that are often more in touch with reality than are the purely theoretical notions that we might come up with in intellectual isolation. And practical insights and skills often cannot be compensated for by purely intellectual insights.
  • Direct action often has clearer feedback loops, and may therefore provide a good opportunity to both develop and display useful skills.

Direct action can motivate people to keep working to improve the world

  • Research and reflection can be difficult, and it is often hard to tell whether one has made significant progress. In contrast, direct action may offer a clearer indication that one is really doing something to improve the world, and it can be easier to see when one is making progress (e.g. whether people altered their behavior in response to a given intervention, or whether a certain piece of legislation changed or not).

There are obvious problems in the world that are clearly worth addressing

  • For example, we do not need to do more research to know that factory farming is bad, and it seems reasonable to think that evidence-based interventions that significantly reduce the number of beings who suffer on factory farms will be net beneficial.
  • Likewise, it is probably beneficial to build a healthy movement of people who aim to help others in effective ways, and who reflect on and discuss what “helping others” ideally entails.

Certain biases plausibly prevent us from pursuing direct action

  • It seems likely that we have a passivity bias of sorts. After all, it is often convenient to stay in one’s intellectual armchair rather than to get one’s hands dirty with direct work that may fall outside of one’s comfort zone, such as doing street advocacy or running a political campaign.
  • There might also be an omission bias at work, whereby we judge an omission to do direct work that prevents harm less harshly than an equivalent commission of harm.

The case for (more) research

I endorse all the arguments outlined above in favor of “doing”. In particular, I think they are good arguments in favor of maintaining a strong element of direct action in our efforts to improve the world. Yet they are less compelling when it comes to establishing the stronger claim that we should focus more on direct action (on the current margin), or that direct action should represent the majority of our altruistic efforts at this point in time. I do not think any of those claims follow from the arguments above.

In general, it seems to me that altruistic endeavors tend to focus far too strongly on direct action while focusing far too little on research. This is hardly a controversial claim, at least not among aspiring effective altruists, who often point out that research on cause prioritization and on the cost-effectiveness of different interventions is important and neglected. Yet it seems to me that even effective altruists tend to underinvest in research, and to jump the gun when it comes to cause selection and direct action, and especially when it comes to the values that they choose to steer by.

A helpful starting point might be to sketch out some responses to the arguments outlined in the previous section, to note why those arguments need not undermine a case for more research.

We can learn a lot by acting — but we are arguably most limited by research insights

The fact that we can learn a lot by acting, and that practical insights and skills often cannot be substituted by pure conceptual knowledge, does not rule out that our potential for beneficial impact might generally be most bottlenecked by conceptual insights.

In particular, clarifying our core values and exploring the best causes and interventions arguably represent the most foundational steps in our endeavors to improve the world, suggesting that they should — at least at the earliest stages of our altruistic endeavors — be given primary importance relative to direct action (even as direct action and the development of practical skills also deserve significant priority, perhaps even more than 20 percent of the collective resources we spend at this point in time).

The case for prioritizing direct action would be more compelling if we had a lot of research that delivered clear recommendations for direct action. But I think there is generally a glaring shortage of such research. Moreover, research on cause prioritization often reveals plausible ways in which direct altruistic actions that seem good at first sight may actually be harmful. Such potential downsides of seemingly good actions constitute a strong and neglected reason to prioritize research more — not to get perpetually stuck in research, but to at least map out the main considerations for and against various actions.

To be more specific, it seems to me that the expected value of our actions can change a lot depending on how deep our network of crucial considerations goes, so much so that adding an extra layer of crucial considerations can flip the expected value of our actions. Inconvenient as it may be, this means that our views on what constitutes the best direct actions have a high risk of being unreliable as long as we have not explored crucial considerations in depth. (Such a risk always exists, of course, yet it seems that it can at least be markedly reduced, and that our estimates can become significantly better informed even with relatively modest research efforts.)
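The sign-flipping point can be illustrated with a minimal toy model. The numbers are invented: each "layer" is a hypothetical adjustment to an action's estimated value from one further crucial consideration:

```python
# Successive adjustments to an action's estimated value, one per
# layer of crucial considerations explored (hypothetical numbers).
layers = [10, -4, 3, -12]

def estimated_value(depth):
    """Estimated value of the action after exploring `depth` layers."""
    return sum(layers[:depth])

# The estimate looks positive at every depth until the final layer
# flips its sign:
print([estimated_value(d) for d in range(1, 5)])  # [10, 6, 9, -3]
```

An action that looks robustly good after three layers of analysis turns out negative at the fourth, which is the sense in which shallow exploration can make our expected-value estimates unreliable.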

At the level of an individual altruist’s career, it seems warranted to spend at least one year reading about and reflecting on fundamental values, one year learning about the most important cause areas, and one year learning about optimal interventions within those cause areas (ideally in that order, although one may fruitfully explore them in parallel to some extent; and such a full year’s worth of full-time exploration could, of course, be conducted over several years). In an altruistic career spanning 40 years, this would still amount to less than ten percent of one’s work time focused on such basic exploration, and less than three percent focused on exploring values in particular.
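The fractions above can be checked directly:

```python
# The paragraph's arithmetic: three years of foundational exploration
# (values, causes, interventions) within a 40-year altruistic career.
career_years = 40
exploration_years = 3   # one year each for values, causes, interventions
values_years = 1        # values in particular

assert exploration_years / career_years < 0.10   # less than ten percent
assert values_years / career_years < 0.03        # less than three percent
print(exploration_years / career_years)  # 0.075
print(values_years / career_years)       # 0.025
```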

A similar argument can be made at a collective level: if we are aiming to have a beneficial influence on the long-term future — say, the next million years — it seems warranted to spend at least a few years focused primarily on what a beneficial influence would entail (i.e. clarifying our views on normative ethics), as well as researching how we can best influence the long-term future before we proceed to spend most of our resources on direct action. And it may be even better to try to encourage more people to pursue such research, ideally creating an entire research project in which a large number of people collaborate to address these questions.

Thus, even if it is ideal to mostly focus on direct action over the entire span of humanity’s future, it seems plausible that we should focus most strongly on advancing research at this point, where relatively little research has been done, and where the explore-exploit tradeoff is likely to favor exploration quite strongly.

Objections: What about “long reflection” and the division of labor?

An objection to this line of reasoning is that heavy investment into reflection is premature, and that our main priority at this point should instead be to secure a condition of “long reflection” — a long period of time in which humanity focuses on reflection rather than action.

Yet this argument is problematic for a number of reasons. First, there are reasons to doubt that a condition of long reflection is feasible or even desirable, given that it would seem to require strong limits to voluntary actions that diverge from the ideal of reflection. To think that we can choose to create a condition of long reflection may be an instance of the illusion of control. Human civilization is likely to develop according to its immediate interests, and seems unlikely to ever be steered via a common process of reflection.

Second, even if we were to secure a condition of long reflection, there is no guarantee that humanity would ultimately be able to reach a sufficient level of agreement regarding the right path forward — after all, it is conceivable that a long reflection could go awfully wrong, and that bad values could win out due to poor execution or malevolent agents hijacking the process.

The limited feasibility of a long reflection suggests that there is no substitute for reflecting now. Failing to clarify and act on our values from this point onward carries a serious risk of pursuing a suboptimal path that we may not be able to reverse later. The resources we spend pursuing a long reflection (which seems unlikely to ever occur) are resources not spent on addressing issues that might be more important and more time-sensitive, such as steering away from worst-case outcomes.

Another objection might be that there is a division-of-labor case for having only some people focus on research, while others, perhaps even most, focus comparatively little on it. Yet while it seems trivially true that some people should focus more on research than others, this is not necessarily much of a reason against devoting more of our collective attention toward research (on the current margin), nor a reason against each altruist making a significant effort to read up on existing research.

After all, even if only a limited number of altruists should focus primarily on research, it still seems necessary that those who aim to put cutting-edge research into practice also spend time reading that research, which requires a considerable time investment. Indeed, even when one chooses to mostly defer to the judgments of other people, one will still need to make an effort to evaluate which people are most worth deferring to on different issues, followed by an effort to adequately understand what those people’s views and findings entail.

This point also applies to research on values in particular. That is, even if one prioritizes direct action over research on fundamental values, it still seems necessary to spend a significant amount of time reading up on other people’s work on fundamental values if one is to make a qualified judgment regarding which values one will attempt to steer by.

The division of altruistic labor is thus consistent with the recommendation that every dedicated altruist should spend at least a full year reading about and reflecting on fundamental values (just as the division of “ordinary” labor is consistent with everyone spending a certain amount of time on basic education). And one can further argue that the division of altruistic labor, and specialized work on fundamental values in particular, is only fully utilized if most people spend a decent amount of time reading up on and making use of the insights provided by others.

Direct action can motivate people — but so can (the importance of) research

While research work is often challenging and hard to stay motivated to pursue, it is probably a mistake to view our motivation to do research as fixed. There are likely many ways to increase our motivation to pursue research, not least by strongly internalizing the (highly counterintuitive) importance of such research.

Moreover, the motivating force provided by direct action might be largely maintained as long as one includes a strong component of direct action in one’s altruistic work (by devoting, say, 25 percent of one’s resources toward direct action).

In any case, reduced individual motivation to pursue research seems unlikely to be a strong reason against devoting a greater priority to research at the level of collective resources and priorities (even if it might play a significant role in many individual cases). This is partly because the average motivation to pursue these respective endeavors seems unlikely to differ greatly — after all, many people will be more motivated to pursue research over direct action — and partly because urgent necessities are worth prioritizing and paying for even if they happen to be less than highly motivating.

By analogy, the cleaning of public toilets is also worth prioritizing and paying for, even if it may not be the most motivating pursuit for those who do it, and the same point arguably applies even more strongly in the case of the most important tasks necessary for achieving altruistic aims such as reducing extreme suffering. Moreover, the fact that altruistic research may be unusually taxing on our motivation (e.g. due to a feeling of “analysis paralysis”) is a reason to think that such taxing research is generally neglected and hence worth pursuing on the margin.

Finally, to the extent one finds direct action more motivating than research, this might constitute a bias in one’s prioritization efforts, even if it represents a relevant data point about one’s personal fit and comparative advantage. And the same point applies in the opposite direction: to the extent that one finds research more motivating, this might make one more biased against the importance of direct action. While personal motivation is an important factor to consider, it is still worth being mindful of the tendency to overprioritize that which we consider fun and inspiring at the expense of that which is most important in impartial terms.

There are obvious problems in the world that are clearly worth addressing — but research is needed to best prioritize and address them

Knowing that there are serious problems in the world, as well as interventions that reduce those problems, does not in itself inform us about which problems are most pressing or which interventions are most effective at addressing them. Both of these aspects — roughly, cause prioritization and estimating the effectiveness of interventions — seem best advanced by research.

A similar point applies to our core values: we cannot meaningfully pursue cause prioritization and evaluations of interventions without first having a reasonably clear view of what matters, and what would constitute a better or worse world. And clarifying our values is arguably also best done through further research rather than through direct action, even as the latter may be helpful as well.

Certain biases plausibly prevent us from pursuing direct action — but there are also biases pushing us toward too much or premature action

The putative “passivity bias” outlined above has a counterpart in the “action bias”, also known as “bias for action” — a tendency toward action even when action makes no difference or is positively harmful. A potential reason behind the action bias relates to signaling: actively doing something provides a clear signal that we are at least making an effort, and hence that we care (even if the effect might ultimately be harmful). By comparison, doing nothing might be interpreted as a sign that we do not care.

There might also be individual psychological benefits explaining the action bias, such as the satisfaction of feeling that one is “really doing something”, as well as a greater feeling of being in control. In contrast, pursuing research on difficult questions can feel unsatisfying, since progress may be relatively slow, and one may not intuitively feel like one is “really doing something”, even if learning additional research insights is in fact the best thing one can do.

Political philosopher Michael Huemer similarly argues that there is a harmful tendency toward too much action in politics. Since most people are uninformed about politics, Huemer argues that most people ought to be passive in politics, as there is otherwise a high risk that they will make things worse through ignorant choices.

Whatever one thinks of the merits of Huemer’s argument in the political context, I think one should not be too quick to dismiss a similar argument when it comes to improving the long-term future — especially considering that action bias seems to be greater when we face increased uncertainty. At the very least, it seems worth endorsing a modified version of the argument that says that we should not be eager to act before we have considered our options carefully.

Furthermore, the fact that we evolved in a condition that was highly action-oriented rather than reflection-oriented, and in which action generally had far more value for our genetic fitness than did systematic research (indeed, the latter was hardly even possible), likewise suggests that we may be inclined to underemphasize research relative to how important it is for optimal impact from an impartial perspective.

This also seems true when it comes to our altruistic drives and behaviors in particular, where we have strong inclinations toward pursuing publicly visible actions that make us appear good and helpful (Hanson, 2015; Simler & Hanson, 2018, ch. 12). In contrast, we seem to have much less of an inclination toward reflecting on our values. Indeed, it seems plausible that we generally have an inclination against questioning our instinctive aims and drives — including our drive to signal altruistic intentions with highly visible actions — as well as an inclination against questioning the values held by our peers. After all, such questioning would likely have been evolutionarily costly in the past, and may still feel socially costly today.

Moreover, it is very unnatural for us to be as agnostic and open-minded as we should ideally be in the face of the massive uncertainty associated with endeavors that seek to have the best impact for all sentient beings (Vinding, 2020, sec. 9.1-9.2). This suggests that we may tend to be overconfident — and too quick to conclude — that some particular direct action happens to be the optimal path for helping others.

Lastly, while some kind of omission bias plausibly causes us to discount the value of making an active effort to help others, it is not clear whether this bias counts more strongly against direct action than against research efforts aimed at helping others, since omission bias likely works against both types of action (relative to doing nothing). In fact, the omission bias might count more strongly against research, since a failure to do important research may feel like less of a harmful inaction than does a failure to pursue direct actions, whose connection to addressing urgent needs is usually much clearer.

The Big Neglected Question

There is one question that I consider particularly neglected among aspiring altruists — as though it occupies a uniquely impenetrable blind spot. I am tempted to call it “The Big Neglected Question”.

The question, in short, is whether anything can morally outweigh or compensate for extreme suffering. Our answer to this question has profound implications for our priorities. And yet astonishingly few people seem to seriously ponder it, even among dedicated altruists. In my view, reflecting on this question is among the first, most critical steps in any systematic endeavor to improve the world. (I suspect that a key reason this question tends to be shunned is that it seems too dark, and because people may intuitively feel that it fundamentally questions all positive and meaning-giving aspects of life — although it arguably does not, as even a negative answer to the question above is compatible with personal fulfillment and positive roles and lives.)

More generally, as hinted earlier, it seems to me that reflection on fundamental values is extremely neglected among altruists. Ozzie Gooen argues that many large-scale altruistic projects are pursued without any serious exploration as to whether the projects in question are even a good way to achieve the ultimate (stated) aims of these projects, despite this seeming like a critical first question to ponder.

I would make a similar argument, only one level further down: just as it is worth exploring whether a given project is among the best ways to achieve a given aim before one pursues that project, so it is worth exploring which aims are most worth striving for in the first place. This, it seems to me, is even more neglected than is exploring whether our pet projects represent the best way to achieve our (provisional) aims. There is often a disproportionate amount of focus on impact, and comparatively little focus on what the most plausible aim of that impact should be.

Conclusion

In closing, I should again stress that my argument is not that we should only do research and never act — that would clearly be a failure mode, and one that we must also be keen to steer clear of. But my point is that there are good reasons to think that it would be helpful to devote more attention to research in our efforts to improve the world, both on moral and empirical issues — especially at this early point in time.


Acknowledgments

For helpful comments, I thank Teo Ajantaival, Tobias Baumann, and Winston Oswald-Drummond.

Priorities for reducing suffering: Reasons not to prioritize the Abolitionist Project

I discussed David Pearce’s Abolitionist Project in Chapter 13 of my book on Suffering-Focused Ethics. The chapter is somewhat brief and dense, and its main points could admittedly have been elaborated further and explained more clearly. This post seeks to explore and further explain some of these points.


A good place to start might be to highlight some of the key points of agreement between David Pearce and myself.

  • First and most important, we both agree that minimizing suffering should be our overriding moral aim.
  • Second, we both agree that we have reason to be skeptical about the possibility of digital sentience — and at the very least to not treat it as a foregone conclusion — which I note from the outset to flag that views on digital sentience are unlikely to account for the key differences in our respective views on how to best reduce suffering.
  • Third, we agree that humanity should ideally use biotechnology to abolish suffering throughout the living world, provided this is indeed the best way to minimize suffering.

The following is a summary of some of the main points I made about the Abolitionist Project in my book. There are four main points I would emphasize, none of which are particularly original (at least two of them are made in Brian Tomasik’s Why I Don’t Focus on the Hedonistic Imperative).

I.

Some studies suggest that people who have suffered tend to become more empathetic. This obviously does not imply that the Abolitionist Project is infeasible, but it does give us reason to doubt that abolishing the capacity to suffer in humans should be among our main priorities at this point.

To clarify, this is not a point about what we should do in the ideal, but more a point about where we should currently invest our limited resources, on the margin, to best reduce suffering. If we were to focus on interventions at the level of gene editing, other traits (than our capacity to suffer) seem more promising to focus on, such as increasing dispositions toward compassion. And yet interventions focused on gene editing may themselves not be among the most promising things to focus on in the first place, which leads to the next point.

II.

For even if we grant that the Abolitionist Project should be our chief aim, at least in the medium term, it still seems that the main bottleneck to its completion is found not at the technical level, but rather at the level of humanity’s values and willingness to do what would be required. I believe this is also a point that David and I mostly agree on, as he has likewise hinted, in various places, that the main obstacle to the Abolitionist Project will not be technical, but sociopolitical. This would give us reason to mostly prioritize the sociopolitical level on the margin — especially humanity’s values and willingness to reduce suffering. And the following consideration provides an additional reason in favor of the same conclusion.

III.

The third and most important point relates to the distribution of future (expected) suffering, and how we can best prevent worst-case outcomes. Perhaps the most intuitive way to explain this point is with an analogy to tax revenues: if one were trying to maximize tax revenues, one should focus disproportionately on collecting taxes from the richest people rather than the poorest, simply because that is where most of the money is.

The visual representation of the income distribution in the US in 2019 found below should help make this claim more intuitive.
[Figure: Income distribution in the US, 2019]



The point is that something similar plausibly applies to future suffering: in terms of the distribution of future (expected) suffering, it seems reasonable to give disproportionate focus to the prevention of worst-case outcomes, as they contain more suffering (in expectation).
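The tail-dominance intuition can be illustrated with a small toy simulation. All of the specifics here are illustrative assumptions — outcome “severities” drawn from a Pareto distribution with an arbitrarily chosen shape parameter — and no empirical claim about the actual distribution of future suffering is intended. The sketch merely shows that, under a heavy-tailed distribution, a small fraction of worst-case outcomes accounts for a greatly disproportionate share of the expected total:

```python
import random

random.seed(0)

# Toy model: draw 100,000 hypothetical "outcome severities" from a
# heavy-tailed Pareto distribution. The shape parameter alpha = 1.1 is
# purely illustrative; smaller alpha means a heavier tail.
alpha = 1.1
outcomes = [random.paretovariate(alpha) for _ in range(100_000)]

# Compare the worst 1% of outcomes against the total.
outcomes.sort(reverse=True)
total = sum(outcomes)
worst_1_percent = sum(outcomes[: len(outcomes) // 100])

print(f"Share of total severity in the worst 1% of outcomes: "
      f"{worst_1_percent / total:.0%}")
```

In heavy-tailed toy models like this, the worst 1 percent of outcomes typically holds far more than 1 percent of the total — the analogue of the point about tax revenues above. Under a thin-tailed (e.g. normal) distribution, by contrast, the worst 1 percent would hold only slightly more than its proportional share.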

Futures in which the Abolitionist Project is completed, and in which our advocacy for the Abolitionist Project helps bring about its completion, say, a century sooner, are almost by definition not the kinds of future scenarios that contain the most suffering. That is, they are not worst-case futures in which things go very wrong and suffering gets multiplied in an out-of-control fashion.

Put more generally, it seems to me that advocating for the Abolitionist Project is not the best way to address worst-case outcomes, even if we assume that such advocacy has a positive effect in this regard. A more promising focus, it seems to me, is again to increase humanity’s overall willingness and capacity to reduce suffering (the strategy that also seems most promising for advancing the Abolitionist Project itself). And this capacity should ideally be oriented toward the avoidance of very bad outcomes — outcomes that to me seem most likely to stem from bad sociopolitical dynamics.

IV.

Relatedly, a final critical point is that there may be some downsides to framing our goal in terms of abolishing suffering, rather than in terms of minimizing suffering in expectation. One reason is that the former framing may invoke our proportion bias, or what is known in the literature as proportion dominance: our tendency to intuitively care more about helping 10 out of 10 individuals rather than helping 10 out of 100, even though the impact is in fact the same.

Minimizing suffering in expectation would entail abolishing suffering if that were indeed the way to minimize suffering in expectation, but the point is that it might not be. For instance, it could be that the way to reduce the most suffering in expectation is to instead mostly focus on reducing the probability and mitigating the expected badness of worst-case outcomes. And framing our aim in terms of abolishing suffering, rather than the more general and neutral terms of minimizing suffering in expectation, can hide this possibility somewhat. (I say a bit more about this in Section 13.3 in my book; see also this section.)

Moreover, talking about the complete abolition of suffering can leave the broader aim of reducing suffering particularly vulnerable to objections — e.g. the objection that completely abolishing suffering seems risky in a number of ways. In contrast, the aim of reducing intense suffering is much less likely to invite such objections, and is more obviously urgent and worthy of priority. This is another strategic reason to doubt that the abolitionist framing is optimal.

Lastly, it would be quite a coincidence if the actions that maximize the probability of the complete abolition of suffering were also exactly those actions that minimize extreme suffering in expectation; even as these goals are related, they are by no means the same. And hence to the extent that our main goal is to minimize extreme suffering, we should probably frame our objective in these terms rather than in abolitionist terms.

Reasons in favor of prioritizing the Abolitionist Project

To be clear, there are also things to be said in favor of an abolitionist framing. For instance, many people will probably find a focus on the mere alleviation and reduction of suffering to be too negative and insufficiently motivating, leading them to disengage and drop out. Such people may find it much more motivating if the aim of reducing suffering is coupled with an inspiring vision about the complete abolition of suffering and increasingly better states of superhappiness.

As a case in point, I think my own focus on suffering was in large part inspired by the Abolitionist Project and The Hedonistic Imperative, which gradually, albeit very slowly, eased my optimistic mind into prioritizing suffering. Without this light and inspiring transitional bridge, I may have remained as opposed to suffering-focused views as I was eight years ago, before I encountered David’s work.

Brian Tomasik writes something similar about the influence of these ideas: “David Pearce’s The Hedonistic Imperative was very influential on my life. That book was one of the key factors that led to my focus on suffering as the most important altruistic priority.”

Likewise, informing people about technologies that can effectively reduce or even abolish certain forms of suffering, such as novel gene therapies, may give people hope that we can do something to reduce suffering, and thus help motivate action to this end.

But I think the two reasons cited above count more as reasons to include an abolitionist perspective in our “communication portfolio”, as opposed to making it our main focus — not least in light of the four considerations mentioned above that count against the abolitionist framing and focus.

A critical question

The following question may capture the main difference between David’s view and my own.

In previous conversations, David and I have clarified that we both accept that the avoidance of worst-case outcomes is, plausibly, the main priority for reducing suffering in expectation.

This premise, together with our shared moral outlook, seems to recommend a strong focus on minimizing the risk of worst-case outcomes. The critical question is thus: What reasons do we have to think that prioritizing and promoting the Abolitionist Project is the single best way, or even among the best ways, to address worst-case outcomes?

As noted above, I think there are good reasons to doubt that advocating the Abolitionist Project is among the most promising strategies to this end (say, among the top 10 causes to pursue), even if we grant that it has positive effects overall, including on worst-case outcomes in particular.

Possible responses

Analogy to smallpox

A way to respond may be to invoke the example of smallpox: Eradicating smallpox was plausibly the best way to minimize the risk of “astronomical smallpox”, as opposed to focusing on other, indirect measures. So why should the same not be true in the case of suffering?

I think this is an interesting line of argument, but I think the case of smallpox is disanalogous in at least a couple of ways. First, smallpox is in a sense a much simpler and more circumscribed phenomenon than is suffering. In part for this reason, the eradication of smallpox was much easier than the abolition of suffering would be. As an infectious disease, smallpox, unlike suffering, has not evolved to serve any functional role in animals. It could thus not only be eradicated more easily, but also without unintended effects on, say, the function of the human mind.

Second, if we were primarily concerned about not spreading smallpox to space, and minimizing “smallpox-risks” in general, I think it is indeed plausible that the short-term eradication of smallpox would not be the ideal thing to prioritize with marginal resources. (Again, it is important to here distinguish what humanity at large should ideally do versus what the, say, 1,000 most dedicated suffering reducers should do with most of their resources, on the margin, in our imperfect world.)

One reason such a short-term focus may be suboptimal is that the short-term eradication of smallpox is already — or would already be, if it still existed — prioritized by mainstream organizations and governments around the world, and hence additional marginal resources would likely have a rather limited counterfactual impact to this end. Work to minimize the risk of spreading life forms vulnerable to smallpox is far more neglected, and hence does seem a fairly reasonable priority from a “smallpox-risk minimizing” perspective.

Sources of unwillingness

Another response may be to argue that humanity’s unwillingness to reduce suffering derives mostly from the sense that the problem of suffering is intractable, and hence the best way to increase our willingness to alleviate and prevent suffering is to set out technical blueprints for its prevention. In David’s words, we can have a serious ethical debate about the future of sentience only once we appreciate what is — and what isn’t — technically feasible.

I think there is something to be said in favor of this argument, as noted above in the section on reasons to favor the Abolitionist Project. Yet unfortunately, my sense is that humanity’s unwillingness to reduce suffering does not primarily stem from a sense that the problem is too vast and intractable. Sadly, it seems to me that most people give relatively little thought to the urgency of (others’) suffering, especially when it comes to the suffering of non-human beings. As David notes, factory farming can be said to be “the greatest source of severe and readily avoidable suffering in the world today”. Ending this enormous source of suffering is clearly tractable at a collective level. Yet most people still actively contribute to it rather than work against it, despite its solution being technically straightforward.

What is the best way to motivate humanity to prevent suffering?

This is an empirical question. But I would be surprised if setting out abolitionist blueprints turned out to be the single best strategy. Other candidates that seem more promising to me include informing people about horrific examples of suffering, as well as presenting reasoned arguments in favor of prioritizing the prevention of suffering.

To clarify, I am not arguing for any efforts to conserve suffering. The issue here is rather about what we should prioritize with our limited resources. The following analogy may help clarify my view: When animal advocates argue in favor of prioritizing the suffering of farm animals or wild animals rather than, say, the suffering of companion animals, they are not thereby urging us to conserve, let alone increase, the suffering of companion animals. The argument is rather that our limited resources seem to reduce more suffering if we spend them on these other things, even as we grant that it is a very good thing to reduce the suffering of companion animals.

In terms of how we rank the cost-effectiveness of different causes and interventions (cf. this distribution), I would still consider abolitionist advocacy to be quite beneficial all things considered, and probably significantly better than the vast majority of activities that we could pursue. But I would not quite rank it at the tail-end of the cost-effectiveness distribution, for some of the reasons outlined above.

Antinatalism and reducing suffering: A case of suspicious convergence

First published: Feb. 2021. Last update: Dec. 2022


Two positions are worth distinguishing. One is the view that we should reduce (extreme) suffering as much as we can for all sentient beings. The other is the view that we should advocate for humans not to have children.

It may seem intuitive to think that the former position implies the latter. That is, to think that the best way to reduce suffering for all sentient beings is to advocate for humans not to have children. My aim in this brief essay is to outline some of the reasons to be skeptical of this claim.

Suspicious convergence

Lewis (2016) warns of “suspicious convergence”, which he introduces with the following toy example:

Oliver: … Thus we see that donating to the opera is the best way of promoting the arts.

Eleanor: Okay, but I’m principally interested in improving human welfare.

Oliver: Oh! Well I think it is also the case that donating to the opera is best for improving human welfare too.

The general point is that, for any set of distinct altruistic aims or endeavors we may consider, we should be a priori suspicious of the claim that they are perfectly convergent — i.e. that directly pursuing one of them also happens to be the very best thing we can do for achieving the other. Justifying such a belief would require good, object-level reasons. And in the case of the respective endeavors of reducing suffering and advocating for humans not to procreate, we in a sense find the opposite, as there are good reasons to be skeptical of a strong degree of convergence, and even to think that such antinatalist advocacy might increase future suffering.

The marginal impact of antinatalist advocacy

A key point when evaluating the impact of altruistic efforts is that we need to think at the margin: how does our particular contribution change the outcome, in expectation? This is true whether our aims are modest or maximally ambitious — our actions and resources still represent but a very small fraction of the total sum of actions and resources, and we can still only exert relatively small pushes toward our goals.

Direct effects

What, then, is the marginal impact of advocating for people not to have children? One way to try to answer this question is to explore the expected effects of preventing a single human birth. Antinatalist analyses of this question are quick to point out the many harms caused by a single human birth, which must indeed be considered. Yet what these analyses tend not to consider are the harms that a human birth would prevent.

For example, in his book Better Never to Have Been, David Benatar writes about “the suffering inflicted on those animals whose habitat is destroyed by encroaching humans” (p. 224) — which, again, should definitely be included in our analysis. Yet he fails to consider the many births and all the suffering that would be prevented by an additional human birth, such as due to its marginal effects on habitat reduction (“fewer people means more animals”). As Brian Tomasik argues, when we consider a wider range of the effects humans have on animal suffering, “it seems plausible that encouraging people to have fewer children actually causes an increase in suffering and involuntary births.”

This highlights how a one-sided analysis such as Benatar’s is deeply problematic when evaluating potential interventions. We cannot simply look at the harms prevented by our pet interventions without considering how they might lead to more harm. Both things must be considered.

To be clear, the considerations above regarding the marginal effects of human births on animal suffering by no means represent a complete analysis of the effects of additional human births, or of advocating for humans not to have children. But they do represent reasons to doubt that such advocacy is among the very best things we can do to reduce suffering for all sentient beings, at least in terms of the direct effects, which leads us to the next point.

Long-term effects

Some seem to hold that the main reason to advocate against human procreation is not its direct effects, but rather its long-term effects on humanity’s future. I agree that the influence our ideas and advocacy efforts have on humanity’s long-term future is plausibly the most important thing about them, and I think many antinatalists are likely to have a positive influence in this regard by highlighting the moral significance of suffering (and the relative insignificance of pleasure).

But the question is why we should think that the best way to steer humanity’s long-term future toward less suffering is to argue for people not to have children. After all, the space of possible interventions we could pursue to reduce future suffering is vast, and it would be quite a remarkable coincidence if relatively simple interventions — such as advocating for antinatalism or veganism — happened to be the very best way to reduce suffering, or even among the very best ways.

In particular, the greatest risk from a long-term perspective is that things somehow go awfully wrong, and that we counterfactually greatly increase future suffering, either by creating additional sources of suffering in the future, or by simply failing to reduce existing forms of suffering when we could. Advocating for people not to have children seems unlikely to be among the best ways to reduce the risk of such failures — again, because the space of possible interventions is vast, and because interventions targeted more directly at reducing these risks, including the risk of leaving wild-animal suffering unaddressed, are probably significantly more effective than advocating for humans not to procreate.

Better alternatives?

If our aim is to reduce suffering for all sentient beings, a plausible course of action would be to pursue an open-ended research project on how we can best achieve this aim. This is, after all, not a trivial question, and we should hardly expect the most plausible answers to be intuitive, let alone obvious. Exploring this question requires epistemic humility, and forces us to contend with the vast amount of empirical uncertainty that we are facing.

I have explored this question at length in Vinding, 2020, as have other individuals and organizations elsewhere. One conclusion that seems quite robust is that we should focus mostly on avoiding bad outcomes, whereas comparatively suffering-free future scenarios merit less priority. Another robust conclusion is that we should pursue a pragmatic and cooperative approach when trying to reduce suffering (see also Vinding, 2020, ch. 10) — not least since future conflicts are one of the main ways in which worst-case outcomes might materialize, and hence we should generally strive to reduce the risk of such conflicts.

In more concrete terms, antinatalists may be more effective if they focus on defending antinatalism for wild animals in particular. This case seems both easier and more important to make, given the overwhelming amount of suffering and early death in nature. Such advocacy may have more beneficial effects in both the near term and the long term: it is less at risk of increasing non-human suffering in the near term, and it is plausibly more conducive to reducing worst-case risks, whether those risks entail spreading non-human life or simply failing to reduce wild-animal suffering.

Broadly speaking, the aim of reducing suffering would seem to recommend efforts to identify the main ways in which humanity might cause — or prevent — vast amounts of suffering in the future, and to find out how we can best navigate accordingly. None of these conclusions seem to support efforts to convince people not to have children as a particularly promising strategy, though they likely do recommend efforts to promote concern for suffering more generally.

Underappreciated consequentialist reasons to avoid consuming animal products

While there may be strong deontological or virtue-ethical reasons to avoid consuming animal products (“as far as is possible and practicable”), the consequentialist case for such avoidance is quite weak.

Or at least this appears to be a common view in some consequentialist-leaning circles. My aim in this post is to argue against this view. On a closer look, we find many strong consequentialist reasons to avoid the consumption of animal products.

The direct effects on the individuals we eat

99 percent of animals raised for food in the US, and more than 90 percent globally, live out their lives on factory farms. These are lives of permanent confinement to very small spaces, often involving severe abuse, as countless undercover investigations have revealed. And their slaughter frequently involves extreme suffering as well — for example, about a million chickens and turkeys are boiled alive in the US every year, and fish, the vast majority of farmed vertebrates, are usually slaughtered without any stunning. They are routinely suffocated to death, frozen to death, and cut in ways that leave them to bleed to death (exsanguination). 

Increasing such suffering via one’s marginal consumption is bad on virtually all consequentialist views. And empirically, it turns out that people who aspire to avoid meat from factory-farmed animals (“conscientious omnivores”) often fail to do so (John & Sebo, 2020, 3.2; Rothgerber, 2015). An even greater discrepancy between ideals and behavior is found among those who believe that the animals they eat are “treated well” — around 58 percent of people in the US — despite the fact that over 99 percent of farm animals in the US live on factory farms (Reese, 2017).

Furthermore, even in Brian Tomasik’s analyses that factor in the potential of animal agriculture to reduce wild-animal suffering, the consumption of virtually all animal “products” is recommended against — including eggs and meat from fish (farmed and wild-caught), chickens, pigs, and (especially) insects. Brian argues that the impact of not consuming meat is generally positive, both because of the direct marginal impact (“avoiding eating one chicken or fish roughly translates to one less chicken or fish raised and killed”) and because of the broader social effects (more on the latter below).

The above is an important consequentialist consideration against consuming animal products. Yet unfortunately, consequentialist analyses tend to give far too much weight to this consideration alone, and to treat it as the be-all and end-all of consequentialist arguments against consuming animal products when, in fact, it is not necessarily even one of the weightiest arguments.

Institutional effects

Another important consideration has to do with the institutional effects of animal consumption. These effects seem superficially similar to those discussed in the previous point, yet they are in fact quite distinct.

Anti-charity

For one, there is the increased financial support to an industry that not only systematically harms currently existing individuals, but which also, perhaps more significantly, actively works to undermine moral concern for future non-human individuals. It does this through influential lobbying activities and by advertising in ways that effectively serve as propaganda against non-human animals. (That is certainly what we would call it in the human case, if an industry continually worked to legitimize the exploitation and killing of certain human individuals; indeed, “propaganda” may be overly euphemistic.)

Supporting this industry can be seen as anti-charity of sorts, as it pushes us away from betterment for non-human animals at the level of our broader institutions. And this effect could well be more significant than the direct marginal impact on non-human beings consumed, as such institutional factors may be a greater determinant of how many such beings will suffer in the future.

Not only are these institutional effects negative for future farmed animals, but the resulting reinforcement of speciesism and apathy toward non-human animals in general likely also impedes concern for wild animals in particular. And given the numbers, this effect may be even more important than the negative effect on future farmed animals.

Anti-activism

Another institutional effect is that, when we publicly buy or consume animal products, we signal to other people that non-human individuals can legitimately be viewed as food, and that we approve of the de facto horrific institution of animal agriculture. This signaling effect is difficult to avoid even if we do not in fact condone most of the actual practices involved. After all, virtually nobody condones the standard practices, such as the castration of pigs without anesthetics. And yet virtually all of us still condone these practices behaviorally, and indeed effectively support their continuation.

In this way, publicly buying or consuming animal products can, regardless of one’s intentions, end up serving as miniature anti-activism against the cause of reducing animal suffering — it serves to normalize a collectively perpetrated atrocity — while choosing to forgo such products can serve as miniature activism in favor of the cause.

One may object that the signaling effects of such individual actions are insignificant. Yet we are generally not inclined to say the same about the signaling effects of, say, starkly racist remarks, even when the individuals whom the remarks are directed against will never know about them (e.g. when starkly anti-black sentiments are shared in forums with white people only). The reason, I think, is that we realize that such remarks do have negative effects down the line, and we realize that these effects are not minor.

It is widely acknowledged that, to human psychology, racism is a ticking bomb that we should make a consistent effort to steer away from, lest we corrode our collective attitudes and in turn end up systematically exploiting and harming certain groups of individuals. We have yet to realize that the same applies to speciesism.

For a broader analysis of the social effects of the institution of animal exploitation, see John & Sebo (2020, 3.3). Note, though, that I disagree with John and Sebo’s classical utilitarian premise, which would allow us to farm individuals, and even kill them in the most horrible ways, provided that their lives were overall “net positive” (the horrible death included). This notion of “net positive” needs to be examined at length, especially in the interpersonal context, where some beings’ happiness is claimed to outweigh the extreme suffering of others.

Influence on our own perception

The influence on our own attitudes and thinking is another crucial factor. Indeed, for a consequentialist trying to think straight about how to prioritize their resources for optimal impact, this may be the most important reason not to consume animal products.

Moral denigration is a well-documented effect

Common sense suggests that we cannot think clearly about the moral status of a given group of individuals as long as we eat them. Our evolutionary history suggests the same: it was plausibly adaptive in our evolutionary past to avoid granting any considerable moral status to individuals categorized as “food animals”.

Psychological studies bear out common sense and evolution-based speculation. In Don’t Mind Meat? The Denial of Mind to Animals Used for Human Consumption, Brock Bastian and colleagues demonstrated that people tend to ascribe diminished mental capacities to “food animals”; that “meat eaters are motivated to deny minds to food animals when they are reminded of the link between meat and animal suffering”; and that such mind denial is increased when people expect to eat meat in the near future.

Another study (Bratanova et al., 2011) found that:

categorization as food — but not killing or human responsibility — was sufficient to reduce the animal’s perceived capacity to suffer, which in turn restricted moral concern.

This finding is in line with the prevalence of so-called consistency effects: our psychological tendency to adopt beliefs that support our past and present behavior (see Salamon & Rayhawk’s Cached Selves and Huemer, 2010, “5.d Coherence bias”). For example: “I eat animals, and hence animals don’t suffer so much and don’t deserve great moral consideration”.

And yet another study (Loughnan et al., 2010) found that the moral numbing effects of meat eating applied to other non-human animals as well, suggesting that these numbing effects may extend to wild animals:

Eating meat reduced the perceived obligation to show moral concern for animals in general and the perceived moral status of the [animal being eaten].

(See also Jeff Sebo’s talk A utilitarian case for animal rights and John & Sebo, 2020, 3.2.)

These studies confirm a point that a number of philosophers have been trying to convey for a while (see John & Sebo, 2020, 3.2 for a brief review). Here is Peter Singer in Practical Ethics (as quoted in ibid.):

it would be better to reject altogether the killing of animals for food, unless one must do so to survive. Killing animals for food makes us think of them as objects that we can use as we please …

And such objectification, in turn, has horrendous consequences. This is usually quite obvious in the human case: few people are tempted to claim that it would be inconsequential if we began eating a given group of humans, even if we stipulated that these humans had the same mental abilities as, say, pigs. Singer’s point about objectification is obvious to most people in this case, and most consequentialists would probably say that raising, killing, and eating humans could only be recommended by very naive and incomplete consequentialist analyses detached from the real world — not least the realities of human psychology. Yet the same ought to be concluded when the beings in question possess not just the minds but also the bodies of pigs.

Relatedly, in the hypothetical case where systematic exploitation of certain humans is the norm, few consequentialists would be tempted to say that abstention from the consumption of human products (e.g. human body parts or forcefully obtained breast milk) is insignificant, or say that it is not worth sticking with it because other things are more important. For on reflection, when we put on the more sophisticated consequentialist hat, we realize that such abstention probably is an important component of the broader set of actions that constitutes the ethically optimal path forward. The same ought to be concluded, I submit, in the non-human case.

Note, finally, that even if we believed ourselves to be exceptions to all of the psychological tendencies reviewed above — a belief we should be skeptical of given the prevalence of illusory superiority — it would still be hypocritical and a failure of integrity if we ourselves did not follow a norm that we would recommend others to follow. And consequentialists have good reasons to show high integrity.

Self-serving biases

This is more of a meta-consideration, suggesting that 1) we should be skeptical of convenient conclusions, and 2) we should adhere to stricter principles than a naive consequentialist analysis might imply.

A good reason to adhere to reasonably strict principles is that, if we loosen our principles and leave everything up for case-by-case calculation, we open the door for biases to sneak in.

As Jamie Mayerfeld writes in Suffering and Moral Responsibility (p. 121):

An agent who regarded [sound moral principles] as mere rules of thumb would ignore them whenever she calculated that compliance wasn’t necessary to minimize the cumulative badness of suffering. The problem is that it might also be in her own interest to violate these principles, and self-interest could distort her calculations, even when she calculated sincerely. She could thus acquire a pattern of violating the principles even when compliance with them really was necessary to prevent the worst cumulative suffering. To avoid this, we would want her to feel strongly inhibited from violating the principles. Inhibitions of this kind can insulate agents from the effect of biased calculations.

And there are indeed many reasons to think that our “calculations” are strongly biased against concern for non-human individuals and against the conclusion that we should stop consuming them. For example, there is the fact that people who do not consume animal products face significant stigma — for example, one US study found that people tended to evaluate vegans more negatively than other minority groups, such as atheists and homosexuals; “only drug addicts were evaluated more negatively than vegetarians and vegans”. And a recent study suggested that fear of stigmatization is among the main reasons why people do not want to stop eating animal products. Yet fear of stigmatization is hardly, on reflection, a sound moral reason to eat animal products.

A more elaborate review of relevant biases can be found in Vinding (2018, “Bias Alert: We Should Expect to Be Extremely Biased”) and Vinding (2020, 11.5).

Human externalities

Defenses of the consumption of non-human individuals often rest on strongly anthropocentric values (which cannot be justified). But even on such anthropocentric terms, a surprisingly strong case can in fact be made against animal consumption given the negative effects animal agriculture has on human health — effects that individual consumption will also contribute to on the margin.

First, as is quite salient these days, animal agriculture significantly increases the risk of zoonotic diseases. Many of the most lethal diseases of the last century were zoonotic diseases that spread to humans due to animal agriculture and/or animal consumption, including the 1918 flu (50-100 million deaths), AIDS (30-40 million deaths), the Hong Kong flu (1-4 million deaths), and the 1957-1958 flu (1-4 million deaths). The same is true of the largest epidemics so far in this century, such as SARS, Ebola, COVID-19, and various bird and swine flus.

As Babatunde (2011) notes:

A remarkable 61 percent of all human pathogens, and 75 percent of new human pathogens, are transmitted by animals, and some of the most lethal bugs affecting humans originate in our domesticated animals.

Antibiotic resistance is another health problem exacerbated by animal agriculture. Each year in the US, more than 35,000 people die from antibiotic-resistant infections, which is more than twice the annual number of US gun homicides. And around 80 percent of all antibiotics used in the US are given to non-human animals — often simply to promote growth rather than to fight infections. In other words, animal agriculture is a key contributor to antibiotic resistance.

The environmental effects of animal agriculture represent another important factor, or rather set of factors. There is pollution — “ammonia pollution linked to U.S. farming may impose human health costs that are greater than the profits earned by agricultural exports”. There are greenhouse gases contributing significantly to climate change. There is nitrate contamination of the groundwater from manure:

The EPA found that nitrates are the most widespread agricultural contaminant in drinking water wells and estimates that 4.5 million people [in the US] are exposed to elevated nitrate levels from drinking water wells. Nitrates, if they find their way into the groundwater, can potentially be fatal to infants.

Beyond the environmental effects, there are also significant health risks associated with the direct consumption of animal products, including red meat, chicken meat, fish meat, eggs, and dairy. Conversely, significant health benefits are associated with alternative sources of protein, such as beans, nuts, and seeds. This is relevant both collectively, for the sake of not supporting industries that actively promote poor human nutrition, and individually, for maximizing one’s own health so one can be more effectively altruistic.

A more thorough review of the human costs of animal agriculture is found in Vinding (2014, ch. 2).

In sum, one could argue that we also have a strong obligation to our fellow humans to avoid contributing to the various human health problems and risks caused by animal agriculture.

Both/And

What I have said above may seem in tension with the common consequentialist critique that says that animal advocates focus too much on individual consumer behavior. Yet in reality, there is no tension. It is both true, I submit, that avoiding the consumption of animal products is important (in purely consequentialist terms) and that most animal advocates focus far too much on individual consumer change compared to institutional change and wild-animal suffering. The latter point does not negate the former (the same view is expressed in John & Sebo, 2020, 3.3).
