Distrusting salience: Keeping unseen urgencies in mind

The psychological appeal of salient events and risks can be a major hurdle to optimal altruistic priorities and impact. My aim in this post is to outline a few reasons to approach our intuitive fascination with salient events and risks with a fair bit of skepticism, and to actively focus on that which is important yet unseen, hiding in the shadows of the salient.


  1. General reasons for caution: Availability bias and related biases
  2. The news: A common driver of salience-related distortions
  3. The narrow urgency delusion
  4. Massive problems that always face us: Ongoing moral disasters and future risks
  5. Salience-driven distortions in efforts to reduce s-risks
  6. Reducing salience-driven distortions

General reasons for caution: Availability bias and related biases

The human mind is subject to various biases that involve an overemphasis on the salient, i.e. that which readily stands out and captures our attention.

In general terms, there is the availability bias, also known as the availability heuristic, namely the common tendency to base our beliefs and judgments on information that we can readily recall. For example, we tend to overestimate the frequency of events when examples of these events easily come to mind.

Closely related is what is known as the salience bias: the tendency to give excessive weight to salient features and events when making decisions. For instance, when deciding whether to buy a given product, the salience bias may lead us to give undue importance to a particularly salient feature of that product — e.g. some fancy packaging — while neglecting less salient yet perhaps more relevant features.

A similar bias is the recency bias: our tendency to give disproportionate weight to recent events in our belief-formation and decision-making. This bias is in some sense predicted by the availability bias, since recent events tend to be more readily available to our memory. Indeed, the availability bias and the recency bias are sometimes considered equivalent, even though it seems more accurate to view the recency bias as a consequence or a subset of the availability bias; after all, readily remembered information does not always pertain to recent events.

Finally, there is the phenomenon of belief digitization, which is the tendency to give undue weight to (what we consider) the single most plausible hypothesis in our inferences and decisions, even when other hypotheses also deserve significant weight. For example, if we are considering hypotheses A, B, and C, and we assign them the probabilities 50 percent, 30 percent, and 20 percent, respectively, belief digitization will push us toward simply accepting A as though it were true. In other words, belief digitization pushes us toward altogether discarding B and C, even though B and C collectively have the same probability as A. (See also related studies on Salience Theory and on the overestimation of salient causes and hypotheses in predictive reasoning.)
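To see the arithmetic at stake, consider a toy calculation; the hypotheses and harm figures are purely hypothetical:

```python
# Toy illustration with hypothetical numbers: how "digitizing" a belief
# distorts an expected-value estimate compared to keeping the full
# distribution over hypotheses.

probs = {"A": 0.5, "B": 0.3, "C": 0.2}      # credences in three hypotheses
harm_if_true = {"A": 10, "B": 40, "C": 50}  # harm at stake under each one

# Full-distribution estimate: weight each hypothesis by its probability.
expected_harm = sum(probs[h] * harm_if_true[h] for h in probs)

# Digitized belief: treat the single most probable hypothesis as true
# and discard the rest.
top = max(probs, key=probs.get)
digitized_harm = harm_if_true[top]

print(expected_harm)   # 27.0
print(digitized_harm)  # 10
```

Even when A is the single most plausible hypothesis, planning as though A were simply true can badly misjudge what is at stake, since B and C jointly carry just as much probability as A.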

All of the biases mentioned above can be considered different instances of a broader cluster of availability/salience biases, and they each give us reason to be cautious of the influence that salient information has on our beliefs and our priorities.

The news: A common driver of salience-related distortions

One way in which our attention can become preoccupied with salient (though not necessarily crucial) information is through the news. Much has been written against spending a lot of time on the news, and the reasons against it are probably even stronger for those who are trying to spend their time and resources in ways that help sentient beings most effectively.

For even if we grant that there is substantial value in following the news, it seems plausible that the opportunity costs are generally too high, in terms of what one could instead spend one’s limited time learning about or advocating for. Moreover, there is a real risk that a preoccupation with the news has outright harmful effects overall, such as by gradually pulling one’s focus away from the most important problems and toward less important and less neglected problems. After all, the prevailing news criteria or news values decidedly do not reflect the problems that are most important from an impartial perspective concerned with the suffering of all sentient beings.

I believe the same issue exists in academia: A certain issue becomes fashionable, there are calls for abstracts, and there is a strong pull to write and talk about that given issue. And while it may indeed be important to talk and write about those topics for the purpose of getting ahead — or not falling behind — in academia, it seems more doubtful whether such topical talk is at all well-adapted for the purpose of making a difference in the world. In other words, the “news values” of academia are not necessarily much better than the news values of mainstream journalism.

The narrow urgency delusion

A salience-related pitfall that we can easily succumb to when following the news is what we may call the “narrow urgency delusion”. This is when the news covers some specific tragedy and we come to feel, at a visceral level, that this tragedy is the most urgent problem that is currently taking place. Such a perception is, in a very important sense, an illusion.

The reality is that tragedy on an unfathomable scale is always occurring, and the tragedies conveyed by the news are sadly but a tiny fraction of the horrors that are constantly taking place around us. Yet the tragedies that are always occurring, such as children who suffer and die from undernutrition and chickens who are boiled alive, are so common and so underreported that they all too readily fade from our moral perception. To our intuitions, these horrors seemingly register as mere baseline horror — as unsalient abstractions that carry little felt urgency — even though the horrors in question are every bit as urgent as the narrow sliver of salient horrors conveyed in the news (Vinding, 2020, sec. 7.6).

We should thus be clear that the delusion involved in the narrow urgency delusion is not the “urgency” part — there is indeed unspeakable horror and urgency involved in the tragedies reported by the news. The delusion rather lies in the “narrow” part. We find ourselves in a condition that contains extensive horror and torment, all of which merits compassion and concern.

So it is not that the salient victims are less important than what we intuitively feel, but rather that the countless victims whom we effectively overlook are far more important than what we (do not) feel.

Massive problems that always face us: Ongoing moral disasters and future risks

The following are some of the urgent problems that always face us, yet which are often less salient to us than the individual tragedies that are reported in the news:

These common and ever-present problems are, by definition, not news, which hints at the inherent ineffectiveness of news when it comes to giving us a clear picture of the reality we inhabit and the problems that confront us.

As the final entry on the list above suggests, the problems that face us are not limited to ongoing moral disasters. We also face risks of future atrocities, potentially involving horrors on an unprecedented scale. Such risks will plausibly tend to feel even less salient and less urgent than do the ongoing moral disasters we are facing, even though our influence on these future risks — and future suffering in general — could well be more consequential given the vast scope of the long-term future.

So while salience-driven biases may blind us to ongoing large-scale atrocities, they probably blind us even more to future suffering and risks of future atrocities.

Salience-driven distortions in efforts to reduce s-risks

There are many salience-related hurdles that may prevent us from giving significant priority to the reduction of future suffering. Yet even if we do grant a strong priority to the reduction of future suffering, including s-risks in particular, there are reasons to think that salience-driven distortions still pose a serious challenge in our prioritization efforts.

Our general availability bias gives us some reason to believe that we will overemphasize salient ideas and hypotheses in efforts to reduce future suffering. Yet perhaps more compelling are the studies on how we tend to greatly overestimate salient hypotheses when we engage in predictive and multi-stage reasoning in particular. (Multi-stage reasoning is when we make inferences in successive steps, such that the output of one step provides the input for the next one.)

After all, when we are trying to predict the main sources of future suffering, including specific scenarios in which s-risks materialize, we are very much engaging in predictive and multi-stage reasoning. Therefore, we should arguably expect our reasoning about future causes of suffering to be too narrow by default, with a tendency to give too much weight to a relatively small set of salient risks at the expense of a broader class of less salient (yet still significant) risks that we are prone to dismiss in our multi-stage inferences and predictions.
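The formal point can be sketched with a deliberately simplified two-stage example; the labels and numbers here are invented for illustration:

```python
# Hypothetical two-stage sketch: predicting whether a given source of
# suffering dominates the future, via an intermediate question (here
# arbitrarily labeled "which technology matures first").

p_intermediate = {"tech_X": 0.6, "tech_Y": 0.4}   # stage-one credences
p_outcome_given = {"tech_X": 0.1, "tech_Y": 0.9}  # stage-two conditionals

# Proper multi-stage inference: marginalize over all stage-one hypotheses.
p_full = sum(p_intermediate[s] * p_outcome_given[s] for s in p_intermediate)

# Salience-driven shortcut: carry only the most plausible stage-one
# hypothesis into stage two, discarding the alternative.
top = max(p_intermediate, key=p_intermediate.get)
p_shortcut = p_outcome_given[top]

print(round(p_full, 2))  # 0.42
print(p_shortcut)        # 0.1
```

Marginalizing keeps every branch in play, whereas feeding only the modal hypothesis into the next stage discards 40 percent of the probability mass and, in this contrived case, most of the predicted risk.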

This effect can be further reinforced through other mechanisms. For example, if we have described and explored — or even just imagined — a certain class of risks in greater detail than other risks, then this alone may lead us to regard those more elaborately described risks as being more likely than less elaborately explored scenarios. Moreover, if we find ourselves in a group of people who focus disproportionately on a certain class of future scenarios, this may further increase the salience and perceived likelihood of these scenarios, compared to alternative scenarios that may be more salient in other groups and communities.

Reducing salience-driven distortions

The pitfalls mentioned above seem to suggest some concrete ways in which we might reduce salience-driven distortions in efforts to reduce future suffering.

First, they recommend caution about the danger of neglecting less salient hypotheses when engaging in predictive and multi-stage reasoning. Specifically, when thinking about future risks, we should be careful not to simply focus on what appears to be the single greatest risk, and to effectively neglect all others. After all, even if the risk we regard as the single greatest risk indeed is the single greatest risk, that risk might still be fairly modest compared to the totality of future risks, and we might still do better by deliberately working to reduce a relatively broad class of risks.
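As a stylized illustration of this first point, suppose (with entirely made-up numbers) that expected future harm decomposes across a portfolio of risks as follows:

```python
# Hypothetical portfolio of risks, expressed as shares of total expected
# future harm; the names and numbers are purely illustrative.
risk_shares = {"most_salient_risk": 0.30, "risk_2": 0.20,
               "risk_3": 0.20, "risk_4": 0.15, "residual_risks": 0.15}

# Narrow strategy: fully eliminate the single largest risk.
narrow_reduction = risk_shares["most_salient_risk"]  # 0.30

# Broad strategy: reduce every risk in the portfolio by 40 percent.
broad_reduction = 0.4 * sum(risk_shares.values())    # ~0.40, shares sum to 1

print(narrow_reduction)
print(round(broad_reduction, 2))
```

In this sketch, a broad strategy that merely trims every risk by 40 percent prevents more expected harm than completely eliminating the largest single risk, even though the latter is the most salient target.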

Second, the tendency to judge scenarios to be more likely when we have thought about them in detail would seem to recommend that we avoid exploring future risks in starkly unbalanced ways. For instance, if we have explored one class of risks in elaborate detail while largely neglecting another, it seems worth trying to outline concrete scenarios that exemplify the more neglected class of risks, so as to correct any potentially unjustified disregard of their importance and likelihood.

Third, the possibility that certain ideas can become highly salient in part for sociological reasons may recommend a strategy of exchanging ideas with, and actively seeking critiques from, people who do not fully share the outlook that has come to prevail in one’s own group.

In general, it seems that we are likely to underestimate our empirical uncertainty (Vinding, 2020, sec. 9.1-9.2). The space of possible future outcomes is vast, and any specific risk that we may envision is but a tiny subset of the risks we are facing. Hence, our most salient ideas regarding future risks should ideally be held up against a big question mark that represents the many (currently) unsalient risks that confront us.

Put briefly, we need to cultivate a firm awareness of the limited reliability of salience, and a corresponding awareness of the immense importance of the unsalient. We need to make an active effort to keep unseen urgencies in mind.

Four reasons to help nearby insects even if your main priority is to reduce s-risks

When trying to reduce suffering in effective ways, one can easily get pulled toward an abstract focus that pertains almost exclusively to speculative far future scenarios. There are, to be sure, good reasons to work to reduce risks of astronomical future suffering, or s-risks. Yet even if the reduction of s-risks is our main priority, there are still compelling reasons to also focus on helping beings in our immediate surroundings, such as insects and other small beings. My aim in this post is to list a few of these reasons.

I should note that most of the points I make in this essay pertain to all sentient beings who may cross our paths — not just insects — but I still want to emphasize insects because we encounter them so often, and because they tend to be uniquely at our mercy.


  1. Helping nearby insects is often trivially cheap and worthwhile
  2. Helping nearby insects can reinforce our dedication and commitment to suffering reduction
  3. Helping nearby insects can help foster greater concern for neglected beings
  4. Helping nearby insects can prevent suffering reduction from becoming unduly abstract
  5. Further reading

Helping nearby insects is often trivially cheap and worthwhile

Perhaps the main argument against helping beings in our vicinity, from an altruistic perspective, is that the opportunity costs are too high in terms of what we could be doing to help future beings.

I think this is an important argument, as the opportunity costs are surely worth keeping in mind, and they can indeed be high. That said, many efforts to help insects in our vicinity are extremely cheap, and thus carry practically no costs to our efforts to reduce future suffering. Indeed, as I will argue below, efforts to help beings in our vicinity may generally have beneficial effects on our efforts to reduce future suffering.

Helping nearby insects can reinforce our dedication and commitment to suffering reduction

Small-scale acts of beneficence toward insects can plausibly help to reinforce a sense of commitment to the reduction of suffering, including the reduction of s-risks. Specifically, such small-scale acts may help strengthen a sense of self-identity that is centered on suffering reduction as a core purpose, and pursuing these compassionate acts may be seen as a uniquely tangible step in line with that purpose — a concrete step toward a world with less suffering.

Helping nearby insects can help foster greater concern for neglected beings

In addition to fostering greater concern for suffering, efforts to help insects may likewise foster greater concern for small beings in particular. This is important since small beings such as insects are extremely numerous and neglected, and also since a large fraction of future suffering is likely to occur in minds that are similarly small and foreign to us.

Thus, when we perform small-scale actions that are aligned with a concern for tiny creatures with foreign minds, we plausibly make ourselves better able to take the interests of such minds seriously as a general matter. Conversely, if we act in ways that disregard these beings, we may be more inclined to rationalize the harms that befall them (by analogy to how eating certain animals can lead us to deny and disregard their sentience and their interests).

Helping nearby insects can prevent suffering reduction from becoming unduly abstract

A danger of working to reduce far future suffering is that it can end up resembling a game of speculative abstractions that have little to do with real-world suffering and real-world efforts to help others. To be clear, I think theoretical work on how we can best reduce future suffering is extremely important, and I have argued elsewhere that research is often more important than direct action in efforts to improve the world. Yet there is nevertheless a risk that such research-related work ends up being overly abstract, and that the reduction of suffering ends up being a problem that we mostly think and talk about, as opposed to it being a problem that we are above all striving to do something about.

Efforts to prevent harm to nearby insects may help reinforce this action-centered approach. In particular, they can help firmly establish the reduction of suffering as something that we do, rather than something that we can only meaningfully pursue by thinking about risks of future suffering.

This argument in effect turns a common objection on its head: a focus on helping insects is sometimes dismissed as unduly abstract and detached. Yet there is nothing inherently abstract or detached about helping insects. It can be quite the opposite.

Further reading

Some suggestions on how to reduce the suffering of insects, including the suffering of insects in our vicinity, can be found here. As the author stresses toward the end, it is important not to make the mistake of spending too much effort on these suggestions; it is indeed critical to keep competing priorities and opportunity costs in mind.

For some considerations on prioritizing short-term versus long-term suffering, see:

Some pitfalls of utilitarianism

My aim in this post is to highlight and discuss what I consider to be some potential pitfalls of utilitarianism. These are not necessarily pitfalls that undermine utilitarianism at a theoretical level (although some of them might also pose a serious challenge at that level). As I see them, they are more pitfalls at the practical level, relating to how utilitarianism is sometimes talked about, thought about, and acted on in ways that may be suboptimal by the standards of utilitarianism itself.

I should note from the outset that this post is not inspired by recent events involving dishonest and ruinous behavior by utilitarian actors; I have been planning to write this post for a long time. But recent events arguably serve to highlight the importance of some of the points I raise below.


  1. Restrictive formalisms and “formalism first”
  2. Risky and harmful decision procedures
    1. Allowing speculative expected value calculations to determine our actions
    2. Underestimating the importance of emotions, virtues, and other traits of moral actors
    3. Uncertainty-induced moral permissiveness
    4. Uncertainty-induced lack of moral drive
    5. A more plausible approach
  3. The link between utilitarian judgments and Dark Triad traits: A cause for reflection
  4. Acknowledgments

Restrictive formalisms and “formalism first”

A potential pitfall of utilitarianism, in terms of how it is commonly approached, is that it can make us quick to embrace certain formalisms and conclusions, as though we have to accept them on pain of mathematical inconsistency.

Consider the following example: Alice is a utilitarian who thinks that a certain mildly enjoyable experience, x, has positive value. On Alice’s view, it is clear that no number of instances of x would be worse than a state of extreme suffering, since a state of extreme suffering and a mildly enjoyable experience are completely different categories of experience. Over time, Alice reads about different views of wellbeing and axiology, and she eventually changes her position such that she finds it more plausible that no experiential states are above a neutral state, and that no states have intrinsic positive value (i.e. she comes to embrace a minimalist axiology).

Alice thus no longer considers it plausible to assign positive value to experience x, and instead now assigns mildly negative value to the experience (e.g. because the experience is not entirely flawless; it contains some bothersome disturbances). Having changed her mind about the value of experience x, Alice now feels mathematically compelled to say that sufficiently many instances of that experience are worse than any experience of extreme suffering, even though she finds this implausible on its face — she still thinks state x and states of extreme suffering belong to wholly different categories of experience.

To be clear, the point I am trying to make here is not that the final conclusion that Alice draws is implausible. My point is rather that certain prevalent ways of formalizing value can make people feel needlessly compelled to draw particular conclusions, as though there are no coherent alternatives, when in fact there are. More generally, there may be a tendency to “put formalism first”, as it were, rather than to consider substantive plausibility first, and to then identify a coherent formalism that fits our views of substantive plausibility.

Note that the pitfall I am gesturing at here is not one that is strictly implied by utilitarianism, as one can be a utilitarian yet still reject standard formalizations of utilitarianism. But being bound to a restrictive formalization scheme nevertheless seems common, in my experience, among those who endorse or sympathize with utilitarianism.

Risky and harmful decision procedures

A standard distinction in consequentialist moral theory is that between ‘consequentialist criteria of rightness’ and ‘consequentialist decision procedures’. One might endorse a consequentialist criterion of rightness — meaning that consequences determine whether a given action is right or wrong — without necessarily endorsing consequentialist decision procedures, i.e. decision procedures in which one decides how to act based on case-by-case calculations of the expected outcomes.

Yet while this distinction is often emphasized, it still seems that utilitarianism is prone to inspire suboptimal decision procedures, also by its own standards (as a criterion of rightness). The following are a few of the ways in which utilitarianism can inspire suboptimal decision procedures, attitudes, and actions by its own standards.

Allowing speculative expected value calculations to determine our actions

A particular pitfall is to let our actions be strongly determined by speculative expected value calculations. There are various reasons why this may be suboptimal by utilitarian standards, but an important one is simply that the probabilities that go into such calculations are likely to be inaccurate. If our probability estimates on a given matter are highly uncertain and likely to change substantially as we learn more, then strong bets based on our current estimates run a large risk of being suboptimal.

The robustness of a given probability estimate is thus a key factor to consider when deciding whether to act on that estimate, yet it can be easy to neglect this factor in real-world decisions.
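A toy comparison may help convey the fragility at issue; all figures here are hypothetical:

```python
# Hypothetical comparison: a speculative intervention whose expected value
# hinges on a fragile probability estimate, versus a robust alternative.
payoff = 1000          # harm averted if the speculative bet works out
p_estimate = 0.01      # current best guess, known to be highly uncertain
robust_value = 5       # value of a well-understood alternative

ev_speculative = p_estimate * payoff  # 10.0: looks twice as good...

# ...but if the estimate could easily shift by an order of magnitude as we
# learn more, the ranking between the two options is not stable.
for p in (0.001, 0.003, 0.01):
    print(p * payoff)  # spans roughly 1 to 10, straddling the robust option
```

When the point estimate itself could plausibly move by an order of magnitude, the apparent superiority of the speculative bet tells us little.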

Underestimating the importance of emotions, virtues, and other traits of moral actors

A related pitfall is to underestimate the significance of emotions, attitudes, and virtues. Specifically, if we place a strong emphasis on the consequences of actions, we might in turn be inclined to underemphasize the traits and dispositions of the moral actors themselves. Yet the traits and dispositions of moral actors are often critical to emphasize and to actively develop if we are to create better outcomes. Our cerebral faculties and our intuitive attitudinal faculties can both be seen as tools that enable us to navigate the world, and the latter are often more helpful for creating desired outcomes than the former (cf. Gigerenzer, 2001).

A specific context in which I and others have tried to argue for the importance of underlying attitudes and traits, in contrast to mere cerebral beliefs, is when it comes to animal ethics. In particular, engaging in practices that are transparently harmful and exploitative toward non-human beings is harmful not only in terms of how it directly contributes to those specific exploitative practices, but also in terms of how it shapes our emotions, attitudes, and traits — and thus ultimately our behavior — going forward.

More generally, to emphasize outcomes while placing relatively little emphasis on the traits of humans, as moral actors, seems to overlook the largely habitual and disposition-based nature of human behavior. After all, our emotions and attitudes not only play important roles in our individual motivations and actions, but also in the social incentives that influence the behavior of others (cf. Haidt, 2001).

In short, if one embraces a consequentialist criterion of rightness, it seems that there are good reasons to cultivate the temperament of a virtue ethicist and the felt attitudes of a non-consequentialist who finds certain actions unacceptable in practically all situations.

Uncertainty-induced moral permissiveness

Another pitfall is to practically surrender one’s capacity for moral judgment due to uncertainty about long-term outcomes. In its most extreme manifestations, this might amount to declaring that we do not know whether people who committed large-scale atrocities in the past acted wrongly, since we do not know the ultimate consequences of those actions. But perhaps a more typical manifestation is to fail to judge, let alone oppose, ongoing harmful actions and intolerant values (e.g. clear cases of discrimination), again with reference to uncertainty about the long-term consequences of those actions and values.

This pitfall relates to the point about dispositions and attitudes made above, in that the disposition to be willing to judge and oppose harmful actions and views plausibly has better overall consequences than a disposition to be meek and unwilling to take a strong stance against such things.

After all, while there is significant uncertainty about the long-term future, one can still make reasonable inferences about which broad directions we should ideally steer our civilization toward over the long term (e.g. toward showing concern for suffering in prudent yet morally serious ways). Utilitarians have reason to help steer the future in those directions, and to develop traits and attitudes that are commensurate with such directional changes. (See also “Radical uncertainty about outcomes need not imply (similarly) radical uncertainty about strategies”.)

Uncertainty-induced lack of moral drive

A related pitfall is uncertainty-induced lack of moral drive, whereby empirical uncertainty serves as a stumbling block to dedicated efforts to help others. This is probably also starkly suboptimal, for reasons similar to those outlined above: all things considered, it is probably ideal to develop a burning drive to help other sentient beings, despite uncertainty about long-term outcomes.

Perhaps the main difficulty in this respect is to know which particular project or aim is most important to work on. Yet a potential remedy to this problem (here conveyed in a short and crude fashion) might be to first make a dedicated effort toward the concrete goal of figuring out which projects or aims seem most worth pursuing — i.e. a broad and systematic search, informed by copious reading. And when one has eventually identified an aim or project that seems promising, it might be helpful to somewhat relax the “doubting modules” of our minds and to stick to that project for a while, pursuing the chosen aim with dedication (unless something clearly better comes up).

A more plausible approach

The previous sections have mostly pointed to suboptimal ways to approach utilitarian decision procedures. In this section, I want to briefly outline what I would consider a more defensible way to approach decision-making from a utilitarian perspective (whether one is a pure utilitarian or whether one merely includes a utilitarian component in one’s moral view).

I think two key facts must inform any plausible approach to utilitarian decision procedures:

  1. We have massive empirical uncertainty.
  2. We humans have a strong proclivity to deceive ourselves in self-serving ways.

These two observations carry significant implications. In short, they suggest that we should generally approach moral decisions with considerable humility, and with a strong sense of skepticism toward conclusions that are conveniently self-serving or low on integrity.

Given our massive uncertainty and our endlessly rationalizing minds, the ideal approach to utilitarian decision procedures is probably one that has a rather large distance between the initial question of “how to act” and the final decision to pursue a given action — at least when one is trying to calculate one’s way to an optimal decision (as opposed to when one is relying on commonly endorsed rules of thumb or intuitions). And this distance should probably be especially large if the decision that at first seems most recommendable is one that other moral views, along with common-sense intuitions, would deem profoundly wrong.

In other words, it seems that utilitarian decision procedures are best approached by assigning a fairly high prior to the judgments of other ethical views and common-sense moral intuitions (in terms of how plausible those judgments are from a utilitarian perspective), at least when these other views and intuitions converge strongly on a given conclusion. And it seems warranted to then be quite cautious and slow to update away from that prior, in part because of our massive uncertainty and our self-deceived minds. This is not to say that one could not end up with significant divergences relative to other widely endorsed moral views, but merely that such strong divergences probably need to be supported by a level of evidence that exceeds a rather high bar.
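Such cautious updating can be caricatured in a toy Bayesian calculation; the numbers are hypothetical:

```python
# Toy Bayesian sketch with hypothetical numbers: starting from a strong
# prior in favor of a widely shared moral judgment, modest contrary
# evidence should move us only a little.
prior = 0.95                      # credence that the common-sense verdict holds
prior_odds = prior / (1 - prior)  # 19:1

# A convenient-looking calculation that points the other way: given our
# tendency to rationalize, suppose such a calculation is only twice as
# likely to arise when the common-sense verdict is in fact wrong.
likelihood_ratio = 2.0

posterior_odds = prior_odds / likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(round(posterior, 3))  # 0.905: still strongly with the prior
```

On these assumptions, overturning the prior would require evidence with a likelihood ratio well above 19, i.e. the kind of high bar described above.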

Likewise, it seems worth approaching utilitarian decision procedures with a prior that strongly favors actions of high integrity, not least because we should expect our rationalizing minds to be heavily biased toward low integrity — especially when nobody is looking.

Put briefly, it seems that a more defensible approach to utilitarian decision procedures would be animated by significant humility and would embody a strong inclination toward key virtues of integrity, kindness, honesty, etc., partly due to our strong tendency to excuse and rationalize deficiencies in these regards.

The link between utilitarian judgments and Dark Triad traits: A cause for reflection

There are many studies that find a modest but significant association between proto-utilitarian judgments and the personality traits of psychopathy (impaired empathy) and Machiavellianism (manipulativeness and deceitfulness). (See Bartels & Pizarro, 2011; Koenigs et al., 2012; Gao & Tang, 2013; Djeriouat & Trémolière, 2014; Amiri & Behnezhad, 2017; Balash & Falkenbach, 2018; Karandikar et al., 2019; Halm & Möhring, 2019; Dinić et al., 2020; Bolelli, 2021; Luke & Gawronski, 2021; Schönegger, 2022.)

Specifically, the aspect of utilitarian judgment that seems most associated with psychopathy is the willingness to commit harm for the sake of the greater good, whereas endorsement of impartial beneficence — a core feature of utilitarianism and many other moral views — is associated with empathic concern, and is thus negatively associated with psychopathy (Kahane et al., 2018; Paruzel-Czachura & Farny, 2022). Another study likewise found that the connection between psychopathy and utilitarian moral judgments is in part explained by a reduced aversion to carrying out harmful acts (Patil, 2015).

Of course, whether a particular moral view, or a given feature of a moral view, is associated with certain undesirable personality traits by no means refutes that moral view. But the findings reviewed above might still be a cause for self-reflection among those of us who endorse or sympathize with some form of utilitarianism.

For example, maybe utilitarians are generally inclined to have fewer moral inhibitions compared to most people — e.g. because utilitarian reasoning might override intuitive judgments and norms, or because utilitarians are (perhaps) above average in trait Machiavellianism, in which case they might have fewer strongly felt moral inhibitions to overcome in the first place. And if utilitarians do tend to have fewer or weaker moral restraints of certain kinds, this could in turn dispose them to be less ethical in some respects, also by their own standards.

To be clear, this is all somewhat speculative. Yet, at the same time, these speculations are not wholly unmotivated. In terms of potential upshots, it seems that a utilitarian proneness to reduced moral restraint, if real, would give utilitarian actors additional reason to be skeptical of inclinations to disregard common moral inhibitions against harmful acts and low-integrity behavior. In short, it would give utilitarians even more reason to err on the side of integrity.


For helpful comments, I am grateful to Tobias Baumann, Simon Knutsson, and Winston Oswald-Drummond.

The catastrophic rise of insect farming and its implications for future efforts to reduce suffering

On the 17th of August 2021, the EU authorized the use of insects as feed for farmed animals such as chickens and pigs. This was a disastrous decision for sentient beings, as it may greatly increase the number of beings who will suffer in animal agriculture. Sadly, this was just one in a series of disastrous decisions that the EU has made regarding insect farming in the last couple of years. Most recently, in February 2022, they authorized the farming of house crickets for human consumption, after having made similar decisions for the farming of mealworms and migratory locusts in 2021.

Many such catastrophic decisions probably lie ahead, seeing that the EU is currently reviewing applications for the farming of nine additional kinds of insects. This brief post reviews some reflections and potential lessons in light of these harmful legislative decisions.


  1. Could we have done better?
  2. How can we do better going forward?
    1. Questioning our emotions
    2. Seeing the connection between current institutional developments and s-risks
    3. Proactively searching for other important policy areas and interventions

Could we have done better?

The most relevant aspect to reflect upon in light of these legislative decisions is their strategic implications. Could we have done better? And what could we do better going forward?

I believe the short answer to the first question is a resounding “yes”. I believe that the animal movement could have made far greater efforts to prevent this development (which is not saying much, since the movement at large does not appear to have made much of an effort to prevent this disaster). I am not saying that such efforts would have prevented this development for sure, but I believe that they would have reduced the probability and expected scale of it considerably, to such an extent that it would have been worthwhile to pursue such preventive efforts.

In concrete terms, these efforts could have included:

  • Wealthy donors such as Open Philanthropy making significant donations toward preventing the expansion of insect farming (e.g. investing in research and campaigning work).
  • Animal advocates exploring and developing the many arguments for preventing such an expansion.
  • Animal advocates disseminating these arguments to the broader public, to fellow advocates, and to influential people and groups (e.g. politicians and policy advisors).

It is important to note that efforts of this kind not only had the potential to change a few significant policy decisions; they could potentially have prevented — or at least reduced — a whole cascade of harmful policy decisions. Of course, having such an impact might still be possible today, even if it is a lot more difficult at this later stage, where the momentum behind insect farming already appears strong and growing.

As Abraham Rowe put it, “not working on insect farming over the last decade may come to be one of the largest regrets of the EAA [Effective Animal Activist] community in the near future.” 

How can we do better going forward?

When asking how we can do better, I am particularly interested in what lessons we can draw in our efforts to reduce risks of astronomical future suffering (s-risks). After all, the EU’s recent decisions to allow various kinds of insect farming will not only cause enormous amounts of suffering for insects in the near future, but they arguably also increase s-risks to a non-negligible extent, such as by (somewhat) increasing the probability that insects and other small beings will be harmed on an astronomical scale in the future.

So institutional decisions like these seem relevant for our efforts to reduce s-risks, and our failure to prevent these detrimental decisions can plausibly provide relevant lessons for our priorities and strategies going forward. (An implication of these harmful developments that I will not dive into here is that they give us further reason to be pessimistic about the future.)

The following are some of the lessons that tentatively stand out to me.

Questioning our emotions

I suspect that one of the main reasons that insect farming has been neglected by animal advocates is that it fails to appeal to our emotions. A number of factors plausibly contribute to this low level of emotional appeal. For instance, certain biases may prevent proper moral consideration for insects in general, and scope insensitivity may prevent us from appreciating the large number of insects who will suffer due to insect farming. (I strongly doubt that anyone is exempt from these biases, and I suspect that even people who are aware of them might still have neglected insect farming partly as a consequence of these biases.)

Additionally, we may have a bias to focus too narrowly on the suffering that is currently taking place rather than (also) looking ahead to consider how new harmful practices and sources of suffering might emerge, potentially on far greater scales than what we currently see. Reducing the risk of such novel atrocities occurring on a vast scale might not feel as important as does the reduction of existing forms of suffering. Yet the recent rise of insect farming, and the fact that we likely could have done effective work to prevent it, suggest that such feelings are not reliable.

Seeing the connection between current institutional developments and s-risks

When thinking about s-risks, it can be easy to fall victim to excessively abstract thinking and (what I have called) “long-term nebulousness bias” — i.e. a tendency to overlook concrete data and opportunities relevant to long-term influence. In particular, the abstract nature of s-risks may lead us to tacitly believe that good opportunities to influence policy (with respect to s-risk reduction) can only be found well into the future, and to perhaps even assume that there is not much of significance that we can do to reduce s-risks at the policy level today.

Yet I think the case of insect farming is a counterexample to such beliefs. To be clear, I am not saying that insect farming is necessarily the most promising policy area that we can focus on with respect to s-risk reduction. But it is plausibly a significant one, and one that those trying to reduce s-risks should arguably have paid more attention to in the past. And it still appears to merit greater focus today.

Proactively searching for other important policy areas and interventions

As hinted above, the catastrophic rise of insect farming is in some sense a proof of concept that there are policy decisions in the making that plausibly have a meaningful impact on s-risks. More than that, the case of insect farming might be an example where policy decisions in our time could be fairly pivotal — whether we see a ban on insect farming versus a rapidly unfolding cascade of policy decisions that swiftly expand insect farming might make a big difference, not least because such a cascade could leave us in a position where there is more institutional, financial, and value-related momentum in favor of insect farming (e.g. if massive industries with lobby influence have already emerged around it, and if most people already consider farmed insects an important part of their diet).

This suggests a critical lesson: those working to reduce s-risks have good reason to search for similar, potentially even more influential policy decisions that are being made today or in the near future. By analogy to how animal advocates likely could have made a significant difference (in expectation) if they had campaigned against the expansion of insect farming over the last decade, we may now do well by looking decades ahead and considering which pivotal policy decisions we might now be in a good position to influence. The need for such a proactive search effort could be the most important takeaway in light of this recent string of disastrous decisions.

Beware underestimating the probability of very bad outcomes: Historical examples against future optimism

It may be tempting to view history through a progressive lens that sees humanity as climbing toward ever greater moral progress and wisdom. As the famous quote popularized by Martin Luther King Jr. goes: “The arc of the moral universe is long, but it bends toward justice.”

Yet while we may hope that this is true, and do our best to increase the probability that it will be, we should also keep in mind that there are reasons to doubt this optimistic narrative. For some, the recent rise of right-wing populism is a salient reason to be less confident about humanity’s supposed path toward ever more compassionate and universal values. But it seems that we find even stronger reasons to be skeptical if we look further back in history. My aim in this post is to present a few historical examples that in my view speak against confident optimism regarding humanity’s future.


  1. Germany in year 1900
  2. Shantideva around year 700
  3. Lewis Gompertz and J. Howard Moore in the 19th century

Germany in year 1900

In 1900, Germany was far from being a paragon of moral advancement. It was a colonial power, antisemitism was widespread, and bigoted anti-Polish Germanisation policies were in effect. Yet the Germany of 1900 was nevertheless far from being the Germany of 1939-1945, which was the main aggressor in the deadliest war in history and the perpetrator of the largest genocide in history.

In other words, Germany had undergone an extreme case of moral regress along various dimensions by 1942 (the year the so-called Final Solution was formulated and approved by the Nazi leadership) compared to 1900. And this development was not easy to predict in advance. Indeed, for historian of antisemitism Shulamit Volkov, a key question regarding the Holocaust is: “Why was it so hard to see the approaching disaster?”

If one had told the average German citizen in 1900 about the atrocities that their country would perpetrate four decades later, would they have believed it? What probability would they have assigned to the possibility that their country would commit atrocities on such a massive scale? I suspect it would have been very low. They might not have seen more reason to expect such moral regress than we do today when we think of our future.

A lesson that we can draw from Germany’s past moral deterioration is, to paraphrase Volkov’s question, that approaching disasters can be hard to see in advance. And this lesson suggests that we should not be too confident as to whether we ourselves might currently be headed toward disasters that are difficult to see in advance.

Shantideva around year 700

Shantideva was a Buddhist monk who lived ca. 685-763. He is best known as the author of A Guide to the Bodhisattva’s Way of Life, which is a remarkable text for its time. The core message is one of profound compassion for all sentient beings, and Shantideva not only describes such universally compassionate ideals, but he also presents stirring encouragements and cogent reasoning in favor of acting on those ideals.

That such a universally compassionate text existed at such an early time is a deeply encouraging fact in one sense. Yet in another sense, it is deeply discouraging. That is, when we think about all the suffering, wars, and atrocities that humanity has caused since Shantideva expounded these ideals — centuries upon centuries of brutal violence and torment imposed upon human and non-human beings — it seems that a certain pessimistic viewpoint gains support.

In particular, it seems that we should be pessimistic about notions along the lines of “compassionate ideals presented in a compelling way will eventually create a benevolent world”. After all, even today, 1300 years later, when we generally pride ourselves on being far more civilized and morally developed than our ancestors, we are still painfully far from observing the most basic of compassionate ideals in relation to other sentient beings.

Of course, one might think that the problem is merely that people have yet to be exposed to compassionate ideals such as those of Shantideva — or those of Mahavira or Mozi, both of whom lived more than a thousand years before Shantideva. But even if we grant that this is the main problem, it still seems that historical cases like these give us some reason to doubt whether most people ever will be exposed to such compassionate ideals, or whether most people would accept such ideals upon becoming exposed to them, let alone be willing to act on them. The fact that these memes have not caught on to a greater degree than they have, despite existing in such developed forms a long time ago, is some evidence that they are not nearly as virulent as many of us would have hoped.

Speaking for myself at least, I can say that I used to think that people just needed to be exposed to certain compassionate ideals and compassion-based arguments, and then they would change their minds and behaviors due to the sheer compelling nature of these ideals and arguments. But my experience over the years, e.g. with animal advocacy, has made me far more pessimistic about the force of such arguments. And the limited influence of highly sophisticated expositions of these ideals and arguments made many centuries ago is further evidence for that pessimism (relative to my previous expectations).

Of course, this is not to say that we can necessarily do better than to promote compassion-based ideals and arguments. It is merely to say that the best we can do might be a lot less significant — or be less likely to succeed — than what many of us had initially expected.

Lewis Gompertz and J. Howard Moore in the 19th century

Lewis Gompertz (ca. 1784-1861) and J. Howard Moore (1862-1916) both have a lot in common with Shantideva, as they likewise wrote about compassionate ethics relating to all sentient beings. (And all three of them touched on wild-animal suffering.) Yet Gompertz and Moore, along with other figures in the 19th century, wrote more explicitly about animal rights and moral vegetarianism than did Shantideva. Two observations seem noteworthy with regard to these writings.

One is that Gompertz and Moore both wrote about these topics before the rise of factory farming. That is, even though authors such as Gompertz and Moore made strong arguments against exploiting and killing other animals in the 19th century, humanity still went on to exploit and kill beings on a far greater scale than ever before in the 20th century, indeed on a scale that is still increasing today.

This may be a lesson for those who are working to reduce risks of astronomical suffering at present: even if you make convincing arguments against a moral atrocity that humanity is committing or otherwise heading toward, and even if you make these arguments at an early stage where the atrocity has yet to (fully) develop, this might still not be enough to prevent it from happening on a continuously expanding scale.

The second and closely related observation is that Gompertz and Moore both seem to have focused exclusively on animal exploitation as it existed in their own times. They did not appear to focus on preventing the problem from getting worse, even though one could argue, in hindsight, that such a strategy might have been more helpful overall.

Indeed, even though Moore’s outlook was quite pessimistic, he still seems to have been rather optimistic about the future. For instance, in the preface to his book The Universal Kinship (1906), he wrote: “The time will come when the sentiments of these pages will not be hailed by two or three, and ridiculed or ignored by the rest; they will represent Public Opinion and Law.”

Gompertz appeared similarly optimistic about the future, as he wrote in his Moral Inquiries (1824, p. 48): “… though I cannot conceive how any person can shut his eyes to the general state of misery throughout the universe, I still think that it is for a wise purpose; that the evils of life, which could not properly be otherwise, will in the course of time be rectified …” Neither of them seems to have predicted that animal exploitation would get far worse in many ways (e.g. the horrible conditions of factory farms) or that it would increase vastly in scale.

This second observation might likewise carry lessons for animal activists and suffering reducers today. If these leading figures of 19th-century animal activism tacitly underestimated the risk that things might get far worse in the future, and as a result paid insufficient attention to such risks, could it be the case that most activists today are similarly underestimating and underprioritizing future risks of things getting even worse still? This question is at least worth pondering.

On a general and concluding note, it seems important to be aware of our tendencies to entertain wishful thinking and to be under the spell of the illusion of control. Just because a group of people have embraced some broadly compassionate values, and in turn identified ongoing atrocities and future risks based on those values, it does not mean that those people will be able to steer humanity’s future such that we avoid these atrocities and risks. The sad reality is that universally compassionate values are far from being in charge.

Radical uncertainty about outcomes need not imply (similarly) radical uncertainty about strategies

Our uncertainty about how the future will unfold is vast, especially on long timescales. In light of this uncertainty, it may be natural to think that our uncertainty about strategies must be equally vast and intractable. My aim in this brief post is to argue that this is not the case.


  1. Analogies to games, competitions, and projects
  2. Disanalogy in scope?
  3. Three robust strategies for reducing suffering
  4. The golden middle way: Avoiding overconfidence and passivity

Analogies to games, competitions, and projects

Perhaps the most intuitive way to see that vast outcome uncertainty need not imply vast strategic uncertainty is to consider games by analogy. Take chess as an example. It allows a staggering number of possible outcomes on the board, and chess players generally have great uncertainty about how a game of chess will unfold, even as they can make some informed predictions (similar to how we can make informed predictions in the real world).

Yet despite the great outcome uncertainty, there are still many strategies and rules of thumb that are robustly beneficial for increasing one’s chances of winning a game of chess. A trivially obvious one is to not lose pieces without good reason, yet seasoned chess players will know a long list of more advanced strategies and heuristics that tend to be beneficial in many different scenarios. (For an example of such a list, see e.g. here.)

Of course, chess is by no means the only example. Across a wide range of board games and video games, the same basic pattern is found: despite vast uncertainty about specific outcomes, there are clear heuristics and strategies that are robustly beneficial.

Indeed, this holds true in virtually any sphere of competition. Politicians cannot predict exactly how an election campaign will unfold, yet they can usually still identify helpful campaign strategies; athletes cannot predict how a given match will develop, yet they can still be reasonably confident about what constitutes good moves and game plans; companies cannot predict market dynamics in detail, yet they can still identify many objectives that would help them beat the competition (e.g. hire the best people and ensure high customer satisfaction).

The point also applies beyond the realm of competition. For instance, when engineers set out to build a big project, there are usually great uncertainties as to how the construction process is going to unfold and what challenges might come up. Yet they are generally still able to identify strategies that can address unforeseen challenges and get the job done. The same goes for just about any project, including cooperative projects between parties with different aims: detailed outcomes are exceedingly difficult to predict, yet it is generally (more) feasible to identify beneficial strategies.

Disanalogy in scope?

One might object that the examples above all involve rather narrow aims, and these aims differ greatly from impartial aims that relate to the interests of all sentient beings. This is a fair point, yet I do not think it undermines these analogies or the core point that they support.

Granted, when we move from narrower to broader aims and endeavors, our uncertainty about the relevant outcomes will tend to increase — e.g. when our aims involve far more beings and far greater spans of time. And when the outcome space and its associated uncertainty increase, we should also expect our strategic uncertainty to become greater. Yet it plausibly still holds true that we can identify at least some reasonably robust strategies, despite the increase in uncertainty that is associated with impartial aims. At a minimum, it seems plausible that our strategic uncertainty is still smaller than our outcome uncertainty.

After all, if such a pattern of lower strategic uncertainty holds true of a wide range of endeavors on a smaller scale, it seems reasonable to expect that it will apply on larger scales too. Besides, it appears that at least some of the examples mentioned in the previous section would still stand even if we greatly increased their scale. For example, in the case of many video games, it seems that we could increase the scale of the game by an arbitrary amount without meaningfully changing the most promising strategies — e.g. accumulate resources, gain more insights, strengthen your position. And similar strategies are plausibly quite robust relative to many goals in the real world as well, on virtually any scale.

Three robust strategies for reducing suffering

If we grant that we can identify some strategies that are robustly beneficial from an impartial perspective, this naturally raises the question as to what these strategies might be. The following are three examples of strategies for reducing suffering that seem especially robust and promising to me. (This is by no means an exhaustive list.)

  • Movement and capacity building: Expand the movement of people who strive to reduce suffering, and build a healthy and sustainable culture around this movement. Capacity building also includes efforts to increase the insights and resources available to the movement.
  • Promote concern for suffering: Increase the level of priority that people devote to the prevention of suffering, and increase the amount of resources that society devotes to its alleviation.
  • Promote cooperation: Increase society’s ability and willingness to engage in cooperative dialogues and positive-sum compromises that can help steer us away from bad outcomes.

The golden middle way: Avoiding overconfidence and passivity

To be clear, I do not mean to invite complacency about the risk that some apparently promising strategies could prove harmful. But I think it is worth keeping in mind that, just as there are costs associated with overconfidence, there are also costs associated with being too uncertain and too hesitant to act on the strategies that seem most promising. All in all, I think we have good reasons to pursue strategies such as those listed above, while still keeping in mind that we do face great strategic uncertainty.

What does a future dominated by AI imply?

Among altruists working to reduce risks of bad outcomes due to AI, I sometimes get the impression that there is a rather quick step from the premise “the future will be dominated by AI” to a practical position that roughly holds that “technical AI safety research aimed at reducing risks associated with fast takeoff scenarios is the best way to prevent bad AI outcomes”.

I am not saying that this is the most common view among those who work to prevent bad outcomes due to AI. Nor am I saying that the practical position outlined above is necessarily an unreasonable one. But I think I have seen (something like) this sentiment assumed often enough for it to be worthy of a critique. My aim in this post is to argue that there are many other practical positions that one could reasonably adopt based on that same starting premise.


  1. “A future dominated by AI” can mean many things
    1. “AI” can mean many things
    2. “Dominated by” can mean many things
    3. Combinations of many things
  2. Future AI dominance does not imply fast AI development
  3. Fast AI development does not imply concentrated AI development
  4. “A future dominated by AI” does not mean that either “technical AI safety” or “AI governance” is most promising
  5. Concluding clarification

“A future dominated by AI” can mean many things

“AI” can mean many things

It is worth noting that the premise that “the future will be dominated by AI” covers a wide range of scenarios. After all, it covers scenarios in which advanced machine learning software is in power; scenarios in which brain emulations are in power; as well as scenarios in which humans stay in power while gradually updating their brains with gene technologies, brain implants, nanobots, etc., such that their intelligence would eventually be considered (mostly) artificial intelligence by our standards. And there are surely more categories of AI than just the three broad ones outlined above.

“Dominated by” can mean many things

The words “in power” and “dominated by” can likewise mean many different things. For example, they could mean anything from “mostly in power” and “mostly dominated by” to “absolutely in power” and “absolutely dominated by”. And these respective terms cover a surprisingly wide spectrum.

After all, a government in a democratic society could reasonably be claimed to be “mostly in power” in that society, and a future AI system that is given similar levels of power could likewise be said to be “mostly in power” in the society it governs. By contrast, even the government of North Korea falls considerably short of being “absolutely in power” on a strong definition of that term, which hints at the wide spectrum of meanings covered by the general term “in power”.

Note that the contrast above actually hints at two distinct (though related) dimensions on which different meanings of “in power” can vary. One has to do with the level of power — i.e. whether one has more or less of it — while the other has to do with how the power is exercised, e.g. whether it is democratic or totalitarian in nature.

Thus, “a future society with AI in power” could mean a future in which AI possesses most of the power in a democratically elected government, or it could mean a future in which AI possesses total power with no bounds except the limits of physics.

Combinations of many things

Lastly, we can make a combinatorial extension of the points made above. That is, we should be aware that “a future dominated by AI” could — and is perhaps likely to — combine different kinds of AI. For instance, one could imagine futures that contain significant numbers of AIs from each of the three broad categories of AI mentioned above.

Additionally, these AIs could exercise power in distinct ways and in varying degrees across different parts of the world. For example, some parts of the world might make decisions in ways that resemble modern democratic processes, with power distributed among many actors, while other parts of the world might make decisions in ways that resemble autocratic decision procedures.

Such a diversity of power structures and decision procedures may be especially likely in scenarios that involve large-scale space expansion, since different parts of the world would then eventually be causally disconnected, and since a larger volume of AI systems presumably renders greater variation more likely in general.

These points hint at the truly vast space of possible futures covered by a term such as “a future dominated by AI”.

Future AI dominance does not imply fast AI development

Another conceptual point is that “a future dominated by AI” does not imply that technological or social progress toward such a future will happen soon or that it will occur suddenly. Furthermore, I think one could reasonably argue that such an imminent or sudden change is quite unlikely (though it obviously becomes more likely the broader our conception of “a future dominated by AI” is).

An elaborate justification for my low credence in such sudden change is beyond the scope of this post, though I can at least note that part of the reason for my skepticism is that I think trends and projections in both computer hardware and economic growth speak against such rapid future change. (For more reasons to be skeptical, see Reflections on Intelligence and “A Contra AI FOOM Reading List”.)

A future dominated by AI could emerge through a very gradual process that occurs over many decades or even hundreds of years (conditional on it ever happening). And AI scenarios involving such gradual development could well be both highly likely and highly consequential.

An objection against focusing on such slow-growth scenarios might be that scenarios involving rapid change have higher stakes, and hence they are more worth prioritizing. But it is not clear to me why this should be the case. As I have noted elsewhere, a so-called value lock-in could also happen in a slow-growth scenario, and the probability of success — and of avoiding accidental harm — may well be higher in slow-growth scenarios (cf. “Which World Gets Saved”).

The upshot could thus be the very opposite, namely that it is ultimately more promising to focus on scenarios with relatively steady growth in AI capabilities and power. (I am not claiming that this focus is in fact more promising; my point is simply that it is not obvious and that there are good reasons to question a strong focus on fast-growth scenarios.)

Fast AI development does not imply concentrated AI development

Likewise, even if we grant that the pace of AI development will increase rapidly, it does not follow that this growth will be concentrated in a single (or a few) AI system(s), as opposed to being widely distributed, akin to an entire economy of machines that grow fast together. This issue of centralized versus distributed growth was in fact the main point of contention in the Hanson-Yudkowsky FOOM debate; and I agree with Hanson that distributed growth is considerably more likely.

Similar to the argument outlined in the previous section, one could argue that there is a wager to focus on scenarios that entail highly concentrated growth over those that involve highly distributed growth, even if the latter may be more likely. Perhaps the main argument in favor of this view is that it seems that our impact can be much greater if we manage to influence a single system that will eventually gain power compared to if our influence is dispersed across countless systems.

Yet I think there are good reasons to doubt that argument. One reason is that the strategy of influencing such a single AI system may require us to identify that system in advance, which might be a difficult bet that we could easily get wrong. In other words, our expected influence may be greatly reduced by the risk that we are wrong about which systems are most likely to gain power. Moreover, there might be similar and ultimately more promising levers for “concentrated influence” in scenarios that involve more distributed growth and power. Such levers may include formal institutions and societal values, both of which could exert a significant influence on the decisions of a large number of agents simultaneously — by affecting the norms, laws, and social equilibria under which they interact.

“A future dominated by AI” does not mean that either “technical AI safety” or “AI governance” is most promising

Another impression I have is that we sometimes tacitly assume that work on “avoiding bad AI outcomes” will fall into either the category of “technical AI safety” or that of “AI governance”, or at least that it will mostly fall within these two categories. But I do not think that this is the case, partly for the reasons alluded to above.

In particular, it seems to me that we sometimes assume that the aim of influencing “AI outcomes” is necessarily best pursued in ways that pertain quite directly to AI today. Yet why should we assume this to be the case? After all, it seems that there are many plausible alternatives.

For example, one could think that it is generally better to pursue broad investments so as to build flexible resources that make us better able to tackle these problems down the line — e.g. investments toward general movement building and toward increasing the amount of money that we will be able to spend later, when we might be better informed and have better opportunities to pursue direct work.

A complementary option is to focus on the broader contextual factors hinted at in the previous section. That is, rather than focusing primarily on the design of the AI systems themselves, or on the laws that directly govern their development, one may focus on influencing the wider context in which they will be developed and deployed — e.g. general values, institutions, diplomatic relations, collective knowledge and wisdom, etc. After all, the broader context in which AI systems will be developed and put into action could well prove critical to the outcomes that future AI systems will eventually create.

Note that I am by no means saying that work on technical AI safety or AI governance is not worth pursuing. My point is merely that these other strategies focused on building flexible resources and influencing broader contextual factors should not be overlooked as ways to influence “a future dominated by AI”. Indeed, I believe that these strategies are among the most promising ways in which we can have such a beneficial influence at this point.

Concluding clarification

On a final note, I should clarify that the main conceptual points I have been trying to make in this post likely do not contradict the explicitly endorsed views of anyone who works to reduce risks from AI. The objects of my concern are more (what I perceive to be) certain implicit models and commonly employed terminologies that I worry may distort how we think and talk about these issues.

Specifically, it seems to me that there might be a sort of collective availability heuristic at work, through which we continually boost the salience of a particular AI narrative — or a certain class of AI scenarios — along with a certain terminology that has come to be associated with that narrative (e.g. ‘AI takeoff’, ‘transformative AI’, etc). Yet if we change our assumptions a bit, or replace the most salient narrative with another plausible one, we might find that this terminology does not necessarily make a lot of sense anymore. We might find that our typical ways of thinking about AI outcomes may be resting on a lot of implicit assumptions that are more questionable and more narrow than we tend to realize.

Reasons to include insects in animal advocacy

I have seen some people claim that animal activists should primarily be concerned with certain groups of numerous vertebrates, such as chickens and fish, whereas we should not be concerned much, if at all, with insects and other small invertebrates. (See e.g. here.) I think there are indeed good arguments in favor of emphasizing chickens and fish in animal advocacy, yet I think those same arguments tend to support a strong emphasis on helping insects as well. My aim in this post is to argue that we have compelling reasons to include insects and other small invertebrates in animal advocacy.


  1. A simplistic sequence argument: Smaller beings in increasingly large numbers
    1. The sequence
    2. Why stop at chickens or fish?
  2. Invertebrate vs. vertebrate nervous systems
    1. Phylogenetic distance
    2. Behavioral and neurological evidence
    3. Nematodes and extended sequences
  3. Objection based on appalling treatment
  4. Potential biases
    1. Inconvenience bias
    2. Smallness bias
    3. Disgust and fear reflexes
    4. Momentum/status quo bias
  5. Other reasons to focus more on small invertebrates
    1. Neglectedness
    2. Opening people’s eyes to the extent of suffering and harmful decisions
    3. Risks of spreading invertebrates to space: Beings at uniquely high risk of suffering due to human space expansion
    4. Qualifications and counter-considerations
  6. My own view on strategy in brief
  7. Final clarification: Numbers-based arguments need not assume that large amounts of mild suffering can be worse than extreme suffering
  8. Acknowledgments

A simplistic sequence argument: Smaller beings in increasingly large numbers

As a preliminary motivation for the discussion, it may be helpful to consider the sequence below.

I should first of all clarify what I am not claiming in light of the following sequence. I am not making any claims about the moral relevance of the neuron counts of individual beings or groups of beings (that is a complicated issue that defies simple answers). Nor am I claiming that we should focus mostly on helping beings such as land arthropods and nematodes. The claim I want to advance is a much weaker one, namely that, in light of the sequence below, it is hardly obvious that we should focus mostly on helping chickens or fish.

The sequence

At any given time, there are roughly:

  • 780 million farmed pigs, with an estimated average neuron count of 2.2 billion. Total neuron count: ~1.7 * 10^18.
  • 33 billion farmed chickens, with an estimated average neuron count of 200 million. Total neuron count: ~6.6 * 10^18.
  • 10^15 fish (the vast majority of whom are wild fish), with an estimated average neuron count of 1 million (this number lies between the estimated neuron counts of a larval zebrafish and an adult zebrafish; note that there is great uncertainty in all these estimates). Total neuron count: ~10^21. It is estimated that humanity kills more than a trillion fish a year, and if we assume that these fish likewise have an average neuron count of around 1 million, the total neuron count of these beings is ~10^18.
  • 10^19 land arthropods, with an estimated average neuron count of 15,000 (some insects have brains with more than a million neurons, but most arthropods appear to have considerably fewer). Total neuron count: ~1.5 * 10^23. If humanity kills roughly the same proportion of land arthropods as the proportion of fish that we kill (e.g. through insecticides and insect farming), then the total neuron count of the land arthropods we kill is ~10^20.
  • 10^21 nematodes, with an estimated average neuron count of 300 neurons. Total neuron count: ~3 * 10^23.
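
The totals above are simply each population estimate multiplied by the corresponding average neuron count. As a quick sanity check, the figures can be reproduced with a few lines of code (using only the population and neuron-count estimates from the sequence above):

```python
# Population estimates and average neuron counts from the sequence above.
groups = {
    "farmed pigs":     (7.8e8,  2.2e9),
    "farmed chickens": (3.3e10, 2.0e8),
    "fish":            (1e15,   1.0e6),
    "land arthropods": (1e19,   1.5e4),
    "nematodes":       (1e21,   3.0e2),
}

for name, (population, avg_neurons) in groups.items():
    total = population * avg_neurons  # total neuron count for the group
    print(f"{name:>15}: ~{total:.1e} neurons in total")
```

Each step down the sequence trades a lower per-individual neuron count for a larger population, and the group totals nonetheless grow, which is the crude numerical pattern that the sequence argument rests on.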

Why stop at chickens or fish?

The main argument that supports a strong emphasis on chickens or fish is presumably their large numbers (as well as their poor treatment, which I discuss below). Yet the numbers-based argument that supports a strong emphasis on chickens and fish could potentially also support a strong emphasis on small invertebrates such as insects. It is thus not clear why we should place a strict boundary right below chickens or fish beyond which this numbers-based argument no longer applies. After all, each step of this sequence entails a similar pattern in terms of crude numbers: we have individual beings who on average have 1-3 orders of magnitude fewer neurons yet who are 1-5 orders of magnitude more numerous than the beings in the previous step.

Invertebrate vs. vertebrate nervous systems

A defense that one could give in favor of placing a relatively strict boundary below fish is that this is where we go from vertebrates to invertebrates, and that we can be significantly less sure that invertebrates can suffer than that vertebrates can.

Perhaps this defense has some force. But how much? Our confidence that the beings in this sequence have the capacity to suffer should arguably decrease at least somewhat in each successive step, yet should the decrease in confidence from fish to insects really be that much bigger than in the previous steps?

Phylogenetic distance

Based on the knowledge that we ourselves can suffer, one might think that a group of beings’ phylogenetic distance from us (i.e. how distantly related they are to us) can provide a tentative prior as to whether those beings can suffer, and regarding how big a jump in confidence we should make for different kinds of beings. Yet phylogenetic distance per se arguably does not support a substantially greater decrease in confidence in the step from fish to insects compared to the previous steps in the sequence above. 

The last common ancestor of humans and insects appears to have lived around 575 million years ago, whereas the last common ancestor of humans and fish lived around 400-485 million years ago (depending on the species of fish; around 420-460 million years for the most numerous fish). By comparison, the last common ancestor of humans and chickens lived around 300 million years ago, while the last common ancestor of humans and pigs lived around 100-125 million years ago.

Thus, when we look at different beings’ phylogenetic distance from humans in these temporal terms, it does not seem that the step between fish and insects (in the sequence above) is much larger than the step between fish and chickens or between chickens and pigs. In each case, the increase in the “distance” appears to be something like 100-200 million years.
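
To make the comparison concrete, we can compute the size of each step from the last common ancestor estimates cited above. The midpoints used where the text gives a range are my own simplification:

```python
# Approximate last common ancestor with humans, in millions of years ago
# (midpoints of the ranges cited in the text).
lca_mya = {
    "pigs": 112.5,     # ~100-125 million years ago
    "chickens": 300.0,
    "fish": 440.0,     # ~420-460 million years ago for the most numerous fish
    "insects": 575.0,
}

sequence = ["pigs", "chickens", "fish", "insects"]
for closer, farther in zip(sequence, sequence[1:]):
    step = lca_mya[farther] - lca_mya[closer]
    print(f"{closer} -> {farther}: +{step:.0f} million years")
```

On these figures, each step adds roughly 135-190 million years of phylogenetic distance, so the fish-to-insects step is not an outlier.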

Behavioral and neurological evidence

Of course, “phylogenetic distance from humans” does not represent strong evidence as to whether a group of beings has the capacity to suffer. After all, humans are more closely related to starfish (~100 neurons) than to octopuses (~500 million neurons), and we have much stronger reasons to think that the latter can suffer, based on behavioral and neurological evidence (cf. the Cambridge Declaration on Consciousness).

Does such behavioral and neurological evidence support a uniquely sharp drop in confidence regarding insect sentience compared to fish sentience? Arguably not, as there is mounting evidence of pain in (small) invertebrates, both behavioral and neuroscientific. Additionally, there are various commonalities in the respective structures and developments of arthropod and vertebrate brains.

In light of this evidence, it seems that a sharp drop in confidence regarding pain in insects (versus pain in fish) requires a justification.

Nematodes and extended sequences

I believe that a stronger decrease in confidence is warranted when comparing arthropods and nematodes, for a variety of reasons: the nematode nervous system consists primarily of a so-called nerve ring, which is quite distinct from the brains of arthropods; and unlike the neurons of arthropods (and other animals), nematode neurons do not have action potentials or orthologs of sodium channels (e.g. Nav1 and Nav2), which appear to play critical roles in pain signaling in other animals.

However, the evidence of pain in nematodes should not be understated either. The probability of pain in nematodes still seems non-negligible, and it arguably justifies substantial concern for (the risk of) nematode pain, even if it does not warrant the same degree of concern and priority as the suffering of chickens, fish, and arthropods.

This discussion also hints at why the sequence argument above need not imply that we should primarily focus on risks of suffering in bacteria or atoms, as one may reasonably hold that the probability of such suffering decreases at a greater rate than the number of purported sufferers increases in such extended sequences.

Objection based on appalling treatment

Another reason one could give in favor of focusing on chickens and fish is that they are treated in particularly appalling ways, e.g. they are often crammed in extremely small spaces and killed in horrific ways. I agree that humanity’s abhorrent treatment of chickens and fish is a strong additional reason to prioritize helping them. Yet it seems that this same argument also favors a focus on insects.

After all, humanity poisons vast numbers of insects with insecticides that may cause intensely painful deaths, and in various insect farming practices — which are sadly growing — insects are commonly boiled, fried, or roasted alive. These practices seem no less cruel and appalling than the ways in which we treat and kill chickens and fish.

Potential biases

There are many reasons to expect that we are biased against giving adequate moral consideration to small invertebrates such as insects (in addition to our general speciesist bias). The four plausible biases listed below are by no means exhaustive.

Inconvenience bias

It is highly inconvenient if insects can feel pain, as it would imply that 1) we should be concerned about far more beings, which greatly complicates our ethical and strategic considerations (compared to if we just focused on vertebrates); 2) the extent of pain and suffering in the world is far greater than we would otherwise have thought, which may be a painful conclusion to accept; and 3) we should take far greater care not to harm insects in our everyday lives. All these inconveniences likely motivate us to conclude that insects are not sentient or that they are not that important in the bigger picture.

Smallness bias

Insects tend to be rather small, even compared to fish, which might make us reluctant to grant them moral consideration. In other words, our intuitions plausibly display a general sizeist bias. As a case in point, ants have more than twice as many neurons as lobsters, and there does not seem to be any clear reason to think that ants are less able to feel pain than lobsters are. Yet ants are obviously much smaller than lobsters, which may explain why people seem to show considerably more concern for lobsters than for ants, and why the share of people who believe that lobsters can feel pain (more than 80 percent in a UK survey) is significantly larger than the share who believe that ants can feel pain (around 56 percent). Of course, this pattern may also be partially explained by the inconvenience bias, since the acceptance of pain in lobsters seems less inconvenient than the acceptance of pain in ants; but size likely still plays a significant role. (See also Vinding, 2015, “A Short Note on Insects”.)

Disgust and fear reflexes

It seems that many people have strong disgust reactions to (at least many) small invertebrates, such as cockroaches, maggots, and spiders. Some people may also feel fear toward these animals, or at least feel that they are a nuisance. Gut reactions of this kind may well influence our moral evaluations of small invertebrates in general, even though they ideally should not.

Momentum/status quo bias

The animal movement has not historically focused on invertebrates, and hence there is little momentum in favor of focusing on their plight. That is, our status quo bias seems to favor a focus on helping the vertebrates whom the animal movement has traditionally focused on. To be sure, status quo bias also works against concern for fish and chickens to some degree (which is worth controlling for as well), yet chickens and fish have still received considerably more focus from the animal movement, and hence status quo bias likely works against concern for insects to an even greater extent.

These biases should give us pause when we are tempted to reflexively dismiss the suffering of small invertebrates.

Other reasons to focus more on small invertebrates

In addition to the large number of arthropods and the evidence for arthropod pain, what other reasons might support a greater focus on small invertebrates?

Neglectedness

An obvious reason is the neglect of these beings. As hinted in the previous section, a focus on helping small invertebrates has little historical momentum, and it is still extremely neglected in the broader animal movement today. This seems to me a fairly strong reason to focus more on invertebrates on the margin, or at the very least to firmly include invertebrates in one’s advocacy.

Opening people’s eyes to the extent of suffering and harmful decisions

Another, perhaps less obvious reason is that concern for smaller beings such as insects might help reduce risks of astronomical suffering. This claim should immediately raise some concerns about suspicious convergence, and as I have argued elsewhere, there is indeed a real risk that expanding the moral circle could increase rather than reduce future suffering. Partly for this reason, it might be better to promote a deeper concern for suffering than to promote wider moral circles (see also Vinding, 2020, ch. 12).

Yet that being said, I also think there is a sense in which wider moral circles can help promote a deeper concern for suffering, and not least give people a more realistic picture of the extent of suffering in the world. Simply put, a moral outlook that includes other vertebrates besides humans will see far more severe suffering and struggle in the world, and a perspective that also includes invertebrates will see even more suffering still. Indeed, not only does such an outlook open one’s eyes to more existing suffering, but it may also open one’s eyes (more fully) to humanity’s capacity to ignore suffering and to make decisions that actively increase it, even today.

Risks of spreading invertebrates to space: Beings at uniquely high risk of suffering due to human space expansion

Another way in which greater concern for invertebrate suffering might reduce risks of astronomical suffering is that small invertebrates seem to be among the animals most likely to be sent into space on a large scale in the future (e.g. because they may survive better in extreme environments). Indeed, some invertebrates — including fruit flies, crickets, and wasps — have already been sent into space, and some tardigrades were even sent to the moon (though the spacecraft crashed and probably none survived). Hence, the risk of spreading animals to space plausibly gives us additional reason to include insects in animal advocacy.

Qualifications and counter-considerations

To be clear, the considerations reviewed above merely push toward increasing the emphasis that we place on small beings such as insects — they are not necessarily decisive reasons to give primary focus to those beings. In particular, these arguments do not make a case for focusing on helping insects over, say, new kinds of beings who might be created in the future in even larger numbers.

It is also worth noting that there may be countervailing reasons not to emphasize insects more. One is that a strong practical focus on insect suffering could risk turning people away from the plight of non-human animals and the horror of suffering, since many people might find insect suffering difficult to relate to. This may be a reason to favor a greater focus on the suffering of larger and (for most people) more relatable animals.

I think the considerations on both sides need to be taken into account, including considerations about future beings who may become even more numerous and more neglected than insects. The upshot, to my mind, is that while focusing primarily on helping insects is probably not the best way to reduce suffering (for most of us), it still seems likely that 1) promoting greater concern for insects, as well as 2) promoting concrete policies that help insects, both constitute a significant part of the optimal portfolio of aims to push for.

My own view on strategy in brief

While questions about which beings seem most worth helping (on the margin) can be highly relevant for many of our decisions, there are also many strategic decisions that do not depend critically on how we answer these questions.

Indeed, my own view on strategies for reducing animal suffering is that we generally do best by pursuing robust and broad strategies that help many beings simultaneously, without focusing too narrowly on any single group of beings. (Though as hinted above, I think there are many situations where it makes sense to focus on interventions that help specific groups of beings.)

This is one of the reasons why I tend to favor an antispeciesist approach to animal advocacy, with a particular emphasis on the importance of suffering. Such an approach is still compatible with highlighting the scale and neglectedness of the suffering of chickens, fish, and insects, as well as the scale and neglectedness of wild-animal suffering. That is, it is a general approach that is thoroughly “scope-informed” about the realities on the ground.

And such a comprehensive approach seems further supported when we consider risks of astronomical suffering (despite the potential drawbacks alluded to earlier). In particular, when trying to help other animals today, it is worth asking how our efforts might be able to help future beings as well, since failing to do so could be a lost opportunity to spare large numbers of beings from suffering. (For elaboration, see “How the animal movement could do even more good” and Vinding, 2022, sec. 10.8-10.9.)

Final clarification: Numbers-based arguments need not assume that large amounts of mild suffering can be worse than extreme suffering

An objection against numbers-based arguments for focusing more on insects is that small pains, or a high probability of small pains, cannot be aggregated to be worse than extreme suffering.

I agree with the view that small pains do not add up to be worse than extreme suffering, yet I think it is mistaken to think that this view undermines any numbers-based argument for emphasizing insects more in animal advocacy. The reason, in short, is that we should also assign some non-negligible probability to the possibility that insects experience extreme suffering (e.g. in light of the evidence for pain in insects cited above). And this probability, combined with the very large number of insects, implies that there are many instances of extreme suffering occurring among insects in expectation. After all, the vast number of insects should lead us to believe that there are many beings who have experiences at the (expected) tail-end of the very worst experiences that insects can have.
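
The structure of this expectational argument can be illustrated with a toy calculation. The probability used below is a placeholder assumption chosen purely for illustration, not an estimate from this post:

```python
# Land arthropods alive at any given time (from the sequence above).
n_arthropods = 1e19

# Placeholder assumption: the probability that a given arthropod is
# experiencing extreme suffering at a given moment. Purely illustrative.
p_extreme = 1e-9

# Expected number of simultaneous instances of extreme suffering.
expected_instances = n_arthropods * p_extreme
print(f"Expected instances: ~{expected_instances:.0e}")
```

Even under a one-in-a-billion probability, the expected number of simultaneous instances is on the order of 10^10. That is the basic point: the probability would have to be judged extraordinarily low before the expected number of instances becomes small.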

As a concluding thought experiment that may challenge comfortable notions regarding the impossibility of intense pain among insects, imagine that you are given the choice between A) living as a chicken inside a tiny battery cage for a full day, or B) being continually reborn as an insect who has the experience of being burned or crushed alive, for a total of a million days (for concreteness, you may imagine that you will be reborn as a butterfly like the one pictured at the top of this post).

If we were really given this choice, I doubt that we would consider it an easy choice in favor of B. I doubt that we would dismiss the seriousness of the worst insect suffering.

Acknowledgments

For their helpful comments, I am grateful to Tobias Baumann, Simon Knutsson, and Winston Oswald-Drummond.

We Should Expect to Be Extremely Biased About Speciesism

The following is a slightly edited excerpt from my book Effective Altruism: How Can We Best Help Others? (2018/2022).

If ever there were a bias that we evolved not to transcend, it is surely our speciesist bias. After all, we evolved in a context in which our survival depended on our killing and eating non-human beings. For most of our evolutionary history, the questioning of such a practice, and the belief that non-human beings should be taken seriously in moral terms, meant a radically decreased probability of survival and reproduction. And this would likely also apply to one’s entire tribe if one were to start spreading such a sentiment, which might help explain the visceral threat that many people seem to feel upon engaging with supporters of this sentiment today. In other words, having significant moral concern for non-human beings was probably not a recipe for survival in our evolutionary past. It was more like a death sentence. For this reason alone, we should expect to be extremely biased on this matter.

And yet this evolutionary tale is far from the full story, as there is also a cultural story to be told, which provides even more reasons to expect our outlook to be intensely biased. For on top of our (likely) speciesist biological hardware, we also have the software of cultural programming running, and it runs the ultimate propaganda campaign against concern for non-human beings. Indeed, if we ran just a remotely similar campaign against humans, we would consider the term “propaganda” a gross understatement. Their body parts are for sale at every supermarket and on the menu for virtually every meal; their skin is ripped off their bodies and used for handbags, shoes, and sports equipment; their names are used pejoratively in every other sentence. Why, indeed, would anyone expect this to leave our moral cognition with respect to these beings biased in the least? Or rather, why should we expect to stand any chance whatsoever of having just a single rational thought about the moral status of these beings? Well, we arguably shouldn’t — not without immense amounts of effort spent rebelling against our crude nature and the atrocious culture that it has spawned.

Another bias that is relevant to consider, on top of the preceding considerations, is that human altruism tends to be motivated by a hidden drive to show others how cool and likable we are, and to increase our own social status. To think that we transcend this motive merely by calling ourselves “effective altruists” would be naive. The problem, then, is that rejecting speciesism and taking the implications of such a rejection seriously is, sadly, seen as quite uncool at this point. If one were to do so, one would become more than a little obnoxious and unlikable in the eyes of most people, and be seen more as an object of ridicule than of admiration, none of which is enticing for social creatures like us. So even if reason unanimously says that we should reject speciesism, we have a thousand and one social reasons that say just the opposite.

There are also psychological studies that demonstrate the existence of strong biases in our views of non-human individuals, such as that we “value individuals of certain species less than others even when beliefs about intelligence and sentience are accounted for”. More than that, we deny the mental capacities of the kinds of beings whom we consider food — a denial that is increased by “expectations regarding the immediate consumption” of such beings.

These forms of bias should give us pause and should encourage serious reflection. The way forward, it seems, is to admit that we are extremely biased and to commit to doing better.

Far From Omelas

The following is a slightly edited excerpt from my book Effective Altruism: How Can We Best Help Others? (2018/2022).

I should like to re-emphasize a tragic fact that is all too easily forgotten by our wishful and optimistic minds, that fact being that the world we inhabit is hopelessly far from Omelas. For our world is unfortunately nothing like a near-paradisiacal city predicated on the misery of a single child. Rather, in our world, there are millions of starving children, and millions of children who die from such starvation or otherwise readily preventable causes, every single year. And none of this misery serves to support a paradise or anything close to it.

We do not live in a world where a starving child confined to a basement is anywhere near the worst forms of suffering that exist. Sadly, our world contains an incomprehensibly larger number of horrors of incomprehensibly greater severity, forms of suffering that make the sufferer wish dearly for a fate as “lucky” as that of the unfortunate child in Omelas. This is, of course, true even if we only consider the human realm, yet it is even more true if we also, as we must, consider the realm of non-human individuals.

Humanity subjects billions of land-living beings to conditions similar to those of the child in Omelas, and we inflict extreme suffering upon a significant fraction of them, by castrating them without anesthetics, boiling them alive, suffocating them, grinding them alive, etc. And our sins toward aquatic animals are greater still, as we kill them in far greater numbers, trillions on some estimates; and most tragically, these deaths probably involve extreme suffering more often than not, as we slowly drag these beings out of the deep, suffocate them, and cut off their heads without stunning or mercy. And yet even this horror story of unfathomable scale still falls hopelessly short of capturing the true extent of suffering in the world, as the suffering created by humanity only represents a fraction of the totality of suffering on the planet. The vast majority of this suffering is found in the wild, where non-human animals suffer and die from starvation, parasitism, and disease, not to mention being eaten alive, which is a source of extreme suffering for countless beings on the planet every single second.

Sadly, our condition is very far from Omelas, which implies that if one would walk away from Omelas, one can hardly defend supporting the spread of our condition, or anything remotely like it, beyond Earth. The extent of suffering in the world is immense and overshadowing, and our future priorities should reflect this reality.
