Distrusting salience: Keeping unseen urgencies in mind

The psychological appeal of salient events and risks can be a major hurdle to optimal altruistic priorities and impact. My aim in this post is to outline a few reasons to approach our intuitive fascination with salient events and risks with a fair bit of skepticism, and to actively focus on that which is important yet unseen, hiding in the shadows of the salient.


Contents

  1. General reasons for caution: Availability bias and related biases
  2. The news: A common driver of salience-related distortions
  3. The narrow urgency delusion
  4. Massive problems that always face us: Ongoing moral disasters and future risks
  5. Salience-driven distortions in efforts to reduce s-risks
  6. Reducing salience-driven distortions

General reasons for caution: Availability bias and related biases

The human mind is subject to various biases that involve an overemphasis on the salient, i.e. that which readily stands out and captures our attention.

In general terms, there is the availability bias, which arises from the so-called availability heuristic: the common tendency to base our beliefs and judgments on information that we can readily recall. For example, we tend to overestimate the frequency of events when examples of these events easily come to mind.

Closely related is what is known as the salience bias, which is the tendency to give undue weight to salient features and events when making decisions. For instance, when deciding whether to buy a given product, the salience bias may lead us to give undue importance to a particularly salient feature of that product — e.g. some fancy packaging — while neglecting less salient yet perhaps more relevant features.

A similar bias is the recency bias: our tendency to give disproportionate weight to recent events in our belief-formation and decision-making. This bias is in some sense predicted by the availability bias, since recent events tend to be more readily available to our memory. Indeed, the availability bias and the recency bias are sometimes considered equivalent, even though it seems more accurate to view the recency bias as a consequence or a subset of the availability bias; after all, readily remembered information does not always pertain to recent events.

Finally, there is the phenomenon of belief digitization, which is the tendency to give undue weight to (what we consider) the single most plausible hypothesis in our inferences and decisions, even when other hypotheses also deserve significant weight. For example, if we are considering hypotheses A, B, and C, and we assign them the probabilities 50 percent, 30 percent, and 20 percent, respectively, belief digitization will push us toward simply accepting A as though it were true. In other words, belief digitization pushes us toward altogether discarding B and C, even though B and C collectively have the same probability as A. (See also related studies on Salience Theory and on the overestimation of salient causes and hypotheses in predictive reasoning.)
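
To see the cost of belief digitization in concrete terms, here is a minimal sketch; the conditional probabilities for the downstream event are made up purely for illustration:

```python
# Hypothetical setup: the three hypotheses from the example above,
# plus made-up probabilities of some downstream event given each one.
p_hypothesis = {"A": 0.5, "B": 0.3, "C": 0.2}
p_event_given = {"A": 0.1, "B": 0.6, "C": 0.8}

# Digitized reasoning: accept the single most plausible hypothesis as true.
top = max(p_hypothesis, key=p_hypothesis.get)
p_digitized = p_event_given[top]

# Non-digitized reasoning: weigh every hypothesis by its probability.
p_marginal = sum(p_hypothesis[h] * p_event_given[h] for h in p_hypothesis)

print(f"digitized estimate: {p_digitized:.2f}")    # 0.10
print(f"marginalized estimate: {p_marginal:.2f}")  # 0.39
```

Discarding B and C here cuts the estimate of the downstream event from 39 percent to 10 percent, even though B and C jointly hold half the probability.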

All of the biases mentioned above can be considered different instances of a broader cluster of availability/salience biases, and they each give us reason to be cautious of the influence that salient information has on our beliefs and our priorities.

The news: A common driver of salience-related distortions

One way in which our attention can become preoccupied with salient (though not necessarily crucial) information is through the news. Much has been written against spending a lot of time on the news, and the reasons against it are probably even stronger for those who are trying to spend their time and resources in ways that help sentient beings most effectively.

For even if we grant that there is substantial value in following the news, it seems plausible that the opportunity costs are generally too high, in terms of what one could instead spend one’s limited time learning about or advocating for. Moreover, there is a real risk that a preoccupation with the news has outright harmful effects overall, such as by gradually pulling one’s focus away from the most important problems and toward less important and less neglected problems. After all, the prevailing news criteria or news values decidedly do not reflect the problems that are most important from an impartial perspective concerned with the suffering of all sentient beings.

I believe the same problem exists in academia: a certain topic becomes fashionable, there are calls for abstracts, and there is a strong pull to write and talk about that topic. And while it may indeed be important to talk and write about fashionable topics for the purpose of getting ahead — or not falling behind — in academia, it seems more doubtful whether such topical talk is at all well-adapted for the purpose of making a difference in the world. In other words, the “news values” of academia are not necessarily much better than the news values of mainstream journalism.

The narrow urgency delusion

A salience-related pitfall that we can easily succumb to when following the news is what we may call the “narrow urgency delusion”. This is when the news covers some specific tragedy and we come to feel, at a visceral level, that this tragedy is the most urgent problem that is currently taking place. Such a perception is, in a very important sense, an illusion.

The reality is that tragedy on an unfathomable scale is always occurring, and the tragedies conveyed by the news are sadly but a tiny fraction of the horrors that are constantly taking place around us. Yet the tragedies that are always occurring, such as children who suffer and die from undernutrition and chickens who are boiled alive, are so common and so underreported that they all too readily fade from our moral perception. To our intuitions, these horrors seemingly register as mere baseline horror — as unsalient abstractions that carry little felt urgency — even though the horrors in question are every bit as urgent as the narrow sliver of salient horrors conveyed in the news (Vinding, 2020, sec. 7.6).

We should thus be clear that the delusion involved in the narrow urgency delusion is not the “urgency” part — there is indeed unspeakable horror and urgency involved in the tragedies reported by the news. The delusion rather lies in the “narrow” part: we find ourselves in a condition that contains extensive horror and torment far beyond the narrow sliver that gets reported, all of which merits compassion and concern.

So it is not that the salient victims are less important than what we intuitively feel, but rather that the countless victims whom we effectively overlook are far more important than what we (do not) feel.

Massive problems that always face us: Ongoing moral disasters and future risks

The following are some of the urgent problems that always face us, yet which are often less salient to us than the individual tragedies that are reported in the news:

These common and ever-present problems are, by definition, not news, which hints at the inherent ineffectiveness of news when it comes to giving us a clear picture of the reality we inhabit and the problems that confront us.

As the final entry on the list above suggests, the problems that face us are not limited to ongoing moral disasters. We also face risks of future atrocities, potentially involving horrors on an unprecedented scale. Such risks will plausibly tend to feel even less salient and less urgent than do the ongoing moral disasters we are facing, even though our influence on these future risks — and future suffering in general — could well be more consequential given the vast scope of the long-term future.

So while salience-driven biases may blind us to ongoing large-scale atrocities, they probably blind us even more to future suffering and risks of future atrocities.

Salience-driven distortions in efforts to reduce s-risks

There are many salience-related hurdles that may prevent us from giving significant priority to the reduction of future suffering. Yet even if we do grant a strong priority to the reduction of future suffering, including s-risks in particular, there are reasons to think that salience-driven distortions still pose a serious challenge in our prioritization efforts.

Our general availability bias gives us some reason to believe that we will overemphasize salient ideas and hypotheses in efforts to reduce future suffering. Yet perhaps more compelling are the studies on how we tend to greatly overestimate salient hypotheses when we engage in predictive and multi-stage reasoning in particular. (Multi-stage reasoning is when we make inferences in successive steps, such that the output of one step provides the input for the next one.)

After all, when we are trying to predict the main sources of future suffering, including specific scenarios in which s-risks materialize, we are very much engaging in predictive and multi-stage reasoning. Therefore, we should arguably expect our reasoning about future causes of suffering to be too narrow by default, with a tendency to give too much weight to a relatively small set of salient risks at the expense of a broader class of less salient (yet still significant) risks that we are prone to dismiss in our multi-stage inferences and predictions.
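
Here is a toy sketch of that compounding effect; the 60 percent figure for the salient branch is purely hypothetical:

```python
# A toy model of single-path, multi-stage reasoning: suppose that at
# each stage of a forecast, the most salient branch carries 60% of the
# probability (a hypothetical figure). Even if that branch is the
# single most likely one at every step, the full scenario it traces
# out soon holds only a small share of the total probability.
p_salient_branch = 0.6

for n_stages in range(1, 6):
    p_salient_path = p_salient_branch ** n_stages
    print(f"{n_stages} stage(s): salient scenario {p_salient_path:.2f}, "
          f"all other scenarios combined {1 - p_salient_path:.2f}")
```

After four stages, the single salient storyline covers only about 13 percent of the probability, while the scenarios we have effectively discarded collectively cover about 87 percent.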

This effect can be further reinforced through other mechanisms. For example, if we have described and explored — or even just imagined — a certain class of risks in greater detail than other risks, then this alone may lead us to regard those more elaborately described risks as more likely than less elaborately explored scenarios. Moreover, if we find ourselves in a group of people who focus disproportionately on a certain class of future scenarios, this may further increase the salience and perceived likelihood of these scenarios, compared to alternative scenarios that may be more salient in other groups and communities.

Reducing salience-driven distortions

The pitfalls mentioned above seem to suggest some concrete ways in which we might reduce salience-driven distortions in efforts to reduce future suffering.

First, they recommend caution against neglecting less salient hypotheses when engaging in predictive and multi-stage reasoning. Specifically, when thinking about future risks, we should be careful not to focus solely on what appears to be the single greatest risk while effectively neglecting all others. After all, even if the risk we regard as the single greatest risk indeed is the single greatest risk, it might still be fairly modest compared to the totality of future risks, and we might still do better by deliberately working to reduce a relatively broad class of risks.

Second, the tendency to judge scenarios to be more likely when we have thought about them in detail would seem to recommend that we avoid exploring future risks in starkly unbalanced ways. For instance, if we have explored one class of risks in elaborate detail while largely neglecting another, it seems worth trying to outline concrete scenarios that exemplify the more neglected class of risks, so as to correct any potentially unjustified disregard of their importance and likelihood.

Third, the possibility that certain ideas can become highly salient in part for sociological reasons may recommend a strategy of exchanging ideas with, and actively seeking critiques from, people who do not fully share the outlook that has come to prevail in one’s own group.

In general, it seems that we are likely to underestimate our empirical uncertainty (Vinding, 2020, sec. 9.1-9.2). The space of possible future outcomes is vast, and any specific risk that we may envision is but a tiny subset of the risks we are facing. Hence, our most salient ideas regarding future risks should ideally be held up against a big question mark that represents the many (currently) unsalient risks that confront us.

Put briefly, we need to cultivate a firm awareness of the limited reliability of salience, and a corresponding awareness of the immense importance of the unsalient. We need to make an active effort to keep unseen urgencies in mind.

The catastrophic rise of insect farming and its implications for future efforts to reduce suffering

On the 17th of August 2021, the EU authorized the use of insects as feed for farmed animals such as chickens and pigs. This was a disastrous decision for sentient beings, as it may greatly increase the number of beings who will suffer in animal agriculture. Sadly, this was just one in a series of disastrous decisions that the EU has made regarding insect farming in the last couple of years. Most recently, in February 2022, they authorized the farming of house crickets for human consumption, after having made similar decisions for the farming of mealworms and migratory locusts in 2021.

Many such catastrophic decisions probably lie ahead, seeing that the EU is currently reviewing applications for the farming of nine additional kinds of insects. This brief post reviews some reflections and potential lessons in light of these harmful legislative decisions.


Contents

  1. Could we have done better?
  2. How can we do better going forward?
    1. Questioning our emotions
    2. Seeing the connection between current institutional developments and s-risks
    3. Proactively searching for other important policy areas and interventions

Could we have done better?

The most relevant aspects to reflect upon in light of these legislative decisions are the strategic implications. Could we have done better? And what could we do better going forward?

I believe the short answer to the first question is a resounding “yes”. I believe that the animal movement could have made far greater efforts to prevent this development (which is not saying much, since the movement at large does not appear to have made much of an effort to prevent this disaster). I am not saying that such efforts would have prevented this development for sure, but I believe that they would have reduced the probability and expected scale of it considerably, to such an extent that it would have been worthwhile to pursue such preventive efforts.

In concrete terms, these efforts could have included:

  • Wealthy donors such as Open Philanthropy making significant donations toward preventing the expansion of insect farming (e.g. investing in research and campaigning work).
  • Animal advocates exploring and developing the many arguments for preventing such an expansion.
  • Animal advocates disseminating these arguments to the broader public, to fellow advocates, and to influential people and groups (e.g. politicians and policy advisors).

It is important to note that efforts of this kind not only had the potential to change a few significant policy decisions; they could potentially have prevented — or at least reduced — a whole cascade of harmful policy decisions. Of course, having such an impact might still be possible today, even if it is a lot more difficult at this later stage, where the momentum for insect farming already appears strong and growing.

As Abraham Rowe put it, “not working on insect farming over the last decade may come to be one of the largest regrets of the EAA [Effective Animal Advocacy] community in the near future.”

How can we do better going forward?

When asking how we can do better, I am particularly interested in what lessons we can draw in our efforts to reduce risks of astronomical future suffering (s-risks). After all, the EU’s recent decisions to allow various kinds of insect farming will not only cause enormous amounts of suffering for insects in the near future, but they arguably also increase s-risks to a non-negligible extent, such as by (somewhat) increasing the probability that insects and other small beings will be harmed on an astronomical scale in the future.

So institutional decisions like these seem relevant for our efforts to reduce s-risks, and our failure to prevent these detrimental decisions can plausibly provide relevant lessons for our priorities and strategies going forward. (An implication of these harmful developments that I will not dive into here is that they give us further reason to be pessimistic about the future.)

The following are some of the lessons that tentatively stand out to me.

Questioning our emotions

I suspect that one of the main reasons that insect farming has been neglected by animal advocates is that it fails to appeal to our emotions. A number of factors plausibly contribute to this low level of emotional appeal. For instance, certain biases may prevent proper moral consideration for insects in general, and scope insensitivity may prevent us from appreciating the large number of insects who will suffer due to insect farming. (I strongly doubt that anyone is exempt from these biases, and I suspect that even people who are aware of them might still have neglected insect farming partly as a consequence of these biases.)

Additionally, we may have a bias to focus too narrowly on the suffering that is currently taking place rather than (also) looking ahead to consider how new harmful practices and sources of suffering might emerge, potentially on far greater scales than what we currently see. Reducing the risk of such novel atrocities occurring on a vast scale might not feel as important as does the reduction of existing forms of suffering. Yet the recent rise of insect farming, and the fact that we likely could have done effective work to prevent it, suggest that such feelings are not reliable.

Seeing the connection between current institutional developments and s-risks

When thinking about s-risks, it can be easy to fall victim to excessively abstract thinking and (what I have called) “long-term nebulousness bias” — i.e. a tendency to overlook concrete data and opportunities relevant to long-term influence. In particular, the abstract nature of s-risks may lead us to tacitly believe that good opportunities to influence policy (with respect to s-risk reduction) can only be found well into the future, and to perhaps even assume that there is not much of significance that we can do to reduce s-risks at the policy level today.

Yet I think the case of insect farming is a counterexample to such beliefs. To be clear, I am not saying that insect farming is necessarily the most promising policy area that we can focus on with respect to s-risk reduction. But it is plausibly a significant one, and one that those trying to reduce s-risks should arguably have paid more attention to in the past. And it still appears to merit greater focus today.

Proactively searching for other important policy areas and interventions

As hinted above, the catastrophic rise of insect farming is in some sense a proof of concept that there are policy decisions in the making that plausibly have a meaningful impact on s-risks. More than that, the case of insect farming might be an example where policy decisions in our time could be fairly pivotal. Whether we see a ban on insect farming versus a rapidly unfolding cascade of policy decisions that swiftly expand it might make a big difference, not least because such a cascade could leave us in a position where there is more institutional, financial, and value-related momentum in favor of insect farming (e.g. if massive industries with lobby influence have already emerged around it, and if most people already consider farmed insects an important part of their diet).

This suggests a critical lesson: those working to reduce s-risks have good reason to search for similar, potentially even more influential policy decisions that are being made today or in the near future. By analogy to how animal advocates likely could have made a significant difference (in expectation) if they had campaigned against the expansion of insect farming over the last decade, we may now do well by looking decades ahead and considering which pivotal policy decisions we might now be in a good position to influence. The need for such a proactive search effort could be the most important takeaway in light of this recent string of disastrous decisions.

Beware underestimating the probability of very bad outcomes: Historical examples against future optimism

It may be tempting to view history through a progressive lens that sees humanity as climbing toward ever greater moral progress and wisdom. As the famous quote popularized by Martin Luther King Jr. goes: “The arc of the moral universe is long, but it bends toward justice.”

Yet while we may hope that this is true, and do our best to increase the probability that it will be, we should also keep in mind that there are reasons to doubt this optimistic narrative. For some, the recent rise of right-wing populism is a salient reason to be less confident about humanity’s supposed path toward ever more compassionate and universal values. But it seems that we find even stronger reasons to be skeptical if we look further back in history. My aim in this post is to present a few historical examples that in my view speak against confident optimism regarding humanity’s future.


Contents

  1. Germany in year 1900
  2. Shantideva around year 700
  3. Lewis Gompertz and J. Howard Moore in the 19th century

Germany in year 1900

In 1900, Germany was far from being a paragon of moral advancement: it was a colonial power, antisemitism was widespread, and bigoted anti-Polish Germanisation policies were in effect. Yet Germany in 1900 was nevertheless far from being like Germany in 1939-1945, when it was the main aggressor in the deadliest war in history and the perpetrator of the largest genocide in history.

In other words, Germany had undergone an extreme case of moral regress along various dimensions by 1942 (the year the so-called Final Solution was formulated and approved by the Nazi leadership) compared to 1900. And this development was not easy to predict in advance. Indeed, for historian of antisemitism Shulamit Volkov, a key question regarding the Holocaust is: “Why was it so hard to see the approaching disaster?”

If one had told the average German citizen in 1900 about the atrocities that their country would perpetrate four decades later, would they have believed it? What probability would they have assigned to the possibility that their country would commit atrocities on such a massive scale? I suspect it would be very low. They might not have seen more reason to expect such moral regress than we do today when we think of our future.

A lesson that we can draw from Germany’s past moral deterioration is, to paraphrase Volkov’s question, that approaching disasters can be hard to see in advance. And this lesson suggests that we should not be too confident as to whether we ourselves might currently be headed toward disasters that are difficult to see in advance.

Shantideva around year 700

Shantideva was a Buddhist monk who lived ca. 685-763. He is best known as the author of A Guide to the Bodhisattva’s Way of Life, which is a remarkable text for its time. Its core message is one of profound compassion for all sentient beings, and Shantideva not only describes such universally compassionate ideals, but also presents stirring encouragements and cogent reasoning in favor of acting on those ideals.

That such a universally compassionate text existed at such an early time is a deeply encouraging fact in one sense. Yet in another sense, it is deeply discouraging. That is, when we think about all the suffering, wars, and atrocities that humanity has caused since Shantideva expounded these ideals — centuries upon centuries of brutal violence and torment imposed upon human and non-human beings — it seems that a certain pessimistic viewpoint gains support.

In particular, it seems that we should be pessimistic about notions along the lines of “compassionate ideals presented in a compelling way will eventually create a benevolent world”. After all, even today, 1300 years later, when we generally pride ourselves on being far more civilized and morally developed than our ancestors, we are still painfully far from observing the most basic of compassionate ideals in relation to other sentient beings.

Of course, one might think that the problem is merely that people have yet to be exposed to compassionate ideals such as those of Shantideva — or those of Mahavira or Mozi, both of whom lived more than a thousand years before Shantideva. But even if we grant that this is the main problem, it still seems that historical cases like these give us some reason to doubt whether most people ever will be exposed to such compassionate ideals, or whether most people would accept such ideals upon being exposed to them, let alone be willing to act on them. The fact that these memes have not caught on to a greater degree than they have, despite existing in such developed forms a long time ago, is some evidence that they are not nearly as virulent as many of us would have hoped.

Speaking for myself at least, I can say that I used to think that people just needed to be exposed to certain compassionate ideals and compassion-based arguments, and then they would change their minds and behaviors due to the sheer compelling nature of these ideals and arguments. But my experience over the years, e.g. with animal advocacy, has made me far more pessimistic about the force of such arguments. And the limited influence of sophisticated expositions of these ideals and arguments made many centuries ago is further evidence for that pessimism (relative to my previous expectations).

Of course, this is not to say that we can necessarily do better than to promote compassion-based ideals and arguments. It is merely to say that the best we can do might be a lot less significant — or be less likely to succeed — than what many of us had initially expected.

Lewis Gompertz and J. Howard Moore in the 19th century

Lewis Gompertz (ca. 1784-1861) and J. Howard Moore (1862-1916) both have a lot in common with Shantideva, as they likewise wrote about compassionate ethics relating to all sentient beings. (And all three of them touched on wild-animal suffering.) Yet Gompertz and Moore, along with other figures in the 19th century, wrote more explicitly about animal rights and moral vegetarianism than did Shantideva. Two observations seem noteworthy with regard to these writings.

One is that Gompertz and Moore both wrote about these topics before the rise of factory farming. That is, even though authors such as Gompertz and Moore made strong arguments against exploiting and killing other animals in the 19th century, humanity still went on to exploit and kill beings on a far greater scale than ever before in the 20th century, indeed on a scale that is still increasing today.

This may be a lesson for those who are working to reduce risks of astronomical suffering at present: even if you make convincing arguments against a moral atrocity that humanity is committing or otherwise heading toward, and even if you make these arguments at an early stage where the atrocity has yet to (fully) develop, this might still not be enough to prevent it from happening on a continuously expanding scale.

The second and closely related observation is that Gompertz and Moore both seem to have focused exclusively on animal exploitation as it existed in their own times. They did not appear to focus on preventing the problem from getting worse, even though one could argue, in hindsight, that such a strategy might have been more helpful overall.

Indeed, even though Moore’s outlook was quite pessimistic, he still seems to have been rather optimistic about the future. For instance, in the preface to his book The Universal Kinship (1906), he wrote: “The time will come when the sentiments of these pages will not be hailed by two or three, and ridiculed or ignored by the rest; they will represent Public Opinion and Law.”

Gompertz appeared similarly optimistic about the future, as he wrote in his Moral Inquiries (1824, p. 48): “though I cannot conceive how any person can shut his eyes to the general state of misery throughout the universe, I still think that it is for a wise purpose; that the evils of life, which could not properly be otherwise, will in the course of time be rectified …” Neither Gompertz nor Moore seems to have predicted that animal exploitation would get far worse in many ways (e.g. the horrible conditions of factory farms) or that it would increase vastly in scale.

This second observation might likewise carry lessons for animal activists and suffering reducers today. If these leading figures of 19th-century animal activism tacitly underestimated the risk that things might get far worse in the future, and as a result paid insufficient attention to such risks, could it be the case that most activists today are similarly underestimating and underprioritizing future risks of things getting even worse still? This question is at least worth pondering.

On a general and concluding note, it seems important to be aware of our tendencies to entertain wishful thinking and to be under the spell of the illusion of control. Just because a group of people have embraced some broadly compassionate values, and in turn identified ongoing atrocities and future risks based on those values, it does not mean that those people will be able to steer humanity’s future such that we avoid these atrocities and risks. The sad reality is that universally compassionate values are far from being in charge.

Radical uncertainty about outcomes need not imply (similarly) radical uncertainty about strategies

Our uncertainty about how the future will unfold is vast, especially on long timescales. In light of this uncertainty, it may be natural to think that our uncertainty about strategies must be equally vast and intractable. My aim in this brief post is to argue that this is not the case.


Contents

  1. Analogies to games, competitions, and projects
  2. Disanalogy in scope?
  3. Three robust strategies for reducing suffering
  4. The golden middle way: Avoiding overconfidence and passivity

Analogies to games, competitions, and projects

Perhaps the most intuitive way to see that vast outcome uncertainty need not imply vast strategic uncertainty is to consider games by analogy. Take chess as an example. It allows a staggering number of possible outcomes on the board, and chess players generally have great uncertainty about how a game of chess will unfold, even as they can make some informed predictions (similar to how we can make informed predictions in the real world).

Yet despite the great outcome uncertainty, there are still many strategies and rules of thumb that are robustly beneficial for increasing one’s chances of winning a game of chess. A trivially obvious one is to not lose pieces without good reason, yet seasoned chess players will know a long list of more advanced strategies and heuristics that tend to be beneficial in many different scenarios. (For an example of such a list, see e.g. here.)

Of course, chess is by no means the only example. Across a wide range of board games and video games, the same basic pattern is found: despite vast uncertainty about specific outcomes, there are clear heuristics and strategies that are robustly beneficial.

Indeed, this holds true in virtually any sphere of competition. Politicians cannot predict exactly how an election campaign will unfold, yet they can usually still identify helpful campaign strategies; athletes cannot predict how a given match will develop, yet they can still be reasonably confident about what constitutes good moves and game plans; companies cannot predict market dynamics in detail, yet they can still identify many objectives that would help them beat the competition (e.g. hire the best people and ensure high customer satisfaction).

The point also applies beyond the realm of competition. For instance, when engineers set out to build a big project, there are usually many uncertainties as to how the construction process is going to unfold and what challenges might come up. Yet they are generally still able to identify strategies that can address unforeseen challenges and get the job done. The same goes for just about any project, including cooperative projects between parties with different aims: detailed outcomes are exceedingly difficult to predict, yet it is generally (more) feasible to identify beneficial strategies.

Disanalogy in scope?

One might object that the examples above all involve rather narrow aims, and those aims differ greatly from impartial aims that relate to the interests of all sentient beings. This is a fair point, yet I do not think it undermines these analogies or the core point that they support.

Granted, when we move from narrower to broader aims and endeavors, our uncertainty about the relevant outcomes will tend to increase — e.g. when our aims involve far more beings and far greater spans of time. And when the outcome space and its associated uncertainty increase, we should also expect our strategic uncertainty to become greater. Yet it plausibly still holds true that we can identify at least some reasonably robust strategies, despite the increase in uncertainty that is associated with impartial aims. At the minimum, it seems plausible that our strategic uncertainty is still smaller than our outcome uncertainty.

After all, if such a pattern of lower strategic uncertainty holds true of a wide range of endeavors on a smaller scale, it seems reasonable to expect that it will apply on larger scales too. Besides, it appears that at least some of the examples mentioned in the previous section would still stand even if we greatly increased their scale. For example, in the case of many video games, it seems that we could increase the scale of the game by an arbitrary amount without meaningfully changing the most promising strategies — e.g. accumulate resources, gain more insights, strengthen your position. And similar strategies are plausibly quite robust relative to many goals in the real world as well, on virtually any scale.

Three robust strategies for reducing suffering

If we grant that we can identify some strategies that are robustly beneficial from an impartial perspective, this naturally raises the question as to what these strategies might be. The following are three examples of strategies for reducing suffering that seem especially robust and promising to me. (This is by no means an exhaustive list.)

  • Movement and capacity building: Expand the movement of people who strive to reduce suffering, and build a healthy and sustainable culture around this movement. Capacity building also includes efforts to increase the insights and resources available to the movement.
  • Promote concern for suffering: Increase the level of priority that people devote to the prevention of suffering, and increase the amount of resources that society devotes to its alleviation.
  • Promote cooperation: Increase society’s ability and willingness to engage in cooperative dialogues and positive-sum compromises that can help steer us away from bad outcomes.

The golden middle way: Avoiding overconfidence and passivity

To be clear, I do not mean to invite complacency about the risk that some apparently promising strategies could prove harmful. But I think it is worth keeping in mind that, just as there are costs associated with overconfidence, there are also costs associated with being too uncertain and too hesitant to act on the strategies that seem most promising. All in all, I think we have good reasons to pursue strategies such as those listed above, while still keeping in mind that we do face great strategic uncertainty.

What does a future dominated by AI imply?

Among altruists working to reduce risks of bad outcomes due to AI, I sometimes get the impression that there is a rather quick step from the premise “the future will be dominated by AI” to a practical position that roughly holds that “technical AI safety research aimed at reducing risks associated with fast takeoff scenarios is the best way to prevent bad AI outcomes”.

I am not saying that this is the most common view among those who work to prevent bad outcomes due to AI. Nor am I saying that the practical position outlined above is necessarily an unreasonable one. But I think I have seen (something like) this sentiment assumed often enough for it to be worthy of a critique. My aim in this post is to argue that there are many other practical positions that one could reasonably adopt based on that same starting premise.


Contents

  1. “A future dominated by AI” can mean many things
    1. “AI” can mean many things
    2. “Dominated by” can mean many things
    3. Combinations of many things
  2. Future AI dominance does not imply fast AI development
  3. Fast AI development does not imply concentrated AI development
  4. “A future dominated by AI” does not mean that either “technical AI safety” or “AI governance” is most promising
  5. Concluding clarification

“A future dominated by AI” can mean many things

“AI” can mean many things

It is worth noting that the premise that “the future will be dominated by AI” covers a wide range of scenarios. After all, it covers scenarios in which advanced machine learning software is in power; scenarios in which brain emulations are in power; as well as scenarios in which humans stay in power while gradually updating their brains with gene technologies, brain implants, nanobots, etc., such that their intelligence would eventually be considered (mostly) artificial intelligence by our standards. And there are surely more categories of AI than just the three broad ones outlined above.

“Dominated by” can mean many things

The words “in power” and “dominated by” can likewise mean many different things. For example, they could mean anything from “mostly in power” and “mostly dominated by” to “absolutely in power” and “absolutely dominated by”. And these respective terms cover a surprisingly wide spectrum.

After all, a government in a democratic society could reasonably be claimed to be “mostly in power” in that society, and a future AI system that is given similar levels of power could likewise be said to be “mostly in power” in the society it governs. By contrast, even the government of North Korea falls considerably short of being “absolutely in power” on a strong definition of that term, which hints at the wide spectrum of meanings covered by the general term “in power”.

Note that the contrast above actually hints at two distinct (though related) dimensions on which different meanings of “in power” can vary. One has to do with the level of power — i.e. whether one has more or less of it — while the other has to do with how the power is exercised, e.g. whether it is democratic or totalitarian in nature.

Thus, “a future society with AI in power” could mean a future in which AI possesses most of the power in a democratically elected government, or it could mean a future in which AI possesses total power with no bounds except the limits of physics.

Combinations of many things

Lastly, we can make a combinatorial extension of the points made above. That is, we should be aware that “a future dominated by AI” could — and is perhaps likely to — combine different kinds of AI. For instance, one could imagine futures that contain significant numbers of AIs from each of the three broad categories of AI mentioned above.

Additionally, these AIs could exercise power in distinct ways and in varying degrees across different parts of the world. For example, some parts of the world might make decisions in ways that resemble modern democratic processes, with power distributed among many actors, while other parts of the world might make decisions in ways that resemble autocratic decision procedures.

Such a diversity of power structures and decision procedures may be especially likely in scenarios that involve large-scale space expansion, since different parts of the world would then eventually be causally disconnected, and since a larger volume of AI systems presumably renders greater variation more likely in general.

These points hint at the truly vast space of possible futures covered by a term such as “a future dominated by AI”.

Future AI dominance does not imply fast AI development

Another conceptual point is that “a future dominated by AI” does not imply that technological or social progress toward such a future will happen soon or that it will occur suddenly. Furthermore, I think one could reasonably argue that such an imminent or sudden change is quite unlikely (though it obviously becomes more likely the broader our conception of “a future dominated by AI” is).

An elaborate justification for my low credence in such sudden change is beyond the scope of this post, though I can at least note that part of the reason for my skepticism is that I think trends and projections in both computer hardware and economic growth speak against such rapid future change. (For more reasons to be skeptical, see Reflections on Intelligence and “A Contra AI FOOM Reading List”.)

A future dominated by AI could emerge through a very gradual process that occurs over many decades or even hundreds of years (conditional on it ever happening). And AI scenarios involving such gradual development could well be both highly likely and highly consequential.

An objection against focusing on such slow-growth scenarios might be that scenarios involving rapid change have higher stakes, and hence they are more worth prioritizing. But it is not clear to me why this should be the case. As I have noted elsewhere, a so-called value lock-in could also happen in a slow-growth scenario, and the probability of success — and of avoiding accidental harm — may well be higher in slow-growth scenarios (cf. “Which World Gets Saved”).

The upshot could thus be the very opposite, namely that it is ultimately more promising to focus on scenarios with relatively steady growth in AI capabilities and power. (I am not claiming that this focus is in fact more promising; my point is simply that it is not obvious and that there are good reasons to question a strong focus on fast-growth scenarios.)

Fast AI development does not imply concentrated AI development

Likewise, even if we grant that the pace of AI development will increase rapidly, it does not follow that this growth will be concentrated in a single (or a few) AI system(s), as opposed to being widely distributed, akin to an entire economy of machines that grow fast together. This issue of centralized versus distributed growth was in fact the main point of contention in the Hanson-Yudkowsky FOOM debate; and I agree with Hanson that distributed growth is considerably more likely.

Similar to the argument outlined in the previous section, one could argue that there is a wager to focus on scenarios that entail highly concentrated growth over those that involve highly distributed growth, even if the latter may be more likely. Perhaps the main argument in favor of this view is that it seems that our impact can be much greater if we manage to influence a single system that will eventually gain power compared to if our influence is dispersed across countless systems.

Yet I think there are good reasons to doubt that argument. One reason is that the strategy of influencing such a single AI system may require us to identify that system in advance, which might be a difficult bet that we could easily get wrong. In other words, our expected influence may be greatly reduced by the risk that we are wrong about which systems are most likely to gain power. Moreover, there might be similar and ultimately more promising levers for “concentrated influence” in scenarios that involve more distributed growth and power. Such levers may include formal institutions and societal values, both of which could exert a significant influence on the decisions of a large number of agents simultaneously — by affecting the norms, laws, and social equilibria under which they interact.

“A future dominated by AI” does not mean that either “technical AI safety” or “AI governance” is most promising

Another impression I have is that we sometimes tacitly assume that work on “avoiding bad AI outcomes” will fall either under “technical AI safety” or under “AI governance”, or at least that it will mostly fall within these two categories. But I do not think that this is the case, partly for the reasons alluded to above.

In particular, it seems to me that we sometimes assume that the aim of influencing “AI outcomes” is necessarily best pursued in ways that pertain quite directly to AI today. Yet why should we assume this to be the case? After all, it seems that there are many plausible alternatives.

For example, one could think that it is generally better to pursue broad investments so as to build flexible resources that make us better able to tackle these problems down the line — e.g. investments toward general movement building and toward increasing the amount of money that we will be able to spend later, when we might be better informed and have better opportunities to pursue direct work.

A complementary option is to focus on the broader contextual factors hinted at in the previous section. That is, rather than focusing primarily on the design of the AI systems themselves, or on the laws that directly govern their development, one may focus on influencing the wider context in which they will be developed and deployed — e.g. general values, institutions, diplomatic relations, collective knowledge and wisdom, etc. After all, the broader context in which AI systems will be developed and put into action could well prove critical to the outcomes that future AI systems will eventually create.

Note that I am by no means saying that work on technical AI safety or AI governance is not worth pursuing. My point is merely that these other strategies focused on building flexible resources and influencing broader contextual factors should not be overlooked as ways to influence “a future dominated by AI”. Indeed, I believe that these strategies are among the most promising ways to exert a beneficial influence at this point.

Concluding clarification

On a final note, I should clarify that the main conceptual points I have been trying to make in this post likely do not contradict the explicitly endorsed views of anyone who works to reduce risks from AI. The objects of my concern are more (what I perceive to be) certain implicit models and commonly employed terminologies that I worry may distort how we think and talk about these issues.

Specifically, it seems to me that there might be a sort of collective availability heuristic at work, through which we continually boost the salience of a particular AI narrative — or a certain class of AI scenarios — along with a certain terminology that has come to be associated with that narrative (e.g. ‘AI takeoff’, ‘transformative AI’, etc.). Yet if we change our assumptions a bit, or replace the most salient narrative with another plausible one, we might find that this terminology does not necessarily make much sense anymore. We might find that our typical ways of thinking about AI outcomes rest on implicit assumptions that are more questionable and narrower than we tend to realize.

Reasons to include insects in animal advocacy

I have seen some people claim that animal activists should primarily be concerned with certain groups of numerous vertebrates, such as chickens and fish, whereas we should not be concerned much, if at all, with insects and other small invertebrates. (See e.g. here.) I think there are indeed good arguments in favor of emphasizing chickens and fish in animal advocacy, yet I think those same arguments tend to support a strong emphasis on helping insects as well. My aim in this post is to argue that we have compelling reasons to include insects and other small invertebrates in animal advocacy.


Contents

  1. A simplistic sequence argument: Smaller beings in increasingly large numbers
    1. The sequence
    2. Why stop at chickens or fish?
  2. Invertebrate vs. vertebrate nervous systems
    1. Phylogenetic distance
    2. Behavioral and neurological evidence
    3. Nematodes and extended sequences
  3. Objection based on appalling treatment
  4. Potential biases
    1. Inconvenience bias
    2. Smallness bias
    3. Disgust and fear reflexes
    4. Momentum/status quo bias
  5. Other reasons to focus more on small invertebrates
    1. Neglectedness
    2. Opening people’s eyes to the extent of suffering and harmful decisions
    3. Risks of spreading invertebrates to space: Beings at uniquely high risk of suffering due to human space expansion
    4. Qualifications and counter-considerations
  6. My own view on strategy in brief
  7. Final clarification: Numbers-based arguments need not assume that large amounts of mild suffering can be worse than extreme suffering
  8. Acknowledgments

A simplistic sequence argument: Smaller beings in increasingly large numbers

As a preliminary motivation for the discussion, it may be helpful to consider the sequence below.

I should first of all clarify what I am not claiming in light of the following sequence. I am not making any claims about the moral relevance of the neuron counts of individual beings or groups of beings (that is a complicated issue that defies simple answers). Nor am I claiming that we should focus mostly on helping beings such as land arthropods and nematodes. The claim I want to advance is a much weaker one, namely that, in light of the sequence below, it is hardly obvious that we should focus mostly on helping chickens or fish.

The sequence

At any given time, there are roughly:

  • 780 million farmed pigs, with an estimated average neuron count of 2.2 billion. Total neuron count: ~1.7 * 10^18.
  • 33 billion farmed chickens, with an estimated average neuron count of 200 million. Total neuron count: ~6.6 * 10^18.
  • 10^15 fish (the vast majority of whom are wild fish), with an estimated average neuron count of 1 million (this number lies between the estimated neuron counts of a larval zebrafish and an adult zebrafish; note that there is great uncertainty in all these estimates). Total neuron count: ~10^21. It is estimated that humanity kills more than a trillion fish a year, and if we assume that they likewise have an average neuron count of around 1 million, the total neuron count of these beings is ~10^18.
  • 10^19 land arthropods, with an estimated average neuron count of 15,000 (some insects have brains with more than a million neurons, but most arthropods appear to have considerably fewer). Total neuron count: ~1.5 * 10^23. If humanity kills roughly the same proportion of land arthropods as the proportion of fish that we kill (e.g. through insecticides and insect farming), then the total neuron count of the land arthropods we kill is ~10^20.
  • 10^21 nematodes, with an estimated average neuron count of 300 neurons. Total neuron count: ~3 * 10^23.
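
The totals above follow from simple multiplication. As a check on the arithmetic, and to make the step-to-step pattern explicit, here is a short sketch using the rough estimates cited in the list:

```python
import math

# Rough estimates from the list above: (number of individuals, neurons each).
groups = {
    "farmed pigs":     (7.8e8,  2.2e9),
    "farmed chickens": (3.3e10, 2e8),
    "fish":            (1e15,   1e6),
    "land arthropods": (1e19,   1.5e4),
    "nematodes":       (1e21,   3e2),
}

# Total neuron counts per group.
for name, (count, neurons_each) in groups.items():
    print(f"{name}: total neurons ~{count * neurons_each:.1e}")

# Orders of magnitude (OOM) between consecutive steps in the sequence.
names = list(groups)
for a, b in zip(names, names[1:]):
    fewer_neurons = math.log10(groups[a][1] / groups[b][1])
    more_numerous = math.log10(groups[b][0] / groups[a][0])
    print(f"{a} -> {b}: ~{fewer_neurons:.1f} OOM fewer neurons each, "
          f"~{more_numerous:.1f} OOM more individuals")
```

Each step exhibits the crude pattern discussed in the next section: one or a few orders of magnitude fewer neurons per individual, set against up to several orders of magnitude more individuals.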

Why stop at chickens or fish?

The main argument that supports a strong emphasis on chickens or fish is presumably their large numbers (as well as their poor treatment, which I discuss below). Yet the numbers-based argument that supports a strong emphasis on chickens and fish could potentially also support a strong emphasis on small invertebrates such as insects. It is thus not clear why we should place a strict boundary right below chickens or fish beyond which this numbers-based argument no longer applies. After all, each step of this sequence entails a similar pattern in terms of crude numbers: we have individual beings who on average have 1-3 orders of magnitude fewer neurons yet who are 1-5 orders of magnitude more numerous than the beings in the previous step.

Invertebrate vs. vertebrate nervous systems

A defense that one could give in favor of placing a relatively strict boundary below fish is that we here go from vertebrates to invertebrates, and we can be significantly less sure that invertebrates suffer compared to vertebrates.

Perhaps this defense has some force. But how much? Our confidence that the beings in this sequence have the capacity to suffer should arguably decrease at least somewhat in each successive step, yet should the decrease in confidence from fish to insects really be that much bigger than in the previous steps?

Phylogenetic distance

Based on the knowledge that we ourselves can suffer, one might think that a group of beings’ phylogenetic distance from us (i.e. how distantly related they are to us) can provide a tentative prior as to whether those beings can suffer, and regarding how big a jump in confidence we should make for different kinds of beings. Yet phylogenetic distance per se arguably does not support a substantially greater decrease in confidence in the step from fish to insects compared to the previous steps in the sequence above. 

The last common ancestor of humans and insects appears to have lived around 575 million years ago, whereas the last common ancestor of humans and fish lived around 400-485 million years ago (depending on the species of fish; around 420-460 million years for the most numerous fish). By comparison, the last common ancestor of humans and chickens lived around 300 million years ago, while the last common ancestor of humans and pigs lived around 100-125 million years ago.

Thus, when we look at different beings’ phylogenetic distance from humans in these temporal terms, it does not seem that the step between fish and insects (in the sequence above) is much larger than the step between fish and chickens or between chickens and pigs. In each case, the increase in the “distance” appears to be something like 100-200 million years.
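
To spell out the comparison, here is a small sketch using midpoints of the cited ranges (112 million years for pigs and 440 million years for the most numerous fish; all figures are rough):

```python
# Approximate last-common-ancestor times with humans, in millions of
# years ago; midpoints of the ranges cited above (rough estimates).
divergence_mya = {"pigs": 112, "chickens": 300, "fish": 440, "insects": 575}

for a, b in [("pigs", "chickens"), ("chickens", "fish"), ("fish", "insects")]:
    step = divergence_mya[b] - divergence_mya[a]
    print(f"{a} -> {b}: step of ~{step} million years")
```

On this crude measure, the fish-to-insects step (~135 million years) is no larger than the earlier steps in the sequence.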

Behavioral and neurological evidence

Of course, “phylogenetic distance from humans” does not represent strong evidence as to whether a group of beings has the capacity to suffer. After all, humans are more closely related to starfish (~100 neurons) than to octopuses (~500 million neurons), and we have much stronger reasons to think that the latter can suffer, based on behavioral and neurological evidence (cf. the Cambridge Declaration on Consciousness).

Does such behavioral and neurological evidence support a uniquely sharp drop in confidence regarding insect sentience compared to fish sentience? Arguably not, as there is mounting behavioral and neuroscientific evidence of pain in (small) invertebrates. Additionally, there are various commonalities in the respective structures and developments of arthropod and vertebrate brains.

In light of this evidence, it seems that a sharp drop in confidence regarding pain in insects (versus pain in fish) requires a justification.

Nematodes and extended sequences

I believe that a stronger decrease in confidence is warranted when comparing arthropods and nematodes, for a variety of reasons: the nematode nervous system consists primarily of a so-called nerve ring, which is quite distinct from the brains of arthropods, and unlike the neurons of arthropods (and other animals), nematode neurons do not have action potentials or orthologs of sodium channels (e.g. Nav1 and Nav2), which appear to play critical roles in pain signaling in other animals.

However, the evidence of pain in nematodes should not be understated either. The probability of pain in nematodes still seems non-negligible, and it arguably justifies substantial concern for (the risk of) nematode pain, even if it does not warrant the same overall level of concern and priority as the suffering of chickens, fish, and arthropods.

This discussion also hints at why the sequence argument above need not imply that we should primarily focus on risks of suffering in bacteria or atoms, as one may reasonably hold that the probability of such suffering decreases at a greater rate than the number of the purported sufferers increases in such extended sequences.
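
As a toy illustration (all rates below are purely hypothetical): if we approximate the expected scale of suffering as the number of entities multiplied by the probability that they can suffer, then the expected scale shrinks at each further step whenever the probability falls faster than the numbers grow.

```python
# Purely hypothetical rates: counts grow ~100x per step down the
# extended sequence, while the probability of sentience falls ~10,000x.
count, p_sentience = 1e21, 1e-2  # hypothetical starting point for nematodes

for entity in ["nematodes", "bacteria", "simpler entities still"]:
    print(f"{entity}: expected number of sufferers ~{count * p_sentience:.0e}")
    count *= 1e2         # each step: far more entities...
    p_sentience *= 1e-4  # ...but a far lower probability of sentience
```

Under these made-up rates, each further step in the sequence matters less in expectation, not more.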

Objection based on appalling treatment

Another reason one could give in favor of focusing on chickens and fish is that they are treated in particularly appalling ways, e.g. they are often crammed into extremely small spaces and killed in horrific ways. I agree that humanity’s abhorrent treatment of chickens and fish is a strong additional reason to prioritize helping them. Yet it seems that this same argument also favors a focus on insects.

After all, humanity poisons vast numbers of insects with insecticides that may cause intensely painful deaths, and in various insect farming practices — which are sadly growing — insects are commonly boiled, fried, or roasted alive. These practices seem no less cruel and appalling than the ways in which we treat and kill chickens and fish.

Potential biases

There are many reasons to expect that we are biased against giving adequate moral consideration to small invertebrates such as insects (in addition to our general speciesist bias). The four plausible biases listed below by no means constitute an exhaustive list.

Inconvenience bias

It is highly inconvenient if insects can feel pain, as it would imply that 1) we should be concerned about far more beings, which greatly complicates our ethical and strategic considerations (compared to a focus on vertebrates alone); 2) the extent of pain and suffering in the world is far greater than we would otherwise have thought, which may be a painful conclusion to accept; and 3) we should take far greater care not to harm insects in our everyday lives. All these inconveniences likely motivate us to conclude that insects are not sentient, or that they do not matter much in the bigger picture.

Smallness bias

Insects tend to be rather small, even compared to fish, which might make us reluctant to grant them moral consideration. In other words, our intuitions plausibly display a general sizeist bias. As a case in point, ants have more than twice as many neurons as lobsters, and there does not seem to be any clear reason to think that ants are less able to feel pain than are lobsters. Yet ants are obviously much smaller than lobsters, which may explain why people seem to show considerably more concern for lobsters than for ants, and why the number of people who believe that lobsters can feel pain (more than 80 percent in a UK survey) is significantly larger than the number of people who believe that ants can feel pain (around 56 percent). Of course, this pattern may also be partially explained by the inconvenience bias, since the acceptance of pain in lobsters seems less inconvenient than does the acceptance of pain in ants; but size likely still plays a significant role. (See also Vinding, 2015, “A Short Note on Insects”.)

Disgust and fear reflexes

It seems that many people have strong disgust reactions to (at least many) small invertebrates, such as cockroaches, maggots, and spiders. Some people may also feel fear toward these animals, or at least feel that they are a nuisance. Gut reactions of this kind may well influence our moral evaluations of small invertebrates in general, even though they ideally should not.

Momentum/status quo bias

The animal movement has not historically focused on invertebrates, and hence there is little momentum in favor of focusing on their plight. That is, our status quo bias seems to favor a focus on helping the vertebrates whom the animal movement has traditionally focused on. To be sure, status quo bias also works against concern for fish and chickens to some degree (which is worth controlling for as well), yet chickens and fish have still received considerably more attention from the animal movement, and hence status quo bias likely works against concern for insects to an even stronger extent.

These biases should give us pause when we are tempted to reflexively dismiss the suffering of small invertebrates.

Other reasons to focus more on small invertebrates

In addition to the large number of arthropods and the evidence for arthropod pain, what other reasons might support a greater focus on small invertebrates?

Neglectedness

An obvious reason is the neglect of these beings. As hinted in the previous section, a focus on helping small invertebrates has little historical momentum, and it is still extremely neglected in the broader animal movement today. This seems to me a fairly strong reason to focus more on invertebrates on the margin, or at the very least to firmly include invertebrates in one’s advocacy.

Opening people’s eyes to the extent of suffering and harmful decisions

Another, perhaps less obvious reason is that concern for smaller beings such as insects might help reduce risks of astronomical suffering. This claim should immediately raise some concerns about suspicious convergence, and as I have argued elsewhere, there is indeed a real risk that expanding the moral circle could increase rather than reduce future suffering. Partly for this reason, it might be better to promote a deeper concern for suffering than to promote wider moral circles (see also Vinding, 2020, ch. 12).

That being said, I also think there is a sense in which wider moral circles can help promote a deeper concern for suffering, not least by giving people a more realistic picture of the extent of suffering in the world. Simply put, a moral outlook that includes other vertebrates besides humans will see far more severe suffering and struggle in the world, and a perspective that also includes invertebrates will see even more suffering still. Indeed, not only does such an outlook open one’s eyes to more existing suffering, but it may also open one’s eyes (more fully) to humanity’s capacity to ignore suffering and to make decisions that actively increase it, even today.

Risks of spreading invertebrates to space: Beings at uniquely high risk of suffering due to human space expansion

Another way in which greater concern for invertebrate suffering might reduce risks of astronomical suffering is that small invertebrates seem to be among the animals who are most likely to be sent into space on a large scale in the future (e.g. because they may survive better in extreme environments). Indeed, some invertebrates — including fruit flies, crickets, and wasps — have already been sent into space, and some tardigrades were even sent to the moon (though the spacecraft crashed and probably none survived). Hence, the risk of spreading animals to space plausibly gives us additional reason to include insects in animal advocacy.

Qualifications and counter-considerations

To be clear, the considerations reviewed above merely push toward increasing the emphasis that we place on small beings such as insects — they are not necessarily decisive reasons to give primary focus to those beings. In particular, these arguments do not make a case for focusing on helping insects over, say, new kinds of beings who might be created in the future in even larger numbers.

It is also worth noting that there may be countervailing reasons not to emphasize insects more. One is that such an emphasis could risk turning people away from the plight of non-human animals and the horror of suffering, since many people might find it difficult to relate to insect suffering as the main practical focus. This may be a reason to favor a greater focus on the suffering of larger and (for most people) more relatable animals.

I think the considerations on both sides need to be taken into account, including considerations about future beings who may become even more numerous and more neglected than insects. The upshot, to my mind, is that while focusing primarily on helping insects is probably not the best way to reduce suffering (for most of us), it still seems likely that 1) promoting greater concern for insects and 2) promoting concrete policies that help insects both constitute a significant part of the optimal portfolio of aims to push for.

My own view on strategy in brief

While questions about which beings seem most worth helping (on the margin) can be highly relevant for many of our decisions, there are also many strategic decisions that do not depend critically on how we answer these questions.

Indeed, my own view on strategies for reducing animal suffering is that we generally do best by pursuing robust and broad strategies that help many beings simultaneously, without focusing too narrowly on any single group of beings. (Though as hinted above, I think there are many situations where it makes sense to focus on interventions that help specific groups of beings.)

This is one of the reasons why I tend to favor an antispeciesist approach to animal advocacy, with a particular emphasis on the importance of suffering. Such an approach is still compatible with highlighting the scale and neglectedness of the suffering of chickens, fish, and insects, as well as the scale and neglectedness of wild-animal suffering. That is, it can be a general approach that is thoroughly “scope-informed” about the realities on the ground.

And such a comprehensive approach seems further supported when we consider risks of astronomical suffering (despite the potential drawbacks alluded to earlier). In particular, when trying to help other animals today, it is worth asking how our efforts might be able to help future beings as well, since failing to do so could be a lost opportunity to spare large numbers of beings from suffering. (For elaboration, see “How the animal movement could do even more good” and Vinding, 2022, sec. 10.8-10.9.)

Final clarification: Numbers-based arguments need not assume that large amounts of mild suffering can be worse than extreme suffering

An objection against numbers-based arguments for focusing more on insects is that small pains, or a high probability of small pains, cannot be aggregated to be worse than extreme suffering.

I agree with the view that small pains do not add up to be worse than extreme suffering, yet it is a mistake to think that this view undermines any numbers-based argument for emphasizing insects more in animal advocacy. The reason, in short, is that we should also assign some non-negligible probability to the possibility that insects experience extreme suffering (e.g. in light of the evidence for pain in insects cited above). And this probability, combined with the very large number of insects, implies that, in expectation, there are many instances of extreme suffering occurring among insects. After all, the vast number of insects should lead us to believe that there are many beings who have experiences at the (expected) tail-end of the very worst experiences that insects can have.
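
As a rough illustration of this expected-value point, here is a minimal sketch; the population figure is a commonly cited order of magnitude, while the two probabilities are loudly hypothetical placeholders:

```python
# Expected instances of extreme suffering among insects = N * p1 * p2,
# where the figures below are illustrative assumptions, not claims.
n_insects = 1e19    # commonly cited order of magnitude for insects alive today
p_sentient = 0.1    # hypothetical probability that insects can suffer at all
p_extreme = 1e-6    # hypothetical fraction of insect lives involving suffering
                    # at the extreme tail-end of insect experience

expected_extreme = n_insects * p_sentient * p_extreme
print(f"Expected instances of extreme suffering: {expected_extreme:.0e}")

# ~1e12: even under heavy discounting, the expected number of instances
# of extreme suffering among insects comes out enormous.
```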

As a concluding thought experiment that may challenge comfortable notions regarding the impossibility of intense pain among insects, imagine that you were given the choice between A) living as a chicken inside a tiny battery cage for a full day, or B) being continually born and reborn as an insect who has the experience of being burned or crushed alive, for a total of a million days (for concreteness, you may imagine being reborn as a butterfly like the one pictured at the top of this post).

If we were really given this choice, I doubt that we would consider it an easy choice in favor of B. I doubt that we would dismiss the seriousness of the worst insect suffering.

Acknowledgments

For their helpful comments, I am grateful to Tobias Baumann, Simon Knutsson, and Winston Oswald-Drummond.

Why I don’t prioritize consciousness research

For altruists trying to reduce suffering, there is much to be said in favor of gaining a better understanding of consciousness. Not only may it lead to therapies that can mitigate suffering in the near term, but it may also help us in our large-scale prioritization efforts. For instance, clarifying which beings can feel pain is important for determining which causes and interventions we should be working on to best reduce suffering.

These points notwithstanding, my own view is that advancing consciousness research is not among the best uses of marginal resources for those seeking to reduce suffering. My aim in this post is to briefly explain why I hold this view.


Contents

  1. Reason I: Scientific progress seems less contingent than other important endeavors
  2. Reason II: Consciousness research seems less neglected than other important endeavors
    1. Objection: The best consciousness research is also neglected
  3. Reason III: Prioritizing the fundamental bottleneck — the willingness problem
  4. Reason IV: A better understanding of consciousness might enable deliberate harm
    1. Objection: Consciousness research is the best way to address these problems
    2. Objection: We should be optimistic about solving these problems
  5. Acknowledgments

Reason I: Scientific progress seems less contingent than other important endeavors

Scientific discoveries generally seem quite convergent, so much so that the same discovery is often made independently at roughly the same time (cf. examples of “multiple discovery”). This is not surprising: if we are trying to uncover an underlying truth — as per the standard story of science — we should expect our truth-seeking efforts to eventually converge upon the best explanation, provided that our hypotheses can be tested.

This is not to say that there is no contingency whatsoever in science; there surely is some. After all, the same discovery can be formalized in quite different ways (famous examples include the competing calculus notations of Newton and Leibniz, as well as distinct yet roughly equivalent formalisms of quantum mechanics). But the level of contingency in science still seems considerably lower than the level of contingency found in other domains, such as which values people hold or which political frameworks they embrace.

To be clear, it is not that values and political frameworks are purely contingent either, as there is no doubt some level of convergence in these respects as well. Yet the convergence still seems significantly lower (and the contingency higher). For example, compare two of the most important events in the early 20th century in these respective domains: the formulation of the general theory of relativity (1915) and the communist revolution in Russia (roughly 1917-1922). While the formulation of the theory of general relativity did involve some contingency, particularly in terms of who and when, it seems extremely likely that the same theory would eventually have been formulated anyway (after all, many of Einstein’s other discoveries were made independently, roughly at the same time).

In comparison, the outcome of the Russian Revolution appears to have been far more contingent, and it seems that greater foreign intervention (as well as other factors) could easily have altered the outcome of the Russian Civil War, and thereby changed the course of history quite substantially.

This greater contingency of values and political systems compared to that of scientific progress suggests that we can generally make a greater counterfactual difference by focusing on the former, other things being equal.

Reason II: Consciousness research seems less neglected than other important endeavors

Besides contingency, it seems that there is a strong neglectedness case in favor of prioritizing the promotion of better values and political frameworks over the advancement of consciousness research.

After all, there are already many academic research centers that focus on consciousness research. By contrast, there is not a single academic research center that focuses primarily on the impartial reduction of suffering (e.g. at the level of values and political frameworks). To be sure, there is a lot of academic work that is relevant to the reduction of suffering, yet only a tiny fraction of this work adopts a comprehensive perspective that includes the suffering of all sentient beings across all time; and virtually none of it seeks to clarify optimal priorities relative to that perspective. Such impartial work seems exceedingly rare.

This difference in neglectedness likewise suggests that it is more effective to promote values and political frameworks that aim to reduce the suffering of all sentient beings — as well as to improve our strategic insights into effective suffering reduction — than to push for a better scientific understanding of consciousness.

Objection: The best consciousness research is also neglected

One might object that certain promising approaches to consciousness research (that we could support) are also extremely neglected, even if the larger field of consciousness research is not. Yet granting that this is true, I still think work on values and political frameworks (of the kind alluded to above) will be more neglected overall, considering the greater convergence of science compared to values and politics.

That is, the point regarding scientific convergence suggests that uniquely promising approaches to understanding consciousness are likely to be discovered eventually. Or at least it suggests that these promising approaches will be significantly less neglected than will efforts to promote values and political systems centered on effective suffering reduction for all sentient beings.

Reason III: Prioritizing the fundamental bottleneck — the willingness problem

Perhaps the greatest bottleneck to effective suffering reduction is humanity’s lack of willingness to reduce suffering in the first place. While most people may embrace ideals that give significant weight to the reduction of suffering in theory, the reality is that most of us give it relatively little priority in terms of our revealed preferences and our willingness to pay for the avoidance of suffering (e.g. in our consumption choices).

In particular, there are various reasons to think that our (un)willingness to reduce suffering is a bigger bottleneck than is our (lack of) understanding of consciousness. For example, if we look at what are arguably the two biggest sources of suffering in the world today — factory farming and wild-animal suffering — it seems that the main obstacle to progress on both of these problems is a lack of willingness to reduce suffering, whereas greater knowledge of consciousness does not appear to be a key bottleneck. After all, most people in the US already report that they believe many insects to be sentient, and a majority likewise agree that farmed animals have roughly the same ability to experience pain as humans. The main bottleneck thus appears to be speciesist attitudes and institutions that disregard non-human suffering, not beliefs about animal sentience per se.

In general, it seems to me that the willingness problem is best tackled by direct attempts to address it, such as by promoting greater concern for suffering, by reducing the gap between our noble ideals and our often less than noble behavior, and by advancing institutions that reflect impartial concern for suffering to a greater extent. While a better understanding of consciousness may be helpful with respect to the willingness problem, it still seems unlikely to me that consciousness research is among the very best ways to address it. 

Reason IV: A better understanding of consciousness might enable deliberate harm

A final reason to prioritize other pursuits over consciousness research is that a better understanding of consciousness comes with significant risks. That is, while a better understanding of consciousness would allow benevolent agents to reduce suffering, it may likewise allow malevolent agents to increase suffering.

This risk is yet another reason why it seems safer and more beneficial to focus directly on the willingness problem and the related problem of keeping malevolent agents out of power — problems that we have by no means solved, and that we are not guaranteed to solve in the future. Indeed, given how serious these problems are, and how little control we have over risks of malevolent individuals in power — especially in autocratic states — it is worth being cautious about developing tools and insights that could increase humanity’s ability to cause harm.

Objection: Consciousness research is the best way to address these problems

One might argue that consciousness research is ultimately the best way to address both the willingness problem and the risk of malevolent agents in power, or at least one of those problems. Yet this seems doubtful to me, and it looks like a case of suspicious convergence. Given the vast range of possible interventions we could pursue to address these problems, we should be a priori skeptical of any intervention that we may propose as the best one, particularly when the path to impact is highly indirect.

Objection: We should be optimistic about solving these problems

Another argument in favor of consciousness research might be that we have reason to be optimistic about solving both the willingness problem and the malevolence problem, since the nature of selection pressure is about to change. Thanks to modern technological tools, benevolent agents will soon be able to design the world with greater foresight. We will deliberately choose genes and institutions to ensure that benevolence becomes realized to an ever greater extent, and in effect practically solve both the willingness problem and the malevolence problem.

But this argument seems to overlook two things. First, there is no guarantee that most humans will make actively benevolent choices, even if their choices will not be outright malevolent either. Most people may continue to optimize for things other than impartial benevolence, such as personal status and prestige, and they may continue to show relatively little concern for non-human beings.

Second, and perhaps more worryingly, modern technologies that enable intelligent foresight and deliberation for benevolent agents could be just as empowering for malevolent agents. The arms race between cooperators and exploiters is an ancient one, and I think we have strong reasons to doubt that this arms race will disappear in the next few decades or centuries. On the contrary, I believe we have good grounds to expect this arms race to intensify, which to my mind is all the more reason to focus directly on reducing the risks posed by malevolent agents, and to promote norms and institutions that favor cooperation. And again, I am skeptical that consciousness research is among the best ways to achieve these aims, even if it might be beneficial overall.

Acknowledgments

For their comments, I thank Tobias Baumann, Winston Oswald-Drummond, and Jacob Shwartz-Lucas.
