Expanding humanity’s moral circle such that it includes all sentient beings seems among the most urgent and important missions before us. And yet there is a significant risk that such greater moral inclusion might in fact end up increasing future suffering. As Brian Tomasik notes:
One might ask, “Why not just promote broader circles of compassion, without a focus on suffering?” The answer is that more compassion by itself could increase suffering. For example, most people who care about wild animals in a general sense conclude that wildlife habitats should be preserved, in part because these people aren’t focused enough on the suffering that wild animals endure. Likewise, generically caring about future digital sentience might encourage people to create as many happy digital minds as possible, even if this means also increasing the risk of digital suffering due to colonizing space. Placing special emphasis on reducing suffering is crucial for taking the right stance on many of these issues.
Indeed, many classical utilitarians do include non-human animals in their moral circle, yet they still consider it permissible, indeed in some sense morally good, that we bring individuals into existence so that they can live “net positive lives” and we can eat them (I have argued that this view is mistaken, almost regardless of what kind of consequentialist view one assumes). And some even seem to think that most lives on factory farms might plausibly be such “net positive lives”. A wide circle of moral consideration clearly does not guarantee an unwillingness to allow large amounts of suffering to be brought into the world.
More generally, a considerable number of widely endorsed ethical positions favor bringing about larger rather than smaller populations of the beings who belong to our moral circle, at least provided that certain conditions are met in the lives of these beings. And many of these positions place quite loose conditions on such lives, which implies that these views can easily permit, and even demand, the creation of a lot of suffering for the sake of some (supposedly) greater good.
Indeed, even a view that requires an enormous amount of happiness to outweigh a given amount of suffering might still easily permit the creation of large amounts of suffering, as illustrated by the following consideration (quoted from the penultimate chapter of my book on effective altruism):
consider the practical implications of the following two moral principles: 1) we will not allow the creation of a single instance of the worst forms of suffering […] for any amount of happiness, and 2) we will allow one day of such suffering for ten years of the most sublime happiness. What kind of future would we accept with these respective principles? Imagine a future in which we colonize space and maximize the number of sentient beings that the accessible universe can sustain over the entire course of the future, which is probably more than 10^30. Given this number of beings, and assuming that these beings each live a hundred years, principle 2) above would appear to permit a space colonization that all in all creates more than 10^28 years of [the worst forms of suffering], provided that the other states of experience are sublimely happy. This is how extreme the difference can be between principles like 1) and 2); between whether we consider suffering irredeemable or not. And notice that even if we altered the exchange rate by orders of magnitude — say, by requiring 10^15 times more sublime happiness per unit of extreme suffering than we did in principle 2) above — we would still allow an enormous amount of extreme suffering to be created; in the concrete case of requiring 10^15 times more happiness, we would allow more than 10,000 billion years of [the worst forms of suffering].
This highlights the importance of thinking deeply about which trade-offs, if any, we find acceptable with respect to the creation of suffering, including extreme suffering.
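The arithmetic behind the figures in the quoted passage can be checked with a quick sketch. The figures below (10^30 beings, hundred-year lives, 365-day years) come from the quotation itself; the helper function name is mine:

```python
# Verify the figures from the quoted passage: 10^30 beings,
# each living 100 years, with a fixed exchange rate between
# extreme suffering and sublime happiness.
beings = 10**30
years_per_life = 100
total_years = beings * years_per_life  # 10^32 being-years of experience
days_per_year = 365

def suffering_years(happy_years_per_suffering_day):
    """Years of the worst forms of suffering permitted when each day
    of such suffering must be offset by the given number of years of
    sublime happiness."""
    day = 1 / days_per_year  # one day, expressed in years
    # Fraction of all experienced time that is suffering:
    # one day out of every (offset + one day) block of experience.
    fraction = day / (happy_years_per_suffering_day + day)
    return total_years * fraction

# Principle 2): one day of suffering per ten years of happiness.
print(f"{suffering_years(10):.2e}")  # comes to more than 10^28 years

# Requiring 10^15 times more happiness per unit of extreme suffering.
print(f"{suffering_years(10 * 10**15):.2e}")  # still over 10^13 years,
                                              # i.e. > 10,000 billion years
```

Run as written, the first figure is roughly 2.7 × 10^28 years of extreme suffering, and the stricter exchange rate still yields roughly 2.7 × 10^13 years, matching the "more than 10,000 billion years" in the quotation.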
The considerations above concerning popular ethical positions that support larger future populations imply that there is some probability — a seemingly low yet still significant probability — that a narrower moral circle may in fact lead to less future suffering for the morally excluded beings (e.g. by making efforts to bring these beings into existence, on Earth and beyond, less likely).
In spite of this risk, I still consider generic moral circle expansion quite beneficial in expectation. Yet it seems less beneficial, and significantly less robust (with respect to the goal of reducing extreme suffering), than the promotion of suffering-focused values. And it seems less robust and less beneficial still than the twin-track strategy of focusing on both expanding our moral circle and deepening our concern for suffering. Each track seems necessary yet insufficient on its own. If we deepen concern for suffering without broadening the moral circle, our deepened concern risks failing to pertain to the vast majority of sentient beings. On the other hand, if we broaden our moral circle without deepening our concern for suffering, we may end up allowing the beings within our moral circle to endure enormous amounts of suffering.