Notes on the Utility of Anti-Speciesist Advocacy

I recently took part in a panel discussion, alongside Leah Edgerton, Tobias Leenaert, Oscar Horta, and Jens Tuider (moderator), on whether animal advocates should focus on veganism or anti-speciesism (I’ve outlined my own view here). In my opinion, the discussion went well, not least because there was a sense of a shared underlying goal among the panelists, as well as a high level of intellectual openness, humility, and friendliness.

Unfortunately, yet predictably, the limited time available for each person to speak in such a panel discussion meant that I didn’t get to make half of the points I wanted to. And given that I had these unshared points written down already, it seemed worthwhile to publish them here for everyone to read.

Main Points: Scale and Receptivity

Two main points in favor of anti-speciesist advocacy that I did get to make, albeit briefly, have to do with scale and receptivity. In terms of scale, anti-speciesist advocacy is better than vegan advocacy, as well as other forms of advocacy that focus only on beings exploited by humans, in that it pertains to all non-human animals, including those who live in nature.

At an intuitive level, this may seem like a small point in favor of anti-speciesist advocacy. “+1 to anti-speciesist advocacy for being better in terms of scale.” Yet to think in this way is to fail to appreciate the actual numbers. Just as the much greater number of “farm animals” compared to the number of “pets” is a huge rather than small point in favor of focusing on the former rather than the latter in our advocacy, the much greater number of beings that anti-speciesist advocacy pertains to is an extremely significant point in its favor.

This analogy actually understates the disparity in numbers, as there are fewer than a hundred times as many “farm animals” as there are “pets”, while the number of wild animals is about a thousand times greater than the number of “farm animals”. A thousand times is a lot, and yet this is only counting vertebrates; the number is much greater if we include invertebrates in our considerations as well, as we should. In other words, if we include invertebrates, it becomes clear that the analogy to the ratio between “farm animals” and “pets” is actually a strong understatement. Yet our intuitions have a hard time appreciating such big numbers, especially when the beings in question live in nature.
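To make the orders of magnitude concrete, here is a minimal back-of-the-envelope sketch in Python. The absolute counts are illustrative assumptions chosen only to reproduce the ratios cited above (roughly 100 “farm animals” per “pet”, and roughly 1,000 wild vertebrates per “farm animal”); they are not census figures.

```python
# Illustrative population counts (orders of magnitude only, assumed
# so as to match the ratios discussed in the text).
pets = 1e9               # companion animals worldwide (assumption)
farm_animals = 1e11      # roughly 100x the number of pets
wild_vertebrates = 1e14  # roughly 1,000x the number of farmed animals

farm_to_pet_ratio = farm_animals / pets
wild_to_farm_ratio = wild_vertebrates / farm_animals

# The ratio the movement already treats as decisive...
print(farm_to_pet_ratio)   # 100.0
# ...is itself an order of magnitude smaller than the one discussed here.
print(wild_to_farm_ratio)  # 1000.0
```

And this still leaves out invertebrates entirely, which only widens the gap.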

Thus, in terms of scale, the actions of many aspiring effective animal advocates may be more akin to donations to local animal shelters than they would like to think. This is not surprising. We humans are notorious group thinkers, and the animal movement has traditionally focused only on beings exploited by humans. Consequently, we should expect this history to bias us strongly toward that focus (objections such as “we should focus on beings exploited by humans first” may be found answered here and here).

The other main point in favor of anti-speciesist advocacy has to do with people’s receptivity toward such advocacy. In light of the above, one may think “sure, anti-speciesist advocacy is best in terms of scale, but will people be receptive to such advocacy? Isn’t it too abstract?”

This is an empirical question, and more research on it is sorely needed. Yet there are at least tentative reasons for thinking that people are in fact receptive to such advocacy, perhaps even more so than toward most other forms of advocacy. One line of evidence comes from Oscar Horta, who has delivered talks on speciesism and conducted surveys after these talks, which suggested that, surprisingly, “most people who attended these talks accepted the arguments against speciesism.” Horta’s surveys yielded further interesting findings as well, including that a focus on speciesism may be the best way to promote veganism, yet given that I have already reported on some of these findings elsewhere, and linked to his own summary of them, I shall not delve further into them here.

Another line of evidence comes from a study conducted by Vegan Outreach in 2016, in which they tested four different booklets against each other: one focused on the case against speciesism, another was centered on a “reduce your consumption” message, and another on the harms that “farm animals” suffer. They then examined which of the booklets led to the greatest reduction in consumption of “animal products”. The results, in a nutshell, were that all the booklets caused a significant reduction in such consumption among readers, and that the booklet focused on speciesism did the best of all, although the difference was not statistically significant.

In light of this (admittedly limited) data, we have reasons to think that, even if our only goal were to make people reduce their consumption of “animal products”, focusing on the case against speciesism is at least roughly as good as other, more traditional forms of advocacy.

And yet such a narrow focus cannot be defended. As I also argued during the panel discussion, we have an unfortunate tendency in our movement to view “total consumption of animal products” as a good measure of the quality of the (non-human) sentient condition on the planet, or at least of how well we are doing. It is not. It only says something about a tiny fraction of the non-human beings on the planet, and we cannot defend excluding the rest, i.e. wild animals, from our considerations.

In conclusion, when we combine these two considerations — a much greater scope in terms of the number of beings our advocacy pertains to, as well as a level of receptivity toward anti-speciesist advocacy that seems at least as good as that of other forms of advocacy — we seem to have good reason to focus on anti-speciesist advocacy. And if we then factor in the neglectedness of such advocacy compared to the forms of advocacy and tactics we have traditionally been pursuing, including technological innovations such as in vitro meat, which has millions of US dollars in funding, the case becomes stronger still.

Objections Against Anti-Speciesist Advocacy

But What About the Tractability of the Problem of Suffering in Nature?

While it is true that anti-speciesist advocacy seems optimal in terms of scale because it also includes wild-animal suffering, one may object that the tractability of suffering in nature has been left out of the picture in this analysis.

In response, one can say that, given that the number of wild animals is more than a thousand times that of “farmed animals”, it would seem that the tractability of farm-animal suffering would have to be more than a thousand times greater (to the extent we can meaningfully say such a thing) than the tractability of wild-animal suffering in order for it to make sense to focus almost exclusively on the former (as the animal movement currently does). I do not think that is a reasonable view.
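One rough way to see the force of this point is to model expected impact as scale multiplied by per-being tractability. The sketch below is a toy model under that assumption; the 1,000:1 scale figure comes from the text, while the tractability ratios passed in are hypothetical.

```python
# Toy model: expected impact = (number of beings affected) x
# (tractability of helping each being). All figures illustrative.
SCALE_WILD_OVER_FARM = 1000  # wild animals per farmed animal (from the text)

def farm_focus_wins(tractability_farm_over_wild):
    """Return True if a near-exclusive farm-animal focus yields more
    expected impact, given the assumed impact model."""
    return tractability_farm_over_wild > SCALE_WILD_OVER_FARM

# Even a 100x tractability advantage for farm-animal work is not
# enough to offset the 1,000x scale difference:
print(farm_focus_wins(100))   # False
# The tractability advantage would have to exceed 1,000x:
print(farm_focus_wins(1500))  # True
```

The model is crude, of course, but it makes clear just how large a tractability advantage the current near-exclusive focus implicitly presupposes.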

Moreover, the preceding framing makes it appear as though we must choose one over the other. Yet this is a false choice. Anti-speciesist advocacy defends both “farmed” and “wild” animals, and, as seen above, it may be as successful with regard to the former as other forms of advocacy. Again, in light of the notes on receptivity above, one could make a case that we should focus on anti-speciesist advocacy even if we only cared about the wrongs done to beings exploited by humans.

Similarly, even if there were a strong conflict between focusing on “wild” versus “farm” animals, and even if suffering in nature indeed were a thousand times as intractable as suffering caused by direct human exploitation, the much greater neglectedness of wild-animal suffering would still count as a strong reason in favor of doing advocacy that pertains to such suffering, as anti-speciesist advocacy does.

I Don’t Think Wild Animals Have Net Negative Lives

Opposing discrimination against individuals in nature in general, and defending the claim that we should help them to the extent we can in particular, does not rest on the claim that such beings live net negative lives, any more than the claim that we should not discriminate against other human individuals, and help them when we can, rests on the claim that such humans have net negative lives.

(That being said, I have made a theoretical case for wildlife anti-natalism here, in which I argue that merely applying a non-speciesist position on procreative ethics implies that we should, in theory, and to the extent we can keep other things equal, prevent the births of the vast majority of non-human individuals in nature. More than that, I think we do tend to significantly underestimate how bad most lives in nature in fact are.)

Another point I would make in response to this claim is that even on the conservative assumption that only one in ten non-human individuals in nature has a life as bad as that of the average non-human individual cursed to live out their life on a factory farm, the big difference in the number of beings in nature versus on factory farms still implies that there are more than a hundred times as many non-human beings living very bad lives in nature as there are on factory farms. In other words, even given such a relatively small “concentration of suffering” in nature, the greatest opportunity for reducing total suffering still lies there.
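The arithmetic here is simple enough to verify directly. A minimal sketch, using the roughly 1,000:1 population ratio cited earlier and the conservative one-in-ten assumption:

```python
wild_per_farmed = 1000   # wild animals per factory-farmed animal (from the text)
fraction_very_bad = 0.1  # conservative assumption: 1 in 10 wild lives are that bad

# Very bad lives in nature per factory-farmed life:
very_bad_wild_per_farmed = wild_per_farmed * fraction_very_bad
print(very_bad_wild_per_farmed)  # about 100x as many very bad lives in nature
```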

Isn’t Anti-Speciesism too Abstract?

More specifically: don’t we risk turning people off by seeming to claim that, say, a mosquito has the same moral value as an elephant?

I would make a few distinct points in response to this objection. First, to the extent this is a problem, we can say that anti-speciesism does not imply that all beings should be prioritized equally, just as total opposition to discrimination within the human species does not imply that, say, a human fetus has the same moral value as an adult human individual. The specific traits of a being do matter, and anti-speciesism does not demand that we overlook these differences, but rather that we prioritize equal interests equally.

Second, I would argue that, to the extent anti-speciesism promotes more concern for smaller beings compared to other forms of advocacy, this is actually one of its main strengths rather than a weakness, as we generally underestimate the moral value of small beings. One way to see this is to consider the numbers. If we take fish, for instance, it is estimated that there are 10,000 times as many fish on the planet as there are humans, yet fish do not tend to weigh correspondingly strongly on our moral scale, even among animal advocates.

And if we consider invertebrates, our focus seems even more misaligned still, as it is estimated that there are about ten quintillion (10^19) insects on the planet, and yet we fail, for the most part, to take them seriously in moral terms. One might then object that the number of beings is not a good measure of moral value. Rather, one may argue, we should look at the total number of neurons for a better measure. Yet even if we adopt this as a proxy for moral value, the moral weight of the insect realm still appears staggering, as there are, on a rough estimate at least, a hundred times more insect neurons on the planet than there are human neurons.
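The neuron comparison can be checked with rough figures. In the sketch below, the world population and neurons-per-human figures are standard rough estimates, while the average neuron count per insect is a loose assumption on my part (insect nervous systems vary enormously, from a few thousand neurons to several hundred thousand); the point is only that the ratio lands in the ballpark described above.

```python
humans = 8e9                # world population (rough)
neurons_per_human = 8.6e10  # neurons in a human brain (rough estimate)

insects = 1e19              # estimated number of insects on the planet
neurons_per_insect = 1e4    # loose assumed average; varies hugely by species

total_human_neurons = humans * neurons_per_human      # ~7e20
total_insect_neurons = insects * neurons_per_insect   # ~1e23

# On these assumptions, insect neurons outnumber human neurons
# by something on the order of 100x:
print(total_insect_neurons / total_human_neurons)
```

Changing the assumed per-insect average shifts the ratio, but it takes implausibly low figures to bring the insect total below the human total.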

(I am not claiming that “number of neurons” is a perfect proxy for moral value by any means, but merely that no matter which of these simple measures we use, we appear to underestimate small beings a lot; Brian Tomasik’s Is Brain Size Morally Relevant? is quite apropos here, although I should note that I disagree with his view of consciousness.)

Why we underestimate smaller beings is a question worth pondering, I think, and I believe we can readily identify at least three reasons. First, small beings, such as fish and insects, tend to be more numerous, which makes greater moral concern for them inconvenient, and we are generally biased against inconvenient updates. Second, smaller beings are generally very different from us in terms of what their bodies look like, which makes it more difficult to empathize with them, even disregarding the size difference. For instance, feeling empathy for a chimpanzee-sized insect or fish seems more challenging than feeling it for a chimpanzee. Third, the size difference itself seems likely to make us more biased against smaller beings as well. Compare the difficulty of feeling compassion for a normal-sized chimpanzee versus feeling it for an ant-sized chimpanzee. Or for a lobster versus an ant; the latter reportedly has more than twice as many neurons as the former.

Another distinct point I would make in relation to this objection is that the case against speciesism is very similar, in terms of its form, to the case against racism, and most people seem to accept the latter today, implying that there may be much ready potential we can tap into here. The argument against racism does not seem too intellectually advanced for most people, which provides an additional reason to question the intuitive assumption that the case against speciesism necessarily must be too advanced or abstract for most people to follow it (along with the non-peer-reviewed studies cited above that tentatively suggest the same). More than that, the philosophical case against speciesism also happens to be exceptionally strong, much stronger than we animal advocates tend to realize — the literature that argues in favor of speciesism is surprisingly thin and weak — and I think we ignore this strength at our peril. We have a powerful tool at our disposal that we refuse to employ.

Anti-Speciesism Is Often Better than (Naive) Consequentialist Calculations

If one is a wannabe consequentialist rationalist, it is easy to be misguided about where much of our moral wisdom comes from, by imagining that we have gained it via clever deductive consequentialist analyses. Yet for the most part, this is not the case. Our rejection of racism today, for instance, is mostly due to cultural evolution, including lessons from history, that has accumulated gradually; it has not primarily been due to consequentialist arguments (to the extent arguments have played a crucial role, it seems to me that they have rather rested on consistency). As a result, we have now arrived at a moral wisdom that is deeper, I believe, than what a simple chain of consequentialist reasoning could have readily produced prior to this cultural change (after all, how would you make a solid consequentialist case that human slavery is wrong? It is not easy. And if you can, would it apply equally to the property status of non-human individuals? If not, why?).

And I think the same applies to anti-speciesism: it tends to be wiser than naive consequentialist analyses. It provides us with a free download of the full package of the moral progress we have made over the last few centuries with respect to human individuals, ready for us to apply to non-human individuals by simply using the heuristic “what would we do if they were human?” With this package installed, we can quickly gain wise views on many ethical issues pertaining to non-human beings, including veganism and “happy meat” — it provides a clear case for and against them respectively. One could otherwise be forced to spend a long time arguing for these conclusions, if one were to insist on employing directly consequentialist arguments, even though these conclusions arguably are what a complete consequentialist analysis would recommend (I believe Brian Tomasik would mostly disagree, although he would do so for complicated reasons).

New Information: Have We Updated Sufficiently?

Something I think we should be wary of is when we build up our views on a given issue over a long period of time, and then encounter a new piece of information that makes us change our outlook completely, yet without properly updating our views and attitudes.

To be more concrete: I think many animal advocates have spent a lot of time thinking hard about how to best advocate for non-human animals so as to reduce their suffering as much as possible. Unfortunately, what they have been thinking hard about has chiefly concerned what we should do in order to reduce the suffering of non-human beings exploited by humans, and they have then built up their preferred strategy for advocating for non-human individuals based on this outlook. A positive thing that has then happened is that they have become convinced of the importance of wild-animal suffering.

This has changed the outlook of these advocates in some ways, yet it seems to me that their preferred strategy in terms of advocacy has remained suspiciously unchanged, which should give them pause. We have had our minds expanded by this piece of information that changes everything: the vast majority of beings are not found in the realm we have been focusing on for all these years. Yet the ideal form of advocacy somehow remains largely the same as before we came upon this information.

In conclusion, I would encourage all animal advocates to reflect on whether they have properly factored in the importance of wild-animal suffering in their current view of the best advocacy strategies and tactics. As far as I can tell, virtually none of us have.

See also Ten Biases Against Prioritizing Wild-Animal Suffering.

Ontological Possibilities and the Meaningfulness of Ethics

First written: Sep. 2017. Last update: Dec. 2025.

Are there different possible outcomes given the present state of the universe? One might think that much depends on our answer to this question. For example, if there are no alternative possible futures given the present state of the universe, one might think that ethics and efforts to improve the world would cease to make sense.

An Objection Against the Meaningfulness of Ethics

We can define “global ontological possibilities” as alternative possibilities that could result from the same state of the universe as a whole. Since alternative possibilities in a strong sense seem crucial to ethical deliberation, one might assume that global ontological possibilities are necessary for ethics to get off the ground, and indeed for engagement in ethical decision-making and action to make sense. On this assumption, one could argue that ethics does not make sense due to the non-existence of global ontological possibilities.

To be sure, the assumption that ethics requires global ontological possibilities is highly controversial. For example, one may hold that we can have genuine ontological possibilities at a relative level even if there are no global ontological possibilities, and hold that ethics is meaningful given such relative possibilities. Or one could maintain that purely epistemic or ex-ante possibilities are enough for ethics to make sense.

Yet my goal in this essay is not to question the assumption above. Instead, I will argue that even if one thinks global ontological possibilities are required for ethics to make sense, one cannot reasonably reject the meaningfulness of ethics based on the claim that such possibilities do not exist.

Key Premise: Humility Is Warranted

We do not know whether global ontological possibilities exist. Given our limited understanding of the fundamental nature of reality, it seems reasonable to maintain a degree of humility on this question. Indeed, even if we have reasons to believe that possibilities of this kind most likely do not exist, it still seems overconfident to assign more than, say, a 99.9 percent probability to their non-existence.

Note that the exact probability we assign to the potential existence of global ontological possibilities is not important. The point here is simply that, from our epistemic vantage point, there is a non-zero probability that global ontological possibilities exist.

Why the Objection Fails

The probabilistic premise above implies that it is unwarranted to reject ethics based on the supposed non-existence of global ontological possibilities.

To see why, consider the claim that risks of very bad future outcomes are low. Even if this claim were true, it would not follow that such risks can reasonably be dismissed. After all, when the stakes are sufficiently high, it is not reasonable to dismiss low probabilities. And when we are discussing the meaningfulness of ethics, the stakes could in some sense not be greater, since what is at issue is whether there are any stakes at all. Given such total stakes, even extremely low probabilities are worth taking seriously. Therefore, the mere epistemic possibility that global ontological possibilities are real is sufficient for undermining the above-mentioned objection against the meaningfulness of ethics.

Moreover, when considering the conceivable scenarios before us, an asymmetry emerges in support of the same conclusion. If global ontological possibilities are real, and if ethical action roughly amounts to realizing the best of these possibilities — or at least avoiding the worst — we seem to have good reason to try to realize the better over the worse of these possibilities. On the other hand, if such possibilities are not real, trying to create a better world appears to have no downside in terms of which global ontological possibilities end up getting realized. Thus, when considering these two horns, we seem to have a strong reason in favor of trying to create a better world, and no reason against it.
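The asymmetry just described has the structure of a weak-dominance argument, which can be sketched as a toy payoff matrix. The payoff numbers below are placeholders; only their ordering matters for the argument.

```python
# Rows: our choice. Columns: (possibilities are real, possibilities are not).
payoffs = {
    "pursue ethics": (1, 0),  # upside if possibilities are real; no downside otherwise
    "forgo ethics":  (0, 0),  # no upside in either case
}

def weakly_dominates(a, b):
    """a weakly dominates b: at least as good in every case, better in some."""
    pa, pb = payoffs[a], payoffs[b]
    return all(x >= y for x, y in zip(pa, pb)) and any(x > y for x, y in zip(pa, pb))

print(weakly_dominates("pursue ethics", "forgo ethics"))  # True
```

Given these orderings, pursuing ethics weakly dominates forgoing it, regardless of what probability we assign to the existence of global ontological possibilities.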

In sum, even if we grant the controversial premise that ethics requires global ontological possibilities, it does not follow that ethics is meaningless. Given our uncertainty about whether such possibilities exist, and given what is at stake, we have good reason to pursue ethical deliberation and action regardless.

Response to a Conversation on “Intelligence”

I think much confusion is caused by a lack of clarity about the meaning of the word “intelligence”, and not least a lack of clarity about the nature of the thing(s) we refer to by this word. This is especially true when it comes to discussions of artificial intelligence (AI) and the risks it may pose. A recently published conversation between Tobias Baumann (blue text) and Lukas Gloor (orange text) contains a lot of relevant considerations on this issue, along with some discussion of my views on it, which makes me feel compelled to respond.

The statement that gave rise to the conversation was apparently this:

> Intelligence is the only advantage we have over lions.

My first thought is that this is a simplistic claim. I take it that “intelligence” here means cognitive abilities. But cognitive abilities alone — a competent head on a body without legs or arms — will not allow one to escape from lions; they will only enable one to think of and regret all the many useful “non-cognitive” tools one would have liked to have. The sense in which humans have an advantage over other animals, in terms of what has enabled us to take over the world for better or worse, is that we have a unique set of tools: an upright gait, vocal cords, hands with fine motor skills, and a brain that can acquire culture. This has enabled us, over time, to build culture, with which we have been able to develop tools that have given us an advantage over lions, mostly in the sense of not needing to get near them, as that could easily prove fatal, even given our current level of cultural sophistication and “intelligence”.

I could hardly disagree more with the statement that “the reason we humans rule the earth is our big brain”. To the extent we do rule the Earth, there are many reasons, and the brain is just part of the story, and quite a modest one relative to what it gets credit for (which is often all of it). I think Jacob Bronowski’s The Ascent of Man is worth reading for a more nuanced and accurate picture of humanity’s ascent to power than the “it’s all due to the big brain” one.

> There is a pretty big threshold effect here between lions (and chimpanzees) and humans, where with a given threshold of intelligence, you’re also able to reap all the benefits from culture. (There might be an analogous threshold for self-improvement FOOM benefits.)

The question is what “threshold of intelligence” means in this context. Not all humans reap the same benefits from culture: some have traits and abilities that enable them to reap far more benefits than others. And many of these traits have nothing to do with “intelligence” in any usual sense. Good looks, for instance. Or a sexy voice.

And the same holds true for cognitive abilities in particular: it is more nuanced than what measurement along a single axis can capture. For instance, some people are mathematical geniuses, yet socially inept. There are many axes along which we can measure abilities, and what allows us to build culture is all these many abilities put together. Again, it is not, I maintain, a single “special intelligence thing”, although we often talk as though it were.

For this reason, I do not believe such a FOOM threshold along a single axis makes much sense. Rather, we see progress along many axes that, when certain thresholds are crossed, allows us to expand our abilities in new ways. For example, at the cultural level we may see progress beyond a certain threshold in the production of good materials, which then leads to progress in our ability to harvest energy, which then leads to better knowledge and materials, etc. A more complicated story with countless little specialized steps and cogs. As far as I can tell, this is the recurrent story of how progress happens, on every level: from biological cells to human civilization.

> Magnus Vinding seems to think that because humans do all the cool stuff “only because of tools,” innate intelligence differences are not very consequential.

I would like to see a quote that supports this statement. It is accurate to say that I think we do “all the cool stuff only because of tools”, because I think we do everything because of tools. That is, I do not think of that which we call “intelligence” as anything but the product of a lot of tools. I think it’s tools all the way down, if you will. I suppose I could even be considered an “intelligence eliminativist”, in that I think there is just a bunch of hacks; no “special intelligence thing” to be found anywhere. RNA is a tool, which has built another tool, DNA, which, among other things, has built many different brain structures, which are all tools. And so forth. It seems to me that the opposite position with respect to “intelligence” — what may be called “intelligence reification” — is the core basis of many worries about artificial intelligence take-offs.

It is not correct, however, that I think that “innate differences in intelligence [which I assume refers to IQ, not general goal-achieving ability] are not very consequential”. They are clearly consequential in many contexts. Yet IQ is far from being an exhaustive measure of all cognitive abilities (although it sure does say a lot), and cognitive abilities are far from being all that enables us to achieve the wide variety of goals we are able to achieve. It is merely one integral subset among many others.

> This seems wrong to me [MV: also to me], and among other things we can observe that e.g. von Neumann’s accomplishments were so much greater than the accomplishments that would be possible with an average human brain.

I wrote a section on von Neumann in my Reflections on Intelligence, which I will refer readers to. I will just stress, again, that I believe thinking of “accomplishments” and “intelligence” along a single axis is counterproductive. John von Neumann was no doubt a mathematical genius of the highest rank. Yet with respect to the goal of world domination in particular, which is what we seem especially concerned about in this context, putting von Neumann in charge hardly seems a recipe for success, but rather the opposite. As he reportedly said:

> “If you say why not bomb them tomorrow, I say why not today? If you say today at five o’clock, I say why not one o’clock?”

To me, these do not seem to be the words of a brain optimized for taking over the world. If we want to look at such a brain, we should, by all appearances, rather peer into the skull of Putin or Trump (if it is indeed mainly their brain, rather than their looks, or perhaps a combination of many things, that brought them into power).

> One might argue that empirical evidence confirms the existence of a meaningful single measure of intelligence in the human case. I agree with this, but I think it’s a collection of modules that happen to correlate in humans for some reason that I don’t yet understand.

I think a good analogy is a country’s GDP. It’s a single, highly informative measure, yet a nation’s GDP is a function of countless things. This measure predicts a lot, too. Yet it clearly also leaves out a lot of information. More than that, we do not seem to fear that the GDP of a country (or a city, or indeed the whole world) will suddenly explode once it reaches a certain level. But why? (For the record, I think global GDP is a far better measure of a randomly selected human’s ability to achieve a wide variety of goals [of the kind we care about] than said person’s IQ is.)

> The “threshold” between chimps and humans just reflects the fact that all the tools, knowledge, etc. was tailored to humans (or maybe tailored to individuals with superior cognitive ability).

> So there’s a possible world full of lion-tailored tools where the lions are beating our asses all day?

Depending on the meaning of “lion-tailored tool” it seems to me the answer could well be “yes”. In terms of the history of our evolution, for instance, it could well be that a lion tool in the form of, say, powerful armor could have meant that humans were killed by them in high numbers rather than the other way around.

> Further down you acknowledge that the difference is “or maybe tailored to individuals with superior cognitive ability” – but what would it mean for a tool to be tailored to inferior cognitive ability? The whole point of cognitive ability is to be good at making the most out of tool-shaped parts of the environment.

I think the term “inferior cognitive ability” again overlooks that there are many dimensions along which we can measure cognitive abilities. Once again, take the mathematical genius who has bad social skills. How to best make tools — ranging from apps to statements to say to oneself — that improve the capabilities of such an individual seems likely to be different in significant ways from how to best make tools for someone who is, say, socially gifted and mathematically inept.

> Magnus takes the human vs. chimp analogy to mean that intelligence is largely “in the (tool-and-culture-rich) environment”.

I would delete the word “intelligence” and instead say that the ability to achieve goals is a product of a large set of tools, of which, in our case, the human brain is a necessary but, for virtually all of our purposes, insufficient subset.

Also, chimps display superior cognitive abilities to humans in some respects, so saying that humans are more “intelligent” than chimps, period, is, I think, misleading. The same holds true of our usual employment of the word “intelligence” in general, in my view.

> My view implies that quick AI takeover becomes more likely as society advances technologically. Intelligence would not be in the tools, but tools amplify how far you can get by being more intelligent than the competition (this might be mostly semantics, though).

First, it should be noted that “intelligence” here seems to mean “cognitive abilities”, not “the ability to achieve goals”. This distinction must be stressed. Second, as hinted above, I think the dichotomy between “intelligence” (i.e. cognitive ability) on the one hand and “tools” on the other is deeply problematic: I fail to see in what sense cognitive abilities are not tools. (And by “cognitive abilities” I also mean the abilities of computer software.) And I think the causal arrows between the different tools that determine how things unfold are far more mutual than they are according to the story that “intelligence (some subset of cognitive tools) is that which will control all other tools”.

Third, for reasons alluded to above, I think the meaning of “being more intelligent than the competition” stands in need of clarification. It is far from obvious to me what it means. More cognitively able, presumably, but in what ways? What kinds of cognitive abilities are most relevant with respect to the task of taking over the world? And how might they be likely to be created? Relevant questions to clarify, it seems to me.

Some reasons not to think that “quick AI takeover becomes more likely as society advances technologically” include that other agents would be more capable, and thus be closer to notional limits of “capabilities”, the more technologically advanced society is; there would be more technology mastered by others that an AI system would need to learn and master in order to take over; and finally, society may learn more about the limits and risks of technology, including AI, the more technologically advanced it is, and hence know more about what to expect and how to counter it.

This post was originally published at my old blog: http://magnusvinding.blogspot.dk/2017/07/response-to-conversation-on-intelligence.html
