Consciousness: Orthogonal or Crucial?

The following is an excerpt from my book Reflections on Intelligence (2016/2024).


A question that is often considered open, sometimes even irrelevant, when it comes to “AGIs” and “superintelligences” is whether such entities would be conscious. Here is Nick Bostrom expressing such a sentiment:

By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue or what have you. It also leaves open whether the superintelligence is conscious and has subjective experiences. (Bostrom, 2012, “Definition of ‘superintelligence’”)

Yet the question is hardly an open one in this context. If a system is “much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills”, the question of consciousness is highly relevant. Consciousness is integral to much of what we do and excel at, and thus if an entity is not conscious, it cannot outperform the best humans “in practically every field”, least of all in “general wisdom” and “scientific creativity”. Let us look at these in turn.

General Wisdom

A core aspect of “general wisdom” is to be wise about ethical issues. Yet being wise about ethical issues requires that one can consider and evaluate questions like the following in an informed manner:

  • Is there anything about the experience of suffering that makes its reduction a moral priority?
  • Does anything about the experience of suffering justify the claim that reducing suffering has greater moral priority than increasing happiness (for the already happy)?
  • Is there anything about states of extreme suffering that makes their reduction an overriding moral priority?

It seems that one would have to be conscious in order to explore and answer such questions in an informed way. That is, one would have to know what such experiences are like in order to understand their experiential properties and significance. Knowing what a term like “suffering” refers to — i.e. knowing what actual experiences of suffering are like — is thus crucial for informed ethical reflection.

The same point holds true about other areas of philosophy that bear on wisdom, such as the philosophy of mind: without knowing what it is like to have a conscious mind, one cannot contribute much to the discussion about what it is like to have one and to the exploration of different modes of consciousness. Indeed, an unconscious entity has no genuine understanding about what the issue of consciousness is even about in the first place (Pearce, 2012a; 2012b).

So both in ethics and in the philosophy of mind, an unconscious entity would be less than clueless about many of the deepest questions at hand. If an entity not only fails to surpass humans in these areas, but fails to even have the slightest clue about what we are talking about, it hardly surpasses the best humans in practically every field. After all, questions about the phenomenology of consciousness are also relevant to many other fields, including psychology, epistemology, and ontology.

In short, experiencing and reasoning about consciousness is a key part of “human abilities”, and hence an entity that is unable to do this cannot be claimed to outperform humans in the most important, much less all, human abilities (see also Pearce, 2012a; 2012b).

Scientific Creativity

Another ability mentioned above at which an unconscious entity could supposedly outdo humans is scientific creativity. Yet scientific creativity must relate to all fields of knowledge, including the science of the conscious mind itself. The conscious mind is also a part of the natural world, and a most relevant one at that.

Experiencing and accurately reporting what a given state of consciousness is like is essential for the science of mind, yet an unconscious entity obviously cannot do such a thing, as there is no experience it can report from. It cannot display any genuine scientific creativity, or even produce mere observations, in the direct exploration of consciousness.

Compassionate Free Speech

Two loose currents appear to be in opposition in today’s culture. One is animated by a strong insistence on empathy and compassion as core values, the other by a strong insistence on free speech as a core value. These two currents are often portrayed as though they must necessarily be in conflict. I think this is a mistake.

To be sure, the two values described above can be in tension, and neither strictly implies the other. But it is possible to reconcile them in a refined and elegant synthesis. That, I submit, is what we should be aiming for: a synthesis of two vital and mutually reinforcing values.

Definitions and outline

It is crucial to distinguish between 1) social and ethical norms and 2) state-enforced laws. The argument I make here pertains to the first of these. That is, I am arguing that we should aim to observe and promote ethical norms of both compassion and open conversation.

What do I mean by these terms? Compassion is commonly defined as “sympathetic consciousness of others’ distress together with a desire to alleviate it”. I here use the term in a broader sense that also covers related virtues such as understanding, charitable interpretation, and kindness.

By norms of open conversation, or free expression, I mean norms that enable people to express their honest views openly, even when these views are controversial and uncomfortable. These norms do not entail that speech should be wholly unrestricted; after all, virtually everyone agrees that defamation and incitements to commit severe crimes should be illegal, as they commonly are.

My view is that we should roughly think of these two broad values as prima facie duties: we should generally strive to observe norms of compassion and open conversation, except in (rare) cases where other duties or virtues override these norms.

Below is a short defense of these two respective values, highlighting their importance in their own right. This is followed by a case that these values are not only compatible, but indeed strongly complementary. Finally, I explore what I see as some of the causes of our current state of polarization, and suggest five heuristics that might be useful going forward.

Brief defenses

Free speech

There are many strong arguments in favor of free speech. A famous collection of such arguments is On Liberty (1859) by John Stuart Mill, whose case for free speech is primarily based on the harm principle: the only reason power can legitimately be exercised over any individual against their will is to prevent harm to others.

This principle is intuitively compelling, although it leaves it unspecified what exactly counts as a harm to others. That is perhaps the main crux in discussions about free speech, and this alone could provide an argument in favor of free and open expression. For how can we clarify what should count as sufficient harm to others to justify the exercise of power if not through open discussion?

A necessary corrective to biased, fallible minds

Another important argument Mill makes in favor of free speech is based not merely on the rights of the speaker, but in equal part on the rights of the would-be listeners, who are also robbed by the suppression of free expression:

[T]he peculiar evil of silencing the expression of an opinion is, that it is robbing the human race; posterity as well as the existing generation; those who dissent from the opinion, still more than those who hold it. If the opinion is right, they are deprived of the opportunity of exchanging error for truth: if wrong, they lose, what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error.

In essence, Mill argues that, contrary to the annoyance we may instinctively feel, we should in fact be grateful for having our cherished views challenged, not least because it can help clarify and update our views.

Today, Mill’s argument can be further bolstered by a host of well-documented psychological biases. We now know that we are all vulnerable to confirmation bias, the bandwagon effect, groupthink, etc. These biases make it all too easy for us to deceive ourselves into thinking that we already possess the whole truth, although we most certainly do not. Consequently, if we want to hold reasonable beliefs, we should welcome and appreciate those who challenge our views and thereby expose the pitfalls of our groupish minds — pitfalls that we may otherwise be content to embrace.

After all, how can we know that our attempts to protect ourselves from hearing views that we dislike are not essentially unconscious attempts to protect our own confirmation bias? Free and open conversation is our best debiasing tool.

Strategic reasons

An altogether different argument in favor of honoring principles of free speech is that a failure to do so is strategically unwise. Indeed, as free-speech defender Noam Chomsky argues, there are several reasons to consider the suppression of free speech a tactical error if we are trying to create a better society.

First, reinforcing a norm of suppressing speech can have the unintended consequence of leading all sides, and perhaps eventually governments, to consider it increasingly legitimate to suppress certain forms of speech. “If they can suppress speech, why shouldn’t we?” The effects of such a regression would be worst for those who lack power.

Second, seeking to suppress speech is likely to backfire and to strengthen the other side, by making that side look more appealing than it in fact is — the suppressed becomes alluring — and by making the side that seeks to suppress speech look unreasonable, as though they are unable to muster a defense of their views.

When people try to make us do something, we tend to react negatively and to distance ourselves, even if we agreed with them from the outset (cf. psychological reactance). This is another strong reason against suppressing free expression, and against giving people the impression that they are not allowed to discuss or think certain things. It is human nature to react by asserting one’s freedom in defiance, even if it means voting for a president that one would otherwise have voted against. (See also Cialdini, 1984/2021, ch. 7.)

(Weak norms of free expression are thus a democratic problem in more than one way: they can keep citizens from voting in accordance with their ideal preferences both by leaving them ill-informed and by provoking votes of defiance.)

Steven Pinker has made a related point: if we place certain issues beyond the bounds of acceptable discourse, many people are likely to seek out discussion of these issues from unsavory sources, which can in turn put people on a path toward extreme and uncompassionate views. This parallels one of the main arguments made against the prevailing drug laws of today: such restrictions merely push the whole business into an underground market where people get dangerously contaminated goods.

As Ayishat Akanbi eloquently put it (paraphrased slightly): if we suppress ideas, they will “operate with insidious undertones”, and we in effect “push people into the arms of extremism.”

Compassion

I will let my defense of compassion be briefer still, as I have already made an elaborate defense of it in my book Suffering-Focused Ethics: Defense and Implications.

The short case is this: Suffering, especially the most intense suffering, truly matters. It is truly bad and truly worth preventing. Consequently, a desire to alleviate intense suffering is simply the most sensible response. Only a failure to connect with the reality of suffering can leave us apathetic. That is the simplest and foremost reason why compassion is of paramount importance.

Another reason to be compassionate, including in the broader sense of being kind and understanding, is that such an attitude has great instrumental benefits at the level of our communication and relations: it fosters trust and cooperation, which in turn enables win-win interactions.

However, to say that we should be compassionate is not to say that we should be game-theoretically naive in the sense of kindly allowing others to walk all over us. Compassion is wholly compatible with, and indeed mandates, tit for tat and assertiveness in the face of transgressions.

Lastly, it is worth emphasizing that compassion and empathy are not partisan values. Empathy is a human universal, and compassion has been considered a virtue in all major world traditions, as well as in most political movements, including political conservatism. Indeed, people of all political orientations score high on the harm/care dimension in Jonathan Haidt’s moral foundations framework. It really is a value on which people show uniquely wide agreement, at least on reflection. When they are not on Twitter.

Compassion and free speech: Complementary values

As noted above, the two values I defend here do not strictly imply each other, at least in some purely theoretical sense. But they are strongly complementary in many regards.

How free speech aids compassion

Compassion and the compassionate project can be aided by free speech in various ways. For example, to alleviate and prevent suffering effectively with our limited resources, we need to be able to discuss controversial ideas. We need to be able to discuss and measure different values and priorities against each other, including values that many people consider sacred and hence offensive to discuss.

As a case in point, in my book Suffering-Focused Ethics, I defend the moral primacy of reducing extreme suffering, even above other values that many people may consider sacred, and I further discuss the difficult question of which causes we should prioritize so as to best reduce extreme suffering. My arguments will no doubt be deeply offensive and infuriating to many, and I believe a substantial number of people would like to see my ideas suppressed if they could. This is not, of course, unique to my views: all treatises and positions on ethics are bound to be deemed too offensive and too dangerous by some.

This highlights the importance of free speech for ethics in general, and for the project of reducing suffering in particular. To conduct this most difficult conversation about what matters and what our priorities should be, we need a culture that allows, indeed promotes, this conversation — not a culture that stifles it. People who want to reduce suffering should thus have a strong interest in preserving and advancing free speech norms.

Another way in which free speech aids compassion is that, put simply, encouraging the free expression of, and listening to, each other’s underlying grievances can help us build mutual understanding, and in turn enable us to address our problems in cooperative ways. As Noam Chomsky notes in the context of hateful ideologies:

If you have a festering sore, the cure is not to irritate it, but to find out what its roots are and where it comes from, and to deal with those. Racist and other such speech is a festering sore. By silencing it, you simply amplify its appeal, and even lend it a veneer of respectability, as in fact we’ve seen very clearly in the last couple of years. And what has to be done, plainly, is to confront it, and to ask where it comes from, and to try to deal with the roots of such ideas. That’s the way to extirpate the ugliness and evil that lies behind such phenomena.

Human rights activist Deeyah Khan similarly argues that a root source of white supremacy is often a sense of fear and lack of opportunity, not inherent evil or apathy. She contends that the best solution to extremist ideologues is generally to engage in conversation and to seek to understand, not to shut down the conversation. (I recommend watching Khan’s documentary White Right: Meeting the Enemy.)

So while compassion per se does not directly imply free speech at some purely theoretical level, I would argue that a sophisticated and fully extrapolated version of compassion and the compassionate project does necessitate strong norms of free and open expression at the practical level.

How compassion aids free speech

One of the ways in which compassion can aid open conversation is exemplified in Deeyah Khan’s documentary mentioned above: she sits down and listens to white nationalists, seeking to understand them with compassion, which allows them to identify and express their own underlying issues, such as feelings of fear, vulnerability, and unworthiness. Such things can be difficult to share in apathetic and antagonistic environments, be they the macho ingroup or the angry outgroup. “Fuck you, racist” does not quite invite a response of “I’m afraid and hurting” as much as does “How are you feeling, and what really motivates you?” On the contrary, it probably just serves to reinforce the facade that conceals the pain.

We may not usually think of conditions that further the sharing of our underlying worries and vulnerabilities as a matter of free speech, perhaps because we all help perpetuate norms that suppress honesty about these things. But if free speech norms are essentially about enabling us to dare express our honest perspectives, then our de facto suppression of our inmost worries and vulnerabilities is indeed a free speech issue — and a rather consequential one at that (as I think Khan’s White Right makes clear). Compassion may well be the best remedy we have to our truth-subduing culture of suppressing our core worries and vulnerabilities.

A related way in which compassion, specifically the virtue of charitable interpretation, is important for free speech is, quite simply, that we suffocate free speech in its absence. If people hold back from expressing a nuanced view because they know that they will be strawmanned and vilely attacked based on bad-faith misinterpretations, then the state of free expression will be poor indeed.

In contrast, free expression will flourish when we do the opposite: when everyone engages with the strongest version of their opponents’ view — i.e. steelmans it — so that people feel positively motivated to present nuanced views and arguments in the expectation of being critiqued in good faith.

That, needless to say, is far from the state we are currently in.

Why we fail so spectacularly today

We are currently witnessing a primitive tribal dynamic exacerbated by the fact that we inhabit a treacherous environment to which we are not yet adapted, neither biologically nor culturally. I am speaking, of course, of the environment of screen-to-screen interaction.

Yet we should be clear that values and politics were never easy spheres to navigate in the first place. They have always been minefields. Politics is a notorious mind-killer for deep evolutionary reasons, and our political behavior is often more about signaling our group affiliations than it is about creating good policies. This is true not just of the “other side”; it is true of all of us, though we remain largely unaware of and self-deceived about it.

Thus, our predicament is that most of us care deeply about loyalty signaling, and such signaling has now become dangerously inflated. Moreover, we often use beliefs, ostensibly all about tracking reality, as ornaments that signal our group loyalty.

A hostage crisis instilling false assumptions

The two loose social currents I mentioned in the introduction can, I submit, be understood in this light. Specifically, values centered on empathy and compassion have become an ornament of sorts that signals loyalty to one side, while values centered on free speech have become a loyalty signal to another side. To be clear, I am not saying that these values are merely ornaments; they clearly are not. A value can be an ornament displayed with pride and be sincerely held at the same time. Yet our natural inclination to signal group loyalty can lead us to only express our support for one of these values, and to underemphasize the “opposing” value, even if we in fact do favor it.

In this way, the values of compassion and free speech have to some extent become hostages in a primitive tribal game, which in turn gives the false impression that there must be some deep conflict between these values, and that people must choose one or the other, as opposed to these values being, as I have argued, strongly complementary (with occasional and comparatively minor tensions).

The upshot is that supporters of free speech may feel nudged to display insensitivity in order to signal their loyalty and consistency, while supporters of anti-discrimination may feel nudged to oppose free speech.

Uncharitable claims beyond belief

A sad feature of this dynamic, and something that helps fuel it further, is how incredibly uncharitable the outer flanks of these two tribal currents are toward the other side.

“The PC-policing SJWs don’t care about the hard facts and just want to suppress them.”

“The free speech bros don’t care about minorities and just want to oppress them.”

To say that people are failing to steelman here would be quite the understatement. Indeed, this barely even qualifies as a strawman. It is more like the scream-man version of the other side: the worst, most scary version of the other side’s position that one could come up with. And this scream-man is repeatedly rehearsed in the partitioned echo halls of Twitter to the extent that people start believing these preposterously uncharitable narratives about the Scary Other.

It is a tragedy of the commons phenomenon: people are gleefully rewarded in their ingroup each time they promulgate the scream-man representation of the other side, and so it feels right to do so for individuals in these respective groups. But in the bigger picture, it just leaves everyone much worse off.

Distributions and the importance of self-criticism

To be sure, there are serious problems with significant numbers of people who conform too closely to the cartoon descriptions above. But a crucial point is that we must think in terms of statistical distributions. Specifically, the most loud-mouthed and scary two percent of the “other side” — a minority that tends to get a disproportionate amount of attention — should not be taken to represent everyone on that “side”, let alone its most reasonable representatives.

That being said, it is also true that many people on both sides tend to be bad at criticizing the harmful tendencies of their own “team”. There does indeed appear to be a tendency among certain defenders of free speech to fail to criticize and condemn those who discriminate against minorities. Likewise, there really does seem to be a tendency among certain progressives to fail to criticize and condemn those who suppress discussions of contentious issues.

This failure to speak out against the worst elements of one’s “own side” with sufficient force can create the false impression that most people on “our” side actually agree with these worst elements. That is why it is so damaging when we fail to criticize the transparent excesses of our ingroup in clear terms.

At cross-purposes

A problem with our failure to be charitable and to think in terms of distributions is that people end up talking past each other: both sides tend to criticize a strawman version of the other side based on the rabid tail-end elements of that side, which most people on the other side really do disagree with (although they may, as mentioned above, fail to express this disagreement with sufficient clarity).

This frequently results in debates with two sides that are in large part talking at cross-purposes: one side mostly defends free speech, while the other side mostly defends anti-discrimination, as though these were necessarily in great conflict. The failure to explore the compatibility and mutual complementarity of these values is striking.

The perils of screen-to-screen interaction

As noted above, our current mode of interaction only aggravates our political imbecility. When engaged in face-to-face interaction, we naturally relate to and empathize with the person before us, and we have a strong interest in keeping our interaction cordial so as to prevent it from escalating into conflict.

In screen-to-screen interaction, by contrast, our circuits for interpersonal interaction are all but inert, as we find ourselves shielded from salient feedback and danger. Social media is road rage writ large. It is a road rage that renders it extra difficult to be charitable, and which makes it far more tempting to paint the outgroup in a bad light than it could ever be in a face-to-face environment, where preposterous strawmen would be called out and challenged in real time.

As a study on political polarization on Twitter put it:

Many messages contain sentiments more extreme than you would expect to encounter in face-to-face interactions, and the content is frequently disparaging of the identities and views associated with users across the partisan divide.

How can we reduce these unfortunate tendencies? The age of social media calls for new norms of communication.

Better norms for screen communication

Human culture has adapted to technological changes before, and it seems that we have no choice but to do the same today, in the face of our current state of cultural maladaptation. The following are five heuristics, or norms, that I think are likely to be useful in this regard.

1. The face-to-face heuristic

In light of the above, it seems sensible to adopt the precept of communicating online in roughly the same way we would communicate face-to-face. Our skills in face-to-face interaction have deep biological and cultural bases, and hence this heuristic is a cheap way to tap into a well-honed toolbox for functional human communication.

One effect of employing this heuristic will likely be a reduction of sarcastic and taunting comments. Such comments are rarely useful for taking our conversations to the next level, as we tend to realize face-to-face.

2. The nuance heuristic

As I argue in my defense of nuance, much of the tension that we see today could likely be lessened greatly if we adopted more nuanced perspectives, such as by acknowledging grains of truth in different viewpoints, and by representing beliefs in terms of graded credences rather than posturing with overconfident all-or-nothing certainties.

3. The steelman heuristic

I have already mentioned this, but it can hardly be stressed enough: we must strive to be charitable and to steelman the views of our opponents, especially since our road-rage-behind-the-screen predicament makes it easier than ever to do the opposite.

Whenever we summarize and criticize the views of the other side, we should stop and ask ourselves: Is this really the most honest statement of their views that I can muster, let alone the strongest one? If I think their view is painfully stupid, do I really fully understand it? Do I really know what it entails and the best arguments that support it?

4. Compassion for the outgroup

As noted above, compassion really is a consensus value, if ever there were one. The disagreement mostly arises when it comes to which individuals we should extend our compassion to. Both of the notional “sides”, or social currents, described here suffer from selective compassion: they generally fail to show sufficient compassion and respect for the other side, which renders productive conversation difficult. And this point needs to be stressed with unique fervor today, as screens are an all too powerful and insidious catalyst for outgroup apathy.

5. Criticizing the ingroup

Condemning the excesses of one’s (vaguely associated) ingroup is also uniquely important today. Why? Because we now see large numbers of people behaving badly on social media, and our intuitions are statistically illiterate: we do not intuitively understand how a faction endorsing a certain view or behavior can simultaneously be large in number and constitute but a small minority of a given group. The world is big, and we mostly do not understand that (at an intuitive level).

Only by publicly countering the excesses of our “ingroup” can we make it clear to the other side — and perhaps also to our own side — that the extremists truly are a disapproved minority. Such ingroup criticism seems paramount if we are to mitigate the ongoing trend of polarization.

Conclusion

We have created a polarized online society in which people can feel pushed toward a needlessly narrow set of values — compassion or free speech, choose one! We are pushed in this way, not by totalitarian laws, but by modes of communication to which we are not yet adapted, and which we are navigating with patently dysfunctional norms.

Norms are often more important than laws. After all, most of us can think of judgments from our peers that would be worse than a minor prison sentence. Hence, totalitarian laws are not required for free expression to be stifled into a de facto draconian state. The notion that harshly punitive norms do not restrict speech in costly ways is naive.

Sure, we should be free to judge others based on the things they say. But just how harshly should we judge people for discussing controversial views? And do we understand the risks and the strategic costs associated with such judgments, let alone the risks of trying to suppress certain viewpoints? If we place ourselves in opposition to free speech, and then give people the ultimatum of siding either with “us” or with “them”, a lot of people are going to choose the other side, even if that side has features they find genuinely worrying.

I have tried to argue that the choice between free speech and compassion is a false one. It really is possible to chart a balanced middle path toward a free and compassionate society.

Ten Biases Against Prioritizing Wild-Animal Suffering

I recommend reading the short and related post Why Most People Don’t Care About Wild-Animal Suffering by Ben Davidow.

The aim of this essay is to list some of the reasons why animal advocates and aspiring effective altruists may be biased against prioritizing wild-animal suffering. These biasing factors are, I believe, likely to significantly distort the views and priorities of most people who hold impartial moral views that entail concern for the suffering of all non-human animals.


Contents

1. Historical momentum and the status quo

2. Emotionally salient footage

3. Perpetrator bias

4. Omission bias

5. Scope neglect

6. Invertebrate neglect

7. Thinking we can have no impact

8. Underestimating public receptivity

9. Overlooking likely future trajectories

10. Long-term nebulousness bias

Either/Or: A false choice


1. Historical momentum and the status quo

The animal rights movement has, historically, been almost exclusively concerned with the protection of non-human animals exploited by humans. Very little attention has been devoted to suffering in nature for natural reasons. And to the extent the issue has been mentioned by philosophers in the past, it has rarely been framed as something that we ought to do something about.

Only in recent decades has the view that wild-animal suffering deserves serious attention in our practical deliberations been defended more explicitly. And the people who have defended this view have, of course, still been a tiny minority among activists concerned about animal suffering, and they have so far had little impact on the focus and activism of the animal movement at large.

This historical background matters greatly, since we humans very much have a social epistemology: we tend to pick up the views of our peers. For example, most people adopt the religion that is most popular in their geographical region, even if it is not the most rational belief system on reflection. And a similar pattern applies to our views in general. It is truly rare for people to think critically and independently.

Thus, if most people concerned about non-human animals — including our own mentors and personal heroes — have focused almost exclusively on the plight of non-human animals exploited by humans, then we are likely to be strongly inclined to do the same, even if this is not the most rational focus on reflection (in terms of how we can have the best impact on the margin).

2. Emotionally salient footage

Closely related to the point above is the fact that footage of suffering farmed animals constitutes almost all of the disturbing footage we see of animal suffering. Whether on social media or in documentary movies about animal rights, the vast majority of the content encountered by the average animal activist shows cows, pigs, and chickens who are suffering at human hands.

Note how unrepresentative this picture is: a great majority of the animal suffering we observe occurs at human hands, although the vast majority of all suffering beings on the planet are found in nature. It is difficult to see how this can give us anything but a skewed sense of what is actually happening on our planet.

Yet not only will most of us have been exposed to far more suffering occurring at human hands, but we probably also tend to see the victims of such suffering with very different eyes compared to how we see the victims of natural processes. When we, as animal activists, see pigs and chickens suffer at human hands, we look at these beings with sympathy. We feel moral outrage. But when we see a being suffer in nature for natural reasons — for example, a baby elephant getting eaten alive — we are probably more hesitant about activating this same sympathy. Sure, we may lament the suffering and feel bad for the victim. But we do not truly see ourselves in the victim’s place. We do not look at the situation with the same moral eyes that cry “this is unacceptable”.

It is difficult to overstate the significance of this point. For while we may like to think of our activism and moral priorities as being animated chiefly by reasoned arguments, the truth is that salient experiences tend to matter just as much, if not more, for our moral motivation. It is one thing to think that wild-animal suffering is important, but it is quite another to feel it. The latter renders action less optional.

If we had only seen more footage of wild-animal suffering, and — most crucially — dared to behold such footage with truly sympathetic eyes, we would probably feel its moral gravity much more clearly, and in turn feel more motivated to address the problem. It seems unlikely that the priorities of the animal movement would be largely the same if more than 99 percent of the horrible footage encountered by animal activists had displayed the suffering of wild animals (which would be roughly representative of the number of wild animals relative to farmed animals).

3. Perpetrator bias

Another relevant bias to control for is the perpetrator bias: we seem to care more about suffering when it is caused by a moral agent who has brought it about by intentional action (Vinding, 2020, 7.7). By extension, we tend to neglect suffering when it is not caused by intentional actions, such as when it occurs in nature for natural reasons. This bias, and its relevance to our appraisals of wild-animal suffering, has been explored by Tomasik (2013) and Davidow (2013).

As both Tomasik and Davidow argue, this bias could well be among the main reasons why most people, and indeed most animal advocates, tend to neglect the problem of wild-animal suffering. Our moral psychology is very much set up to track the transgressions of perpetrators, which can leave us relatively unmoved by suffering that involves no perpetrators, even if our reflected view is that all suffering should matter equally. After all, the core programming of our moral cognition does not change instantly just because a few of the modules in our minds have come to endorse a more advanced, impartial view.

4. Omission bias

Some version of the omission bias — our tendency to judge harmful acts of omission more leniently than harmful acts of commission, even when the consequences are the same — may be another reason why people with impartial views give less priority to wild-animal suffering than they ideally should. Our moral psychology is plausibly often motivated to focus on wrongs that we can be perceived to be responsible for, and for which we may be blamed.

Suffering caused by humans is in some sense done by “us”, and hence we may instinctively feel that we are more blameworthy for allowing such suffering to occur compared to allowing the suffering of wild animals. This might in turn incline us toward focusing on the former rather than the latter. Yet from an impartial perspective, this is not a sound reason for prioritizing human-caused suffering over “natural” suffering.

5. Scope neglect

Numbers are commonly invoked as one of the main reasons for focusing on farmed animals. For example, there are about a hundred times as many non-human animals used and killed for food as there are companion animals, and hence we should generally spend our limited resources on helping the former rather than the latter. What is less commonly acknowledged, however, is that a similar argument applies to wild animals, who, even if we only count vertebrates, outnumber the vertebrates used and killed for food by at least a thousand times (and perhaps by more than 100,000 times).

Such numbers are notoriously difficult for us to internalize in our moral outlook. Our minds were simply not built to feel the significance of several orders of magnitude. Consequently, we have to make an arduous effort to really appreciate the force of this consideration.

The following illustration from Animal Charity Evaluators shows the relative proportion of wild vertebrates to domesticated vertebrates:

[Illustration: relative numbers of wild vertebrates versus domesticated vertebrates]

6. Invertebrate neglect

Related to, and amplifying, the scope-neglect consideration is our neglect of invertebrate suffering. Not only are domesticated vertebrates outnumbered by wild vertebrates by at least a thousand times, but wild vertebrates are, in turn, outnumbered by wild invertebrates by at least ten thousand times (and perhaps by more than ten million times).

Put differently, more than 99.99 percent of all animals are invertebrates, and virtually all of them live in the wild. Taking the suffering of invertebrates into account thus gives us another strong — and widely ignored — reason in favor of prioritizing wild-animal suffering. And in line with the point about the significance of emotionally salient footage, it may be that we need to watch footage of harmed invertebrates in order for us to fully appreciate the weight of this consideration.
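
To see how quickly these multipliers compound, below is a minimal back-of-the-envelope sketch in Python. It uses only the rough lower-bound ratios quoted in this section and the previous one; the baseline of one “unit” of companion animals is arbitrary, since only the proportions matter.

    # Rough compounding of the lower-bound ratios mentioned above.
    # The baseline (companion animals = 1) is arbitrary; only proportions matter.
    companion = 1
    farmed = 100 * companion                        # ~100x as many farmed animals as companion animals
    wild_vertebrates = 1_000 * farmed               # at least 1,000x as many wild vertebrates as farmed animals
    wild_invertebrates = 10_000 * wild_vertebrates  # at least 10,000x as many wild invertebrates as wild vertebrates

    total = companion + farmed + wild_vertebrates + wild_invertebrates
    print(f"Invertebrate share of all animals: {wild_invertebrates / total:.4%}")
    # -> roughly 99.99 percent, consistent with the figure given above

Even with these conservative multipliers, the top and bottom of the scale differ by nine orders of magnitude, which is precisely the kind of difference our intuitions fail to register.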

7. Thinking we can have no impact

A common objection against focusing on wild-animal suffering is that the problem is intractable — if we could do anything about it, then we should prioritize it, but there just isn’t anything we can do at this point.

This is false in two principal ways. First, we humans already make countless decisions that influence animals in the wild (and we will surely make even more significant such decisions in the future). For example, the environmental policies adopted by our societies already influence large numbers of non-human animals in significant ways, and it would be false to claim that such policies are impossible to influence. After all, environmental groups have already been able to influence such policies to a considerable extent. Sadly, such groups have routinely pushed for policies that are profoundly speciesist and harmful for non-human animals — often with support from animal advocates, which shows how important it is that animal activists do not blindly endorse environmentalist policies, and how important it is that we reflect on the relationship between environmentalist ethics and animal ethics. And, of course, beyond influencing large-scale policy decisions, there are also many interventions we can make on a smaller scale that still help non-human animals in significant ways.

Second, we can help wild animals in indirect ways: by arguing against speciesism and for the importance of taking wild-animal suffering into consideration, as well as by establishing a research field focused on how we can best help wild animals on a large scale. Such indirect work, i.e. work that does not lead to direct interventions in the near term, may be the most important thing we can do at this point, even as our current wildlife policies and direct interventions are already hugely consequential.

So the truth is that there is much we can do at this point to work for a future with fewer harms to wild animals.

8. Underestimating public receptivity

There are reasons to think that animal advocates strongly underestimate public receptivity to the idea that wild-animal suffering matters and is worth reducing (see also what I have written elsewhere concerning the broader public’s receptivity to antispeciesist advocacy).

One reason could be that animal advocates themselves tend to find the idea controversial, and they realize that veganism is already quite controversial to most people. Hence, they reason, if they, as animal advocates, find the idea so controversial, and if most people find mere veganism to be highly controversial, then surely the broader public must find concern for wild-animal suffering extremely controversial.

Yet such an expectation is heavily distorted by the idiosyncratic position in which vegans find themselves. The truth is that most people may well view things the opposite way: veganism is controversial to them because they are currently heavily invested — socially and habit-wise — in non-veganism. By contrast, most people are not heavily invested in non-intervention with respect to wild animals, and thus have little incentive to oppose helping them.

The following is a relevant quote from Oscar Horta that summarizes his experience of giving talks about the issues of speciesism and wild-animal suffering at various high schools (my own software-assisted translation):

Intervention to help animals is easily accepted
There are many antispeciesist activists who are afraid to defend the idea of helping animals in the wild. Even if these activists totally agree with the idea, they believe that most people will reject it completely, and even consider the idea absurd. However, among the people attending the talks there was a very wide acceptance of the idea. Radical cases of intervention were not raised in the talks, but all the examples presented were well accepted. These included cases of injured, sick or trapped animals being rescued; orphan animal shelters; medical assistance to sick or injured animals; vaccination of wild animals; and provision of food for animals at risk of starvation. In sum, there does not seem to be any reason to be afraid of conveying this idea in talks of this type.

Of course, the claim here is not that everybody, or even most people, will readily agree with the idea of helping wild animals — many will surely resist it strongly. But the same holds true of all advocacy on behalf of non-human animals, and the point is that, contrary to our intuitive expectations, public receptivity to helping non-human animals in nature may in many ways be greater than their receptivity to helping farmed animals (although receptivity toward the latter also appears reasonably high when the issue is framed in terms of institutional change rather than individual consumer change).

9. Overlooking likely future trajectories

As I have noted elsewhere:

Veganism is rising, and there are considerable incentives entirely separate from concern for nonhuman animals to move away from the production of animal “products”. In economic terms, it is inefficient to sustain an animal in order to use her flesh and skin rather than to grow meat and other animal-derived products directly, or replace them with plant-based alternatives. Similarly strong incentives exist in the realm of public health, which animal agriculture threatens by increasing the risks of zoonotic diseases, antibiotic resistant bacteria like MRSA, and cardiovascular disease. These incentives, none of which have anything to do with concern for nonhuman animals per se, could well be pushing humanity toward veganism more powerfully than anything else.

So despite the bleakness of the current situation, there are many incentives that appear to push humanity toward the abolition of animal exploitation, and we may even be moving in that direction faster than most of us expect (this is not, of course, a reason to be complacent about the unspeakable moral atrocity of “animal farming”, but it is something to take into account in our approach to helping future beings as much as we can).

In contrast, there are no corresponding incentives that lead us to help non-human animals in nature, and thus no strong reasons to think that humanity (including environmentalists, sadly) will take the interests of wild animals sufficiently into account if we do not advocate on their behalf.

Advocacy focused on wild animals is already vastly neglected in the animal movement today, and when we consider what the future is likely to look like, the level of priority animal advocates currently devote to the problem of wild-animal suffering seems even more disproportionate still.

10. Long-term nebulousness bias

This last bias is a bit more exotic and applies mostly to so-called longtermist effective altruists. People who focus on improving the long-term future can risk ending up with a rather nebulous sense of how to act and what to prioritize: there are so many hypothetical cause areas to consider, and it is often difficult to find tractable ways to further a given cause. Moreover, since there tends to be little real-world data that can help us make progress on these issues, longtermists are often forced to rely mostly on speculation — which in turn opens the floodgates for overconfidence in such speculations. In other words, focusing on the long-term future can easily lead us to rely far too strongly on untested abstractions, and to pay insufficient attention to real-world data and existing problems.

In this way, a (naive) longtermist focus may lead us to neglect concrete problems that evidently do have long-term relevance, and which we can take clear steps toward addressing today. We neglect such problems not only because most of our attention is devoted to more speculative things, but also because these concrete problems do not seem to resemble the “ultimate thing” that clearly improves the long-term future far better than other, merely decent focus areas. Unfortunately, such an “ultimate thing” is, I would argue, unlikely to ever be found. (And if one thinks one has found it, there are reasons to be skeptical.)

In effect, a naive longtermist focus can lead us to overlook just how promising work to reduce wild-animal suffering in fact is, and how long a list of compelling reasons one can give in its favor: in terms of scale, it vastly dominates all other sources of currently existing suffering; it is, as argued above, a tractable problem where there are fairly concrete and robust ways to make progress; and the problem is likely to exist and be dominant in scale for a long time — centuries, at least.

More than that, work to reduce wild-animal suffering is also likely to have many good flow-through effects. For example, such work is probably among the most promising actions we can take to prevent the spread of wild-animal suffering to space, which is one of the least speculative s-risks (i.e. risks of astronomical future suffering). Indeed, there are already people who actively advocate that humanity should spread nature to space, and concrete proposals for how it could be accomplished already exist.

The risk of spreading wild-animal suffering to space appears greater than the risk of spreading factory farming to space, not least in light of the point made in the previous section concerning the incentives and future technologies that are likely to render factory farming obsolete. One may, of course, object that the risks of astronomical future suffering we reduce by addressing factory farming today do not involve factory farming itself but rather future analogs of it. This is a fair point, and such risks of future analogs to factory farming should indeed be taken seriously. However, by the same token, one can argue that we also address future analogs to wild-animal suffering by working on that problem today, and indeed further argue that this would be a superior focus.

After all, work to address wild-animal suffering appears more wide-ranging and inclusive than does work to address factory farming — for example, it is difficult to imagine a future where we address wild-animal suffering (and analog problems) yet fail to address factory farming (and analog problems). Future scenarios where we address the latter yet fail to address the former seem more plausible, since addressing wild-animal suffering takes a greater level of moral sophistication: it not only requires that we avoid directly harming other beings, but also that we actively help them.

Which brings us to another positive secondary effect of focusing on wild-animal suffering: such a focus embodies and reinforces the virtue of factoring in numbers in our moral deliberations, as well as the virtue of extending our circle of moral concern — and responsibility — to even include beings who suffer for reasons we ourselves had no hand in. It is a focus that reflects a truly universal view of our moral obligations, and it does this to a significantly greater extent than a mere opposition to factory farming or (anthropogenic) animal exploitation in general.

To be clear, I am not claiming that wild-animal suffering is necessarily the best thing to focus on for people trying to reduce suffering in the long-term future (I myself happen to think suffering-focused research of a more general nature is somewhat better). But I do claim that it is a decent candidate, and a better candidate than one is likely to realize when caught up in speculative far-mode sequence thinking.

Either/Or: A false choice

To say that most of us likely have strong biases against prioritizing wild-animal suffering, and that we should give it much greater priority, is not to say that we cannot still support efforts to abolish animal exploitation, and indeed do effective work toward this end.

As I have argued elsewhere, one of the many advantages of antispeciesist advocacy is that it encompasses all non-human animals and all the suffering they endure — anthropogenic as well as naturogenic.


Addendum: An important bias I left out above is the “proportion bias” (Vinding, 2020, 7.6), also known as “proportion dominance”: our tendency to care more about helping 10 out of 10 individuals than we care about helping 10 out of 100, even though the impact is the same. This bias is especially relevant in the context of wild-animal suffering given the enormous scale at which it continually occurs as a backdrop to any altruistic effort we may pursue.

Chimps, Humans, and AI: A Deceptive Analogy

The prospect of smarter-than-human artificial intelligence (AI) is often presented and thought of in terms of a simple analogy: AI will stand in relation to us the way we stand in relation to chimps. In other words, AI will be qualitatively more competent and powerful than us, and its actions will be as inscrutable to humans as current human endeavors (e.g. science and politics) are to chimps.

My aim in this essay is to show that this is in many ways a false analogy. The difference in understanding and technological competence found between modern humans and chimps is, in an important sense, a zero-to-one difference that cannot be repeated.


Contents

  1. How Are Humans Different from Chimps?
    I. Symbolic Language
    II. Cumulative Technological Innovation
  2. The Range of Human Abilities Is Surprisingly Wide
  3. Why This Is Relevant

How Are Humans Different from Chimps?

A common answer to this question is that humans are smarter. Specifically, at the level of our individual cognitive abilities, humans, with our roughly three times larger brains, are just far more capable.

This claim no doubt contains a large grain of truth, as humans surely do beat chimps in a wide range of cognitive tasks. Yet it is also false in some respects. For example, chimps have superior working memory compared to humans, and they can apparently also beat humans in certain video games, including games involving navigation in complex mazes.

Researchers who study human uniqueness provide some rather different, more specific answers to the question. If we focus on individual mental differences in particular, researchers have found that, crudely speaking, humans are different from chimps in three principal ways: 1) we can learn language, 2) we have a strong orientation toward social learning, and 3) we are highly cooperative (among our ingroup, compared to chimps).

These differences have in turn resulted in two qualitative differences in the abilities of humans and chimps in today’s world.

I. Symbolic Language

The first qualitative difference is that we humans have acquired an ability to think and communicate in terms of symbolic language that represents complex concepts. We can learn about the deep history of life and about the likely future of the universe, including the fundamental limits to space travel and future computations given our current understanding of physics. Any educated human can learn a good deal about these things whereas no chimp can.

Note how this is truly a zero-to-one difference: no symbolic language versus advanced symbolic language through which knowledge can be represented and continually developed (Deacon, 1997, ch. 1). It is the difference between having no science of physics versus having an extensive such science with which we can predict future events and estimate some hard limits on future possibilities.

In many respects, this zero-to-one difference cannot be repeated. Given that we already have physical models that predict, say, the future motion of planets and the solar system to a high degree of accuracy, the best one can do in this respect is to (slightly) improve the accuracy of these predictions. Such further improvements cannot be compared to going from zero conceptual physics to current physics.

The same point applies to our scientific understanding more generally: we currently have theories that work decently at explaining most of the phenomena around us. And while one can significantly improve the accuracy and sophistication of many of these theories, such further improvements will likely be less significant than the qualitative leap from absolutely no conceptual models to the entire collection of models and theories that we currently have.

For example, going from no understanding of evolution by natural selection to the elaborate understanding of biology we have today can hardly be matched, in terms of qualitative and revolutionary leaps, by further refinements in biology. We have already mapped out the core basics of biology, especially when it comes to the history of life on Earth, and this can only be done once.

The point that the emergence of conceptual understanding is a kind of zero-to-one step has been made by others. Robin Hanson has made essentially the same point in response to the notion that future machines will be “as incomprehensible to us as we are to goldfish”:


This seems to me to ignore our rich multi-dimensional understanding of intelligence elaborated in our sciences of mind (computer science, AI, cognitive science, neuroscience, animal behavior, etc.).

… the ability of one mind to understand the general nature of another mind would seem mainly to depend on whether that first mind can understand abstractly at all, and on the depth and richness of its knowledge about minds in general. Goldfish do not understand us mainly because they seem incapable of any abstract comprehension. …

It seems to me that human cognition is general enough, and our sciences of mind mature enough, that we can understand much about quite a diverse zoo of possible minds, many of them much more capable than ourselves on many dimensions.


Ramez Naam has argued similarly in relation to the idea that there will be some future time or intelligence that current humans are fundamentally unable to understand. He argues that our understanding of the future is growing rather than shrinking as time progresses, and that AI and other future technologies will not be beyond comprehension:


All of those [future technologies] are still governed by the laws of physics. We can describe and model them through the tools of economics, game theory, evolutionary theory, and information theory. It may be that at some point humans or our descendants will have transformed the entire solar system into a living information processing entity — a Matrioshka Brain. We may have even done the same with the other hundred billion stars in our galaxy, or perhaps even spread to other galaxies.

Surely that is a scale beyond our ability to understand? Not particularly. I can use math to describe to you the limits on such an object, how much computing it would be able to do for the lifetime of the star it surrounded. I can describe the limit on the computing done by networks of multiple Matrioshka Brains by coming back to physics, and pointing out that there is a guaranteed latency in communication between stars, determined by the speed of light. I can turn to game theory and evolutionary theory to tell you that there will most likely be competition between different information patterns within such a computing entity, as its resources (however vast) are finite, and I can describe to you some of the dynamics of that competition and the existence of evolution, co-evolution, parasites, symbiotes, and other patterns we know exist.


Chimps can hardly understand human politics and science to a similar extent. Thus, the truth is that there is a strong disanalogy between the understanding that chimps have of humans versus the understanding that we humans — thanks to our conceptual tools — can have of any possible future intelligence (in physical and computational terms, say).

Note that the qualitative leap reviewed above was not one that happened shortly after human ancestors diverged from chimp ancestors. Instead, it was a much more recent leap that has been unfolding gradually since the first humans appeared, and which has continued to accelerate in recent centuries, as we have developed ever more advanced science and mathematics. In other words, this qualitative step has been a product of cultural evolution just as much as biological evolution. Early humans presumably had a roughly similar potential to learn modern language, science, mathematics, and so on. But such conceptual tools could not be acquired in the absence of a surrounding culture able to teach these innovations.

Ramez Naam has made a similar point:


If there was ever a singularity in human history, it occurred when humans evolved complex symbolic reasoning, which enabled language and eventually mathematics and science. Homo sapiens before this point would have been totally incapable of understanding our lives today. We have a far greater ability to understand what might happen at some point 10 million years in the future than they would to understand what would happen a few tens of thousands of years in the future.


II. Cumulative Technological Innovation

The second zero-to-one difference between humans and chimps is that we humans build things and refine our technology over time. To be sure, many non-human animals use tools in the form of sticks and stones, and some even shape primitive tools of their own. But only humans improve and build upon the technological inventions of their ancestors.

Thus, humans are unique in expanding their abilities by systematically exploiting their environment, molding the things around them into increasingly productive self-extensions. We have turned wildlands into crop fields, we have created technologies that can harvest energy, and we have built external memories far more reliable than our own, such as books and hard disks.

This is another qualitative leap that cannot be repeated: the step from having absolutely no cumulative technology to exploiting and optimizing our external environment toward our own ends — the step from having no external memory to having the current repository of stored human knowledge at our fingertips, and from harvesting absolutely no energy (other than through individual digestion) to collectively harvesting and using hundreds of quintillions of Joules every year.

Of course, it is possible to improve on and expand these innovations. We can harvest greater amounts of energy, for example, and create even larger external memories. Yet these are merely quantitative differences, and humanity indeed continually makes such improvements each year. They are not zero-to-one differences that only a new species could bring about.

In sum, we are unique in being the first species that systematically sculpted our surrounding environment and turned it into ever-improving tools. This step cannot be repeated, only expanded further.

Just like the qualitative leap in our symbolic reasoning skills, the qualitative leap in our ability to create technology and shape our environment emerged not between chimps and early humans, but between early humans and today’s humans, as the result of a cultural process occurring over thousands of years. In fact, the two leaps have been closely related: our ability to reason and communicate symbolically has enabled us to create cumulative technological innovation. Conversely, our technologies have allowed us to refine our knowledge and conceptual tools (e.g. via books, telescopes, and particle accelerators); and such improved knowledge has in turn made us able to build even better technologies with which we could advance our knowledge even further, and so on.

This, in a nutshell, is the story of the interdependent growth of human knowledge and technology, a story of recursive self-improvement (Simler, 2019, “On scientific networks”). It is not really a story about the individual human brain per se. After all, the human brain does not accomplish much in isolation. It is more a story about what happened between and around brains: in the exchange of information in networks of brains and in the external creations designed by them — a story made possible by the fact that the human brain is unique in being by far the most cultural brain of all, with its singular capacity to learn from and cooperate with others.

The Range of Human Abilities Is Surprisingly Wide

Another way in which an analogy to chimps is often drawn is by imagining an intelligence scale along which different species are ranked, such that, for example, we have “rats at 30, chimps at 60, the village idiot at 90, the average human at 98, and Einstein at 100”, and where future AI may in turn be ranked many hundreds of points higher than Einstein. According to this picture, it is not just that humans will stand in relation to AI the way chimps stand in relation to humans, but that AI will be far superior still. The human-chimp analogy is, on this view, a severe understatement of the difference between humans and future AI.

Such an intelligence scale may seem intuitively compelling, but how does it correspond to reality? One way to probe this question is to examine the range of human abilities in chess (as but one example that may provide some perspective; it obviously does not represent the full picture by any means).

The standard way to rank chess skills is with the Elo rating system, which is a good predictor of the outcomes of chess games between different players, whether human, digital, or otherwise. A raw human beginner will have a rating of around 300, a novice around 800, while a rating in the range 2000-2199 earns the title of “Expert”. The highest rating ever achieved is 2882, by Magnus Carlsen.

How large is this range of chess skills in an absolute sense? Remarkably large, it turns out. For example, it took more than four decades from when computers were first able to beat a human chess novice (in the 1950s) until a computer was able to beat the best human player (officially in 1997). In other words, the span from novice to Kasparov corresponded to more than four decades of progress in both software and hardware, with the hardware progress amounting to roughly a million times more computing power. This alone suggests that the human range of chess skills is rather wide.
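As a rough back-of-the-envelope illustration (my own sketch, assuming one doubling of computing power every two years; not a claim about the actual hardware history), a million-fold increase corresponds to roughly twenty doublings, or about four decades:

```python
import math

# How many doublings give a million-fold increase in computing power,
# and how long does that take at one doubling every two years?
# (Illustrative assumption of a two-year doubling time.)
factor = 1_000_000
doublings = math.log2(factor)   # ~19.9 doublings
years = doublings * 2           # ~40 years
print(f"{doublings:.1f} doublings ≈ {years:.0f} years")
```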

Yet the range seems even broader when we consider the upper bounds of chess performance. After all, the fact that it took computers decades to go from human novice to world champion does not mean that the best human is not still far from the best a computer could be in theory. Surprisingly, however, this latter distance does in fact seem quite small. Estimates suggest that the best possible chess machine would have an Elo rating around 3600.

This would mean that the relative distance between the best possible computer and the best human is only around 700 Elo points (the Elo rating is essentially a measure of relative distance; 700 Elo points corresponds to a winning percentage of around 1.5 percent for the losing player).
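For reference, here is a minimal sketch of the standard Elo expected-score formula; note that the expected score counts a draw as half a win, so it is not exactly the same as a winning percentage. The specific rating gaps in the comments are my own illustrative readings of the figures above:

```python
def elo_expected_score(rating_gap: float) -> float:
    """Expected score of the weaker player under the standard Elo model."""
    return 1.0 / (1.0 + 10.0 ** (rating_gap / 400.0))

# Illustrative gaps based on the figures above (assumed, not exact):
#  ~700 points: best human (~2882) vs. estimated best possible engine (~3600)
#  ~780 points: a 2100-rated "Expert" vs. the best human
# ~2500 points: a ~300-rated beginner vs. the best human
for gap in (700, 780, 2500):
    print(f"{gap:>4} Elo points -> expected score {elo_expected_score(gap):.6f}")
```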

Thus, the distance between the best human and a chess “Expert” appears similar to the distance between the best human and the best possible chess brain, while the distance between a human beginner and the best human is far greater (2500 Elo points). This stands in stark contrast to the intelligence scale outlined above, which would predict the complete opposite: the distance from a human novice to the best human should be comparatively small whereas the distance from the best human to the optimal brain should be the larger one by far.

Of course, chess is a limited game that by no means reflects all relevant tasks and abilities. Even so, the wide range of human abilities in chess still serves to question popular claims about the supposed narrowness of the human range of ability.

Why This Is Relevant

The errors of the human-chimp analogy are worth highlighting for a few reasons. First, the analogy may lead us to underestimate how much we currently know and are able to understand. To think that intelligent systems of the future will be as incomprehensible to us today as human affairs are to chimps is to underestimate how extensive and universal our current knowledge of the world in fact is — not just when it comes to physical and computational principles, but also in relation to general economic and game-theoretic principles. For example, we know a good deal about economic growth, and this knowledge has a lot to say about how we should expect future intelligent systems to grow. In particular, it suggests that a sudden local AI takeoff scenario (AI-FOOM growth) is unlikely.

The analogy can thus have an insidious influence by making us feel like current theories and trends cannot be trusted much, because look how different humans are from chimps, and look how puny the human brain is compared to ultimate limits. I think this is exactly the wrong way to think about the future. I believe we have good reasons to base our expectations on our best available theories and on a deep study of past trends, including the actual evolution of human competences — not on simple analogies.

Relatedly, the human-chimp analogy is also relevant in that it can lead us to greatly overestimate the probability of a localized AI takeoff scenario. That is, if we get the story about the evolution of human competences so wrong that we think the differences we observe today between chimps and humans reduce chiefly to a story about changes in individual brains — as opposed to a much broader story about biological, cultural, and technological developments — then we are likely to have similarly inaccurate expectations about what comparable “brain innovations” in some individual machine would lead to on their own.

If the human-chimp analogy causes us to overestimate the probability of a localized AI takeoff scenario, it may nudge us toward focusing too much on some single, concentrated future thing that we expect to be all-important: the AI that suddenly becomes qualitatively more competent than humans. In effect, the human-chimp analogy can lead us to neglect broader factors, such as cultural and institutional developments.

To be clear, the points above are by no means a case for complacency about risks from AI. It is important that we get a clear picture of such risks, and that we allocate our resources accordingly. But this requires us to rely on accurate models of the world. If we overemphasize one set of risks, we are by necessity underemphasizing others.

Suffering-Focused Ethics: Defense and Implications

The reduction of suffering deserves special priority. Many ethical views support this claim, yet so far these have not been presented in a single place. Suffering-Focused Ethics provides the most comprehensive presentation of suffering-focused arguments and views to date, including a moral realist case for minimizing extreme suffering. The book then explores the all-important issue of how we can best reduce suffering in practice, and outlines a coherent and pragmatic path forward.

Amazon (Kindle, paperback, and hardcover)
Apple Books, Barnes & Noble, Kobo, Scribd, Vivlio, 24symbols, Angus & Robertson
Smashwords
Paperback PDF
Audible/Amazon (audiobook)



“An inspiring book on the world’s most important issue. Magnus Vinding makes a compelling case for suffering-focused ethics. Highly recommended.”
— David Pearce, author of The Hedonistic Imperative and Can Biotechnology Abolish Suffering?

“We live in a haze, oblivious to the tremendous moral reality around us. I know of no philosopher who makes the case more resoundingly than Magnus Vinding. In radiantly clear and honest prose, he demonstrates the overwhelming ethical priority of preventing suffering. Among the book’s many powerful arguments, I would call attention to its examination of the overlapping biases that perpetuate moral unawareness. Suffering-Focused Ethics will change its readers, opening new moral and intellectual vistas. This could be the most important book you will ever read.”
— Jamie Mayerfeld, professor of political science at the University of Washington, author of Suffering and Moral Responsibility and The Promise of Human Rights

“In this important undertaking, Magnus Vinding methodically and convincingly argues for the overwhelming ethical importance of preventing and reducing suffering, especially of the most intense kind, and also shows the compatibility of this view with various mainstream ethical philosophies that don’t uniquely focus on suffering. His careful analytical style and comprehensive review of existing arguments make this book valuable reading for anyone who cares about what matters, or who wishes to better understand the strong rational underpinning of suffering-focused ethics.”
— Jonathan Leighton, founder of the Organisation for the Prevention of Intense Suffering, author of The Battle for Compassion: Ethics in an Apathetic Universe

“Magnus Vinding breaks the taboo: Today, the problem of suffering is the elephant in the room, because it is at the same time the most relevant and the most neglected topic at the logical interface between applied ethics, cognitive science, and the current philosophy of mind and consciousness. Nobody wants to go there. It is not good for your academic career. Only few of us have the intellectual honesty, the mental stamina, the philosophical sincerity, and the ethical earnestness to gaze into the abyss. After all, it might also gaze back into us. Magnus Vinding has what it takes. If you are looking for an entry point into the ethical landscape, if you are ready to face the philosophical relevance of extreme suffering, then this book is for you. It gives you all the information and the conceptual tools you need to develop your own approach. But are you ready?”
— Thomas Metzinger, professor of philosophy at the Johannes Gutenberg University of Mainz, author of Being No One and The Ego Tunnel

Animal Advocates Should Focus On Antispeciesism, Not Veganism

First published: Dec. 2016.

How can we help nonhuman animals as much as possible? A good answer to this question could spare billions from suffering and death, while a bad one could condemn as many to that fate. So it’s worth taking our time to find good answers.

Focusing our advocacy on antispeciesism may be our best bet. In short, antispeciesist advocacy looks promising because it encompasses all nonhuman animals and implies great obligations toward them, and also because people may be especially receptive to such advocacy. More than that, antispeciesism is also likely to remain relevant for a long time, which makes it seem uniquely robust when we consider things from a very long-term perspective.

The value of antispeciesist advocacy

Antispeciesism addresses all the ways in which we discriminate against nonhuman animals, not just select sites of that discrimination, like circuses or food farms. Unlike more common approaches to animal advocacy, it demands that we take all forms of suffering endured by nonhuman animals into consideration.

Campaigns against fur farming, for instance, do not also cover the suffering and death involved in other forms of speciesist exploitation, such as the egg and dairy industries. Veganism, on the other hand, is much broader, in that it rejects all directly human-caused animal suffering. Advocating for the interests of comparatively few beings when we could advocate for the interests of many more with the same time and resources is likely a lost opportunity.

But even veganism is not as broad as antispeciesism, since it says nothing about the vast majority of sentient beings on the planet: animals who live in nature. Wild animals also suffer, and should not be granted less consideration simply because their suffering is not our fault.

Antispeciesism implies veganism – i.e. that we “exclude, as far as is possible and practicable, all forms of exploitation of, and cruelty to, animals for food, clothing or any other purpose” – but unlike veganism it also requires us to give serious consideration to nonhuman animals who are harmed in nature. Antispeciesism implies that we should help wild animals in need, just as we should help humans suffering from starvation or disease that we didn’t cause. Unfortunately, nonhuman animals are often harmed in nature, and often do succumb to starvation and thirst. Fortunately, there is much we can do to work for a future with fewer harms to them.



Even if we expect people to be more receptive to messaging that is narrower in focus and easier to agree with, the all-encompassing nature of antispeciesist advocacy could mean it has greater value overall.

But are people really less receptive to such advocacy anyway? The concept of speciesism may seem abstract and advanced, and may strike us as something only committed animal rights advocates know of and understand. Yet there are reasons to think that this gut intuition is wrong.

Oscar Horta, a professor of moral philosophy who has delivered talks about animal rights around the world, has repeatedly put this pessimistic intuition to the test. At various talks delivered to Spanish high school students, he has attempted to systematically evaluate the attitudes of the attendees by giving them a questionnaire. One of the main results of this evaluation, according to Horta, was that “contrary to what some people think, most people who attended these talks accepted the arguments against speciesism.”[1]

So reportedly, a majority of attendees accepted the arguments against speciesism. And perhaps we should not be that surprised. Most people already understand the concept of discrimination, and speciesism is just another form of discrimination. The fact that many people are already familiar with the concept of discrimination, and agree that it is not justified, suggests that there is a ready template on which arguments against speciesism can build. This could partly explain why most of Horta’s attendees accepted the arguments against speciesism. An additional reason might be that the arguments against speciesism are exceptionally strong and hard to argue with.

Another interesting finding from Horta was that students appeared more receptive to a message opposing speciesism than to one supporting veganism. As he reports:

What is controversial is not really the discussion about speciesism. On the contrary, the most controversial point is (as might be expected), the discussion about whether we should stop eating animal “products”. Yet this discussion can also be carried out without major problems, at least if a couple of recommendations are followed: First of all, that this discussion arises not at the beginning of the talk, but rather towards the end, when speciesism and the need to respect all sentient beings has already been discussed. At that point, there is a greater willingness to consider this issue, because people who attend the talk then have a favorable attitude both toward animals and the speaker. But if we proceed in the opposite order and first argue for veganism and then raise the arguments about speciesism, the reaction is different. The result is that there is less willingness to consider the issue of veganism. And not only that, acceptance of arguments about speciesism is lower as well.

If this difference in effectiveness between vegan and antispeciesist messaging holds in the broader public, the implications for advocacy are profound: even if our goal were only to promote veganism, the best way to do so might be to talk about speciesism rather than, or at least before, talking about veganism. It is also worth noting that talking about veganism straight away seems to have made the students less receptive not only to veganism itself, but also to the arguments against speciesism.

More thorough replication of Horta’s findings, on larger and more varied populations, would significantly increase our confidence in the conclusion that antispeciesist advocacy is superior to vegan advocacy for creating antispeciesists, as well as vegans. Until then, Horta’s reported findings do at least suggest that people can accept arguments against speciesism.

Is vegan advocacy costly to wild animals?

Vegan advocacy could also be costly to animals not encompassed by vegan advocacy. Horta states:

There are many people involved in antispeciesism who are afraid to defend the idea that we should help animals in need in nature. Even though they fully agree with it, they believe that most people totally reject that idea, and even consider it absurd. However, among those attending the talks, there was a broad acceptance of the idea.

This is good news for animals and their advocates, given that the vast majority of nonhuman animals live in nature. Helping animals in the wild, such as through vaccinations and cures for diseases, may be among the most effective ways in which we can help nonhuman animals. Vegan advocacy excludes consideration of their interests, but antispeciesist advocacy does not.

This means that not only might it be costly to focus mainly on veganism in the interest of spreading veganism itself (compared to focusing mainly on speciesism and then raising the issue of veganism), but it might also be costly with respect to the goal of helping animals in nature. It’s possible that talking about veganism rather than speciesism makes it significantly harder to bring about interventions that could help nonhuman animals.

Compared to veganism, antispeciesism is also much harder to confuse with environmentalism, supporters of which often recommend overtly speciesist interventions such as the mass killing of beings in the name of “healthy ecosystems” and biodiversity. This lack of potential for confusion is another strong reason in favor of antispeciesist advocacy.

Beyond veganism

Antispeciesist advocacy is also much more neglected than vegan advocacy. Veganism is rising, and there are considerable incentives entirely separate from concern for nonhuman animals to move away from the production of animal “products”. In economic terms, it is inefficient to sustain an animal in order to use her flesh and skin rather than to grow meat and other animal-derived products directly, or replace them with plant-based alternatives. Similarly strong incentives exist in the realm of public health, which animal agriculture threatens by increasing the risks of zoonotic diseases, antibiotic-resistant bacteria like MRSA, and cardiovascular disease. These incentives, none of which have anything to do with concern for nonhuman animals per se, could well be pushing humanity toward veganism more powerfully than anything else.

While veganism likely has a promising future, the future of antispeciesism seems much less clear and less promising, and far fewer people are working to promote it. This suggests that our own limited resources might be better spent promoting the latter. When thinking about how to build a better tomorrow, we should also consider the tomorrows that follow: even if we have a virtually vegan world a century from now due to the incentives mentioned above, the world will likely still be speciesist in many other respects. So in addition to the appeal antispeciesist advocacy has for the nonhuman animals whom humans are actively harming now, the explicitly antispeciesist approach is important for the sake of nonhuman animals in the future. Working towards a less speciesist future could both help close down the slaughterhouses and help many animals long after.

Additionally, the spread of antispeciesism might also be a useful stepping stone toward concern for sentient beings of nonanimal kinds. Unfortunately, there is a risk that new kinds of sentient beings could emerge in the future – for instance, biologically engineered brains – and become the victims of a whole new kind of factory farming. Just as concern for humans who face discrimination provides useful support today when the case against speciesism is made, antispeciesism could well generalize in a similar way and provide such support in the case against new forms of discrimination.

A final point in favor of antispeciesist advocacy over vegan advocacy is that the message of the former is clearly ethico-political in nature, and therefore does not risk being confused with an amoral consumerist preference or fad, as veganism often is. The core of antispeciesism is clear, easy to communicate, and much follows from it in terms of the practical implications.

Additional Resources

Oscar Horta provides more reasons to favor antispeciesist advocacy in a talk entitled “About Strategies”. See also my Notes on the Utility of Antispeciesist Advocacy.

This article was originally published on the website of Sentience Politics.


[1] My own machine-assisted translation

The future of growth: Near-zero growth rates

First written: Jul. 2017. Last update: Nov. 2022.

Exponential growth is a common pattern found throughout nature. Yet it is also a pattern that tends not to last, as growth rates tend to decline sooner or later.

In biology, this pattern of exponential growth that eventually tapers off is found in everything from the development of individual bodies — for instance, in the growth of humans, which levels off in the late teenage years — to the sizes of populations.

One may of course be skeptical that this general trend will also apply to the growth of our technology and economy at large, as innovation seems to continually postpone our clash with the ceiling. Yet it seems inescapable that the trend must apply here too: in light of what we know about physics, we can conclude that exponential growth of the kinds we see today, in technology in particular and in our economy more generally, must come to an end, and do so relatively soon.

Limits to growth

Physical limits to computation and Moore’s law

One reason we can make this assertion is that there are theoretical limits to computation. As physicist Seth Lloyd’s calculations show, a continuation of Moore’s law — in its most general formulation: “the amount of information that computers are capable of processing and the rate at which they process it doubles every two years” — would imply that we hit the theoretical limits of computation within 250 years:

If, as seems highly unlikely, it is possible to extrapolate the exponential progress of Moore’s law into the future, then it will only take two hundred and fifty years to make up the forty orders of magnitude in performance between current computers that perform 10^10 operations per second on 10^10 bits and our one kilogram ultimate laptop that performs 10^51 operations per second on 10^31 bits.

Similarly, physicists Lawrence Krauss and Glenn Starkman have calculated that, even if we factor in colonization of space at the speed of light, this doubling of processing power cannot continue for more than 600 years in any civilization:

Our estimate for the total information processing capability of any system in our Universe implies an ultimate limit on the processing capability of any system in the future, independent of its physical manifestation and implies that Moore’s Law cannot continue unabated for more than 600 years for any technological civilization.

In a more recent lecture and a subsequent interview, Krauss said that the absolute limit for the continuation of Moore’s law, in our case, would be reached in less than 400 years (the discrepancy between the numbers 400 and 600 is at least in part because Moore’s law, in its most general formulation, has already played out for more than a century in our civilization). And, as both Krauss and Lloyd have stressed, these are ultimate theoretical limits, resting on assumptions that are unlikely to be met in practice, such as expansion at the speed of light. What is possible in terms of how long Moore’s law can continue, given both engineering and economic constraints, is likely significantly less. Indeed, we are already close to the physical limits of the paradigm that Moore’s law has been riding on for more than 50 years — silicon transistors, the only paradigm Gordon Moore was originally talking about — and it is not clear whether other paradigms will be able to take over and keep the trend going.
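As a rough reconstruction of Lloyd’s timescale (a sketch under my own simplifying assumption of one doubling every two years; Lloyd’s figure of roughly 250 years presumably rests on slightly different parameters), the forty orders of magnitude he mentions correspond to roughly 130 doublings:

```python
import math

# Forty orders of magnitude of performance expressed as doublings,
# and the time this takes at one doubling every two years (assumed).
orders_of_magnitude = 40
doublings = orders_of_magnitude * math.log2(10)   # ~133 doublings
years = doublings * 2                             # ~266 years
print(f"{doublings:.0f} doublings ≈ {years:.0f} years")
```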

Limits to the growth of energy use

Physicist Tom Murphy has calculated a similar limit for the growth of our civilization’s energy consumption. Based on the observation that the energy consumption of the United States has increased fairly consistently at an average annual growth rate of 2.9 percent over the last 350-odd years (although the growth rate has slowed in recent times and has remained stably below 2.9 percent since around 1980), Murphy proceeds to derive the limits for the continuation of similar energy growth. He does this, however, by assuming an annual growth rate of “only” 2.3 percent, which conveniently results in an increase of total energy consumption by a factor of ten every 100 years. If we assume that we will continue expanding our energy use at this rate by covering Earth with solar panels, this would, on Murphy’s calculations, imply that we will have to cover all of Earth’s land with solar panels in less than 350 years, and all of Earth, including the oceans, in 400 years.

Beyond that, assuming that we could capture all of the energy from the sun by surrounding it with solar panels, the 2.3 percent growth rate would come to an end within 1,350 years from now. And if we go further out still, to capture the energy emitted by all the stars in our galaxy, we find that this growth rate must hit the ceiling and fall to near zero within 2,500 years (of course, the limit of the physically possible must be hit earlier, indeed more than 500 years earlier, as we cannot traverse our 100,000-light-year-wide Milky Way in only 2,500 years).
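As a rough cross-check on these timescales (a sketch using my own round numbers of roughly 18 TW for present world energy consumption and standard values for stellar output, not Murphy’s exact inputs), a steady 2.3 percent growth rate does indeed run into these ceilings on roughly the stated timescales:

```python
import math

# Years of 2.3% annual growth needed to go from present consumption to a target.
# Assumed round numbers (mine, not Murphy's exact inputs):
present_power = 1.8e13                 # watts, ~current world energy use
solar_output = 3.8e26                  # watts, total luminosity of the sun
galactic_output = solar_output * 1e11  # ~100 billion sun-like stars
growth = 1.023

def years_to_reach(target_watts: float) -> float:
    return math.log(target_watts / present_power) / math.log(growth)

print(f"Entire solar output:    ~{years_to_reach(solar_output):.0f} years")     # ~1,350 years
print(f"Entire galactic output: ~{years_to_reach(galactic_output):.0f} years")  # ~2,460 years
```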

One may suggest that alternative sources of energy might change this analysis significantly, yet, as Murphy notes, this does not seem to be the case:

Some readers may be bothered by the foregoing focus on solar/stellar energy. If we’re dreaming big, let’s forget the wimpy solar energy constraints and adopt fusion. The abundance of deuterium in ordinary water would allow us to have a seemingly inexhaustible source of energy right here on Earth. We won’t go into a detailed analysis of this path, because we don’t have to. The merciless growth illustrated above means that in 1400 years from now, any source of energy we harness would have to outshine the sun.

Essentially, keeping up the annual growth rate of 2.3 percent by harnessing energy from matter not found in stars would force us to make such matter hotter than stars themselves. We would have to create new stars of sorts, and, even if we assume that the energy required to create such stars is less than the energy gained, such an endeavor would quickly run into limits as well. For according to one estimate, the total mass of the Milky Way, including dark matter, is only 20 times greater than the mass of its stars. Assuming a 5:1 ratio of dark matter to ordinary matter, this implies that there is only about 3.3 times as much ordinary matter in total as there is stellar matter in our galaxy (and hence only about 2.3 times as much ordinary non-stellar matter as stellar matter). Thus, even if we could convert all this non-stellar matter into stars without spending any energy and harvest the resulting energy, the total stellar output would only increase by a factor of roughly 3.3, which would only give us about 50 years more of keeping up with the annual growth rate of 2.3 percent.1
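A quick check of that final figure (a sketch assuming, as above, that converting all non-stellar ordinary matter into stars multiplies total stellar output by roughly 3.3):

```python
import math

# Extra years of 2.3% annual energy growth bought by a ~3.3x increase in output.
factor = 3.3
extra_years = math.log(factor) / math.log(1.023)
print(f"~{extra_years:.0f} years")   # roughly 50 years
```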

Limits derived from economic considerations

Similar conclusions to the ones drawn above for computation and energy also seem to follow from calculations of a more economic nature. For, as economist Robin Hanson has argued, projecting present economic growth rates into the future also leads to a clash against fundamental limits:

Today we have about ten billion people with an average income about twenty times subsistence level, and the world economy doubles roughly every fifteen years. If that growth rate continued for ten thousand years[,] the total growth factor would be 10^200.

There are roughly 10^57 atoms in our solar system, and about 10^70 atoms in our galaxy, which holds most of the mass within a million light years. So even if we had access to all the matter within a million light years, to grow by a factor of 10^200, each atom would on average have to support an economy equivalent to 10^140 people at today’s standard of living, or one person with a standard of living 10^140 times higher, or some mix of these.

Indeed, current growth rates would “only” have to continue for three thousand years before each atom in our galaxy would have to support an economy equivalent to a single person living at today’s living standard, which already seems rather implausible (not least because we can only access a tiny fraction of “all the matter within a million light years” in three thousand years). Hanson does not, however, expect the current growth rate to remain constant, but instead, based on the history of growth rates, expects a new growth mode where the world economy doubles within 15 days rather than 15 years:
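To make the arithmetic explicit (a rough sketch using round numbers of my own choosing: a 15-year doubling time, about 10^10 person-equivalents today, and about 10^70 atoms in the galaxy):

```python
import math

doubling_time_years = 15
people_equivalents_today = 1e10   # ~ten billion people at today's living standard
atoms_in_galaxy = 1e70

# Growth factor over 10,000 years of 15-year doublings (as a power of ten):
log10_growth = (10_000 / doubling_time_years) * math.log10(2)
print(f"Growth factor over 10,000 years: 10^{log10_growth:.1f}")  # roughly 10^200, as in the quote

# Years until the economy amounts to one person-equivalent per atom in the galaxy:
years = doubling_time_years * math.log2(atoms_in_galaxy / people_equivalents_today)
print(f"One person-equivalent per atom after ~{years:.0f} years")  # ~3,000 years
```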

If a new growth transition were to be similar to the last few, in terms of the number of doublings and the increase in the growth rate, then the remarkable consistency in the previous transitions allows a remarkably precise prediction. A new growth mode should arise sometime within about the next seven industry mode doublings (i.e., the next seventy years) and give a new wealth doubling time of between seven and sixteen days.

And given this more than a hundred times greater growth rate, the net growth that would take 10,000 years to accomplish given our current growth rate (cf. Hanson’s calculation above) would now take less than a century to reach, while growth otherwise requiring 3,000 years would require less than 30 years. So if Hanson is right, and we will see such a shift within the next seventy years, what seems to follow is that we will reach the limits of economic growth, or at least reach near-zero growth rates, within a century or two. Such a projection is also consistent with the physically derived limits of the continuation of Moore’s law; not that economic growth and Moore’s law are remotely the same, yet they are no doubt closely connected: economic growth is largely powered by technological progress, of which Moore’s law has been a considerable subset in recent times.
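To see this explicitly (a sketch assuming a flat 15-day doubling time, within Hanson’s projected seven-to-sixteen-day range), the same number of doublings now takes decades rather than millennia:

```python
# The same milestones at a hypothetical 15-day doubling time.
doublings_for_10k_years_of_growth = 10_000 / 15   # ~667 doublings
doublings_for_3k_years_of_growth = 3_000 / 15     # 200 doublings
days_per_doubling = 15

print(f"~{doublings_for_10k_years_of_growth * days_per_doubling / 365:.0f} years")  # ~27 years
print(f"~{doublings_for_3k_years_of_growth * days_per_doubling / 365:.0f} years")   # ~8 years
```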

The conclusion we reach by projecting past growth trends in computing power, energy, and the economy is the same: our current growth rates cannot go on forever. In fact, they will have to decline to near-zero levels very soon on a cosmic timescale. Given the physical limits to computation, and hence, ultimately, to economic growth, we can conclude that we must be close to the point where peak relative growth in our economy and our ability to process information occurs — that is, the point where this growth rate is the highest in the entire history of our civilization, past and future.

Peak growth might lie in the past

This is not, however, to say that this point of maximum relative growth necessarily lies in the future. Indeed, in light of the declining economic growth rates we have seen over the last few decades, it cannot be ruled out that we are now already past the point of “peak economic growth” in the history of our civilization, with the highest growth rates having occurred around 1960-1980, cf. these declining growth rates and this essay by physicist Theodore Modis. This is not to say that we most likely are, yet it seems that the probability that we are is non-trivial.

A relevant data point here is that the global economy has seen three doublings since 1965, when the annual growth rate was around six percent; yet today’s annual growth rate, at around 3 percent, is only a little over half of what it was three doublings ago, and has remained stably below that earlier level. In the entire history of economic growth, this seems unprecedented, suggesting that we may already be on the other side of the highest growth rates we will ever see. For up until this point, three successive doublings of the economy have, rare fluctuations aside, led to an increase in the annual growth rate.

And this “past peak growth” hypothesis looks even stronger if we look at 1955, when the growth rate was a little less than six percent and the world product stood at 5,430 billion 1990 U.S. dollars, which, doubled four times, gives just under 87,000 billion — about where we should expect today’s world product to be. Yet throughout the history of our economic development, four doublings have meant a clear increase in the annual growth rate, at least in terms of the underlying trend, not a stable decrease of almost 50 percent. This tentatively suggests that we should not expect growth rates significantly higher than those of today to be sustained in the future.
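As a trivial check of this figure (using the quoted 1955 world product):

```python
# Four doublings of the 1955 world product.
world_product_1955 = 5_430          # billions of 1990 U.S. dollars
print(world_product_1955 * 2 ** 4)  # 86,880 billion, i.e. just under 87,000 billion
```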

Could we be past peak growth in science and technology?

That peak growth lies in the past may also be true of technological progress in particular, or at least of many forms of technological progress, including the progress in computing power tracked by Moore’s law, where the growth rate appears to have been highest around 1990-2005 and to have since been in decline, cf. this article and the first graphs found here and here. Similarly, various sources of data and proxies tracking the number of scientific articles published and references cited over time also suggest that we could be past peak growth in science as well, at least in many fields when evaluated by such metrics, with peak growth seeming to have been reached around 2000-2010.

Yet again, these numbers — those tracking economic, technological, and scientific progress — are of course closely connected, as growth in each of these respects contributes to, and is even part of, growth in the others. Indeed, one study found the doubling time of the total number of scientific articles in recent decades to be 15 years, corresponding to an annual growth rate of 4.7 percent, strikingly similar to the growth rate of the global economy in recent decades. Thus, declining growth rates in our economy, technology, and science cannot be considered wholly independent sources of evidence that growth rates are now declining for good. We can by no means rule out that growth rates might increase in all these areas in the future — although, as we saw above with respect to the limits of Moore’s law and economic progress, such an increase, if it is going to happen, must be imminent if current growth rates remain relatively stable.

Might recent trends make us bias-prone?

How might it be relevant that we may now be past peak economic growth? Could it mean that our expectations for the future are likely to be biased? Looking back toward the 1960s might be instructive in this regard. When we look at our economic history up until the 1960s, it is not so strange that people made many unrealistic predictions about the future around this period. Not only might it have appeared natural to project the high growth rate of the time forward into the future, which would have led to today’s global GDP being more than twice what it is; it might also have seemed reasonable to predict that growth rates would keep rising even further. After all, that was what they had been doing consistently up until that point, so why should they not continue to do so in the following decades, resulting in flying cars and conversing robots by the year 2000? Such expectations were not that unreasonable given the preceding economic trends.

The question is whether we might be similarly overoptimistic about future economic progress today given recent, possibly unique, growth trends, specifically the unprecedented increase in absolute annual growth that we have seen over the past two decades. The same may apply to the trends in scientific and technological progress cited above, where peak growth in many areas appears to have happened in the period 1990-2010, meaning that we could now be at a point where we are disposed to being overoptimistic about further progress.

Yet, again, it is highly uncertain at this point whether growth rates, of the economy in general and of progress in technology and science in particular, will increase again in the future. Future economic growth may not conform well to the model with roughly symmetric growth rates around the 1960s, although the model certainly deserves some weight. All we can say for sure is that growth rates must become near-zero relatively soon. What the path toward that point will look like remains an open question. We could well be in the midst of a temporary decline in growth rates that will be followed by growth rates significantly greater than those of the 1960s, cf. the new growth mode envisioned by Robin Hanson.2

Implications: This is an extremely special time

Applying the mediocrity principle, we should not expect to live in a highly exceptional time. Yet, in light of the facts about the ultimate limits to growth seen above, it is clear that we do: we are living during the childhood of civilization, where there is still rapid growth, at the pace of doublings within a couple of decades. If civilization persists with similar growth rates, it will soon become a grown-up with near-zero relative growth. And it will then look back at our time — today plus or minus a couple of centuries, most likely — as the one where growth rates were by far the highest in its entire history, which may be more than a trillion years.

It seems that a few things follow from this. First, more than just being the time where growth rates are the highest, this may also, for that very reason, be the time where individuals can influence the future of civilization more than at any other time. In other words, this may be the time where the outcome of the future is most sensitive to small changes, as it seems plausible, although far from clear, that small changes in the trajectory of civilization are most significant when growth rates are highest. An apt analogy might be a psychedelic balloon with fluctuating patterns on its surface, where the fluctuations that happen to occur while we blow up the balloon will themselves be blown up and leave their mark in a way that fluctuations occurring before and after this critical growth period will not (just as quantum fluctuations in the early universe were blown up during cosmic expansion, and thereby in large part determined the large-scale structure of the universe today). Similarly, it seems much more difficult to cause changes across all of civilization once it spans countless star systems than it is today.

That being said, it is not obvious that small changes — in our actions, say — are more significant in this period, where growth rates are many orders of magnitude higher than at any other time. It could also be that such changes are more consequential when absolute growth is the highest. Or perhaps when it is smallest, at least as we go backwards in time, since there were far fewer people back when growth rates were orders of magnitude lower than today, and hence any given individual comprised a much greater fraction of all individuals than an individual does today.

Still, we may well find ourselves in a period where we are uniquely positioned to make irreversible changes that will echo down throughout the entire future of civilization.3 To the extent that we are, this should arguably lead us to update toward trying to influence the far future rather than the near future. More than that, if it does hold true that the time where the greatest growth rates occur is indeed the time where small changes are most consequential, this suggests that we should increase our credence in the simulation hypothesis. For if realistic sentient simulations of the past become feasible at some point, the period where the future trajectory of civilization seems the most up for grabs would seem an especially relevant one to simulate and learn more about. However, one can also argue that the sheer historical uniqueness of our current growth rates alone, regardless of whether this is a time where the fate of our civilization is especially volatile, should lead us to increase this credence, as such uniqueness may make it a more interesting time to simulate, and because being in a special time in general should lead us to increase our credence in the simulation hypothesis (see for instance this talk for a case for why being in a special time makes the simulation hypothesis more likely).4

On the other hand, one could also argue that imminent near-zero growth rates, along with the weak indications that we may now be past peak growth in many respects, provide a reason to lower our credence in the simulation hypothesis, as these observations suggest that the ceiling for what will be feasible in the future may be lower than we naively expect in light of today’s high growth rates. And thus, one could argue, it should make us more skeptical of the central premise of the simulation hypothesis: that there will be (many) ancestor simulations in the future. To me, the consideration in favor of increased credence seems stronger, although it does not significantly move my overall credence in the hypothesis, as there are countless other factors to consider.5


Appendix: Questioning our assumptions

Caspar Oesterheld pointed out to me that it might be worth meditating on how confident we can be in these conclusions given that apparently solid predictions concerning the ultimate limits to growth have been made before, yet quite a few of these turned out to be wrong. Should we not be open to the possibility that the same might be true of (at least some of) the limits we reviewed in the beginning of this essay?

Could our understanding of physics be wrong?

One crucial difference to note is that these failed predictions were based on a set of assumptions — e.g. about the amount of natural resources and food that would be available — that seem far more questionable than the assumptions that go into the physics-based predictions we have reviewed here: that our apparently well-established physical laws and measurements indeed are valid, or at least roughly so. The epistemic status of this assumption seems a lot more solid, to put it mildly. So there does seem to be a crucial difference here. This is not to say, however, that we should not maintain some degree of doubt as to whether this assumption is correct (I would argue that we always should). It just seems that this degree of doubt should be quite low.

Yet, to continue the analogy above, what went wrong with the aforementioned predictions was not so much that limits did not exist, but rather that humans found ways of circumventing them through innovation. Could the same perhaps be the case here? Could we perhaps some day find ways of deriving energy from dark energy or some other yet unknown source, even though physicists seem skeptical? Or could we, as Ray Kurzweil speculates, access more matter and energy by finding ways of travelling faster than light, or by finding ways of accessing other parts of our notional multiverse? Might we even become able to create entirely new ones? Or to eventually rewrite the laws of nature as we please? (Perhaps by manipulating our notional simulators?) Again, I do not think any of these possibilities can be ruled out completely. Indeed, some physicists argue that the creation of new pocket universes might be possible, not in spite of “known” physical principles (or rather theories that most physicists seem to believe, such as inflationary theory), but as a consequence of them. However, it is not clear that anything from our world would be able to expand into, or derive anything from, the newly created worlds on any of these models (which of course does not mean that we should not worry about the emergence of such worlds, or the fate of other “worlds” that we perhaps could access).

All in all, the speculative possibilities raised above seem unlikely, yet they cannot be ruled out for sure. The limits we have reviewed here thus represent a best estimate given our current, admittedly incomplete, understanding of the universe in which we find ourselves, not an absolute guarantee. However, it should be noted that this uncertainty cuts both ways, in that the estimates we have reviewed could also overestimate the limits to various forms of growth by countless orders of magnitude.

Might our economic reasoning be wrong?

Less speculatively, I think, one can also question the validity of our considerations about the limits of economic progress. I argued that it seems implausible that we in three thousand years could have an economy so big that each atom in our galaxy would have to support an economy equivalent to a single person living at today’s living standard. Yet could one not argue that the size of the economy need not depend on matter in this direct way, and that it might instead depend on the possible representations that can be instantiated in matter? If economic value could be mediated by the possible permutations of matter, our argument about a single atom’s need to support entire economies might not have the force it appears to have. For instance, there are far more legal positions on a Go board than there are atoms in the visible universe, and that’s just legal positions on a Go board. Perhaps we need to be more careful when thinking about how atoms might be able to create and represent economic value?

It seems like there is a decent point here. Still, I think economic growth at current rates is doomed. First, it seems reasonable to be highly skeptical of the notion that mere potential states could have any real economic value. Today at least, what we value and pay for is not such “permutation potential”, but the actual state of things, which is as true of the digital realm as of the physical. We buy and stream digital files such as songs and movies because of the actual states of these files, while their potential states mean nothing to us. And even when we invest in something we think has great potential, like a start-up, the value we expect to be realized is still ultimately one that derives from its actual state, namely the actual state we hope it will assume, not its number of theoretically possible permutations.

It is not clear why this would change, or how it could. After all, the number of ways one can put all the atoms in the galaxy together is the same today as it will be ten thousand years from now. Organizing all these atoms into a single galactic supercomputer would only seem to increase the value of their actual state.

Second, economic growth still seems tightly constrained by the shackles of physical limitations. For it seems inescapable that economies, of any kind, are ultimately dependent on the transfer of resources, whether these take the form of information or concrete atoms. And such transfers require access to energy, the growth of which we know to be constrained, as is true of the growth of our ability to process information. As these underlying resources that constitute the lifeblood of any economy stop growing, it seems unlikely that the economy can avoid this fate as well. (Tom Murphy touches on similar questions in his analysis of the limits to economic growth.)

Again, we of course cannot exclude that something crucial might be missing from these considerations. Yet the conclusion that economic growth rates will decline to near-zero levels relatively soon, on a cosmic timescale at least, still seems a safe bet in my view.

Acknowledgments

I would like to thank Brian Tomasik, Caspar Oesterheld, Duncan Wilson, Kaj Sotala, Lukas Gloor, Magnus Dam, Max Daniel, and Tobias Baumann for valuable comments and inputs. This essay was originally published at the website of the Foundational Research Institute, now the Center on Long-Term Risk. 


Notes

1. One may wonder whether there might not be more efficient ways to derive energy from the non-stellar matter in our galaxy than to convert it into stars as we know them. I don’t know, yet a friend of mine who does research in plasma physics and fusion says that he does not think so, especially if we, as we have done here, disregard the energy required to clump the dispersed matter together so as to “build” the star, a process that may well take more energy than the star can eventually deliver.

The aforementioned paper by Lawrence Krauss and Glenn Starkman also contains much information about the limits of energy use, and in fact uses accessible energy as the limiting factor that bounds the amount of information processing any (local) civilization could do (they assume that the energy that is harvested is beamed back to a “central observer”).

2. It should be noted, though, that Hanson by no means rules out the possibility that such a growth mode will never occur, and that we might already be past, or in the midst of, peak economic growth: “[…] it is certainly possible that the economy is approaching fundamental limits to economic growth rates or levels, so that no faster modes are possible […]”

3. The degree to which there is sensitivity to changes of course varies between different endeavors. For instance, natural science seems more convergent than moral philosophy, and thus its development is arguably less sensitive to the particular ideas of individuals working on it than the development of moral philosophy is.

4. One may then argue that this should lead us to update toward focusing more on the near future. This may be true. Yet should we update more toward focusing on the far future given our ostensibly unique position to influence it? Or should we update more toward focusing on the near future given increased credence in the simulation hypothesis? (Provided that we indeed do increase this credence, cf. the counter-consideration above.) In short, it mostly depends on the specific probabilities we assign to these possibilities. I myself happen to think the far future should dominate, as I assign the simulation hypothesis (as commonly conceived) a very small probability.

5. For instance: fundamental epistemological issues concerning how much one can infer, based on impressions from a simulated world (which may consist only of your single mind), about a simulating one (e.g. do notions such as “time” and “memory” correspond to anything, or even make sense, in such a “world”?); the fact that the past cannot be simulated realistically, since we can only ever have incomplete information about any given physical state in the past (not only because we have no way to uncover all the relevant information, but also because we could not possibly represent it all even if we somehow could access it — for instance, we cannot faithfully represent the state of every atom in our solar system at any point in the past, as this would require too much information), and a simulation of the past that contains incomplete information would depart radically from how the actual past unfolded, since all of that information has a non-negligible causal impact (even single photons, it appears, are detectable by the human eye), especially given that the vast majority of the information would have to be excluded (due both to practical constraints on what can be recovered and on what can be represented); and whether conscious minds can exist on different levels of abstraction; etc.

Free Will: Emphasizing Possibilities

I suspect that a key issue in discussions and worries about (the absence of) “free will” is the issue of possibilities. I also think it is a major source of confusion. Different people are talking about possibilities in different senses without being clear about it, which leads them to talk past each other, and perhaps even to confuse and dispirit laypeople by making them feel they have no possibilities in any sense whatsoever.

Different Emphases

Thinkers who take different positions on free will tend to emphasize different things. One camp tends to say “we don’t have free will, since our actions emerge from prior causes that are ultimately beyond our own control”.

Another camp, so-called compatibilists, will tend to agree with this point about prior causes, but they choose to emphasize possibilities: “complex agents can act within a range of possibilities in a way crude objects like rocks cannot, and such agents truly do weigh and choose between these options”.

In essence, what I think the latter camp is emphasizing is the fact that, when we make decisions, we have ex-ante possibilities: a range of options we can choose from in expectation. For example, in a game of chess, your ex-ante possibilities are the set of moves allowed by the rules of the game. And since compatibilists tend to define free will roughly as the ability to make choices among such ex-ante possibilities, they conclude that we indeed do have free will.

I doubt that any philosopher arguing against the existence of free will would deny that we have ex-ante possibilities. After all, we all conceive of various possibilities in our minds that we weigh and choose between, and we arguably cannot talk meaningfully about ethics, or choices in general, without such a framework of ex-ante possibilities.

Given the apparent agreement on these two core points — (1) our actions emerge from prior causes, and (2) we have ex-ante possibilities — the difference between the two camps mostly appears to lie in how they define the term “free will” and whether they prefer to mostly emphasize point (1) or (2).

The “Right” Definition of Free Will

People in these two camps will often insist that their definition of free will is the one that matches what most people mean by free will. It seems to me that both camps are partly right and partly wrong about this. I think it is misguided to believe that most people have anything close to a clear definition of free will in their minds, as opposed to having a jumbled network of associations that relate to a wide range of notions, including notions of being unconstrained by prior causes and notions of ex-ante possibilities.

Indeed, experimental philosophy seems to paint a nuanced picture of people’s intuitions and conceptions of “free will”, and it reveals these conceptions to be quite unclear and conflicting, as one would expect.

Emphasizing Both

I believe that the two distinct emphases outlined above are both important yet insufficient on their own.

The emphasis on prior causes is important for understanding the nature of our choices and actions. In particular, it helps us understand that our choices do not represent a break with physical mechanism, but that they are indeed the product of complex mechanisms of this kind (which include the mechanisms of our knowledge and intentions, as well as the mechanism of weighing various ex-ante possibilities).

This emphasis may help free us from certain bad ideas about human choices, such as naive ideas about how anyone can always pull themselves up by their bootstraps. It may also help us construct better incentives and institutions based on an actual understanding of how we make choices. Lastly, it may help us become more compassionate and understanding toward others, such as by reminding us that we cannot reasonably expect people to act on knowledge that they do not possess.

Likewise, emphasizing our ex-ante possibilities is important for our ability to make good decisions. If we mistakenly believe that we have absolutely no possibilities to choose from, we will likely create highly sub-optimal outcomes, whether it be in a game of chess or a major life decision. Aiming to choose the ex-ante possibility that seems best in expectation is crucial for us to make good choices. Indeed, this is arguably what good decision-making is all about.
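To make this concrete, here is a minimal sketch, in Python, of what choosing the ex-ante possibility that seems best in expectation amounts to in the simplest terms; the options and numbers are purely hypothetical stand-ins for illustration, not anything drawn from the essay itself.

# A minimal sketch (with made-up options and numbers) of choosing the
# ex-ante possibility that seems best in expectation.

options = {
    "option_a": [(0.9, 10), (0.1, -5)],    # (probability, value) pairs, purely illustrative
    "option_b": [(0.5, 30), (0.5, -40)],
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

best = max(options, key=lambda o: expected_value(options[o]))
print(best)  # "option_a" (expected value 8.5, versus -5.0 for "option_b")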

Moreover, an emphasis on ex-ante possibilities may help instill in us the healthy and realistic versions of bootstrap-pulling attitudes, namely that hard work and dedication are often worthwhile, and that they truly can lead us in better directions.

Both Emphases Have Pitfalls in Isolation

Our minds intuitively draw inferences and make associations based on the things we hear. When it comes to “free will”, I suspect most of us have quite leaky conceptual networks, in that the distinct clusters of sentiments that we intuitively tie to the term “free will” readily cross-pollute each other.

So when someone says “we don’t have any non-physical free will”, some people might mistakenly interpret this as implying “we don’t have ex-ante possibilities, and hence we cannot meaningfully think in terms of alternative future possibilities”. This might in turn lead to bad decisions and feelings of disempowerment. It may also lead some people to think that it makes no sense to punish bad behavior, or that we cannot meaningfully say things like “you really should have made a better choice”. Yet these things do make sense. They create incentives by making a promise for the future — “people who act like this will pay a price” — which in turn nudges people toward some of their ex-ante possibilities over others.

Likewise, a naive emphasis on the causal origins of our actions may incline people to think that certain feelings — such as pride, regret, and hatred — are always unreasonable and should never be entertained. Yet this does not follow either. These feelings may have great utility in certain circumstances, even if such circumstances might be rare.

Another source of confusion is to say that our causal nature implies that everything is just a matter of luck. Although this is perhaps true in some ultimate sense, in another sense — the everyday sense that distinguishes between things won through hard effort versus dumb luck — everything is obviously not just a matter of luck. I suspect that we can easily confuse these very different notions of “luck” at an intuitive level. Consequently, unreserved claims about everything being a matter of luck risk having unfortunate effects, such as leading us to downplay the value of effort.

Similar pitfalls exist for the claim “you could not have done otherwise”. What we often mean by this claim is that “this event would have happened even if you had done things differently”. In other words: the environment constrained you, and your efforts were immaterial. This is very different from saying, for example, “you could not have done otherwise because your deepest values compelled you” — meaning: the environment may well have allowed alternative possibilities, but your core values did not. The latter is often true of our actions, yet it is in many ways the opposite of the environment-constrained sense of “you could not have done otherwise”.

Hence, confusion is likely to emerge if someone simply declares “you could not have done otherwise” about all actions without qualification, since it risks obscuring the important distinction between constraints posed by our values versus constraints posed by our environment. Moreover, it may obscure the fact that, in many of our past choices, we did indeed have possibilities other than the ones we chose, in the sense of alternative possibilities afforded by our environment. Failing to acknowledge such possibilities in our past choices may well be detrimental to our future choices, as it might keep us acting in needlessly limited and habit-bound ways due to false ideas about which paths are open to us.

Conversely, there are also pitfalls in the opposite direction. For example, when someone says “we have ex-ante possibilities, and such possibilities play a crucial role in our decision-making”, some people might mistakenly interpret this as implying “our actions are independent of prior causes, and this is crucial for our decision-making”. This may in turn lead to the above-mentioned mistakes that the prior-causes emphasis can help us avoid, such as misunderstanding our physical nature and entertaining unreasonable ideas about how we can expect people to act.

Conclusion

In sum, we have good reasons to be careful in our communication about “free will”, and to clearly flag these non sequiturs. “Our actions emerge from prior causes” does not mean “we have no ex-ante possibilities”, and “we have ex-ante possibilities” does not imply “we are independent of prior causes”. Navigating reality effectively requires that we integrate an understanding of prior causes with a pragmatic focus on our ex-ante possibilities.


Acknowledgments: Thanks to Mikkel Vinding for comments.

On Insects and Lexicality

“Their experiences may be more simple than ours, but are they less intense? Perhaps a caterpillar’s primitive pain when squashed is greater than our more sophisticated sufferings.”

— Richard Ryder, Painism: A Modern Morality, p. 64.

Many people, myself included, find it plausible that suffering of a certain intensity, such as torture, carries greater moral significance than any amount of mild suffering. One may be tempted to think that views of this kind imply we should primarily prioritize the beings most likely to experience these “lexically worse” states of suffering (LWS) — presumably beings with large brains.* By extension, one may think such views will generally imply little priority to beings with small, less complex brains, such as insects. (Which is probably also a view we would intuitively like to embrace, given the inconvenience of the alternative.) 

Yet while perhaps intuitive, I do not think this conclusion follows. The main argument against it, in my view, is that we should maintain a non-trivial probability that beings with small brains, such as insects, indeed can experience LWS (regardless of how we define these states). After all, on what grounds can we confidently maintain they cannot?

And if we then assume an expected value framework, and multiply the large number of insects by a non-trivial probability of them being able to experience LWS, we find that, in terms of presently existing beings, the largest amount of LWS in expectation may well be found in small beings such as insects.
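To illustrate the arithmetic, here is a minimal sketch, in Python, of the expected-value comparison; the figures are purely illustrative assumptions (a rough order-of-magnitude guess at the number of insects alive and a stand-in value for a “non-trivial” probability), not empirical claims.

# A minimal sketch of the expected-value comparison, using hypothetical numbers.

n_insects = 1e18       # rough order-of-magnitude guess at insects alive at any time
n_humans = 8e9         # approximate current human population
p_insect_lws = 0.01    # stand-in for a "non-trivial" probability that an insect can experience LWS
p_human_lws = 1.0      # assume, for simplicity, that humans clearly can

expected_insect_lws = n_insects * p_insect_lws   # 1e16 beings in expectation
expected_human_lws = n_humans * p_human_lws      # 8e9 beings in expectation

# Even with a small per-insect probability, the expected number of beings able
# to experience LWS is dominated by insects by several orders of magnitude.
print(expected_insect_lws / expected_human_lws)  # ~1.25e6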


* It should be noted in this context, though, that many humans ostensibly cannot feel (at least physical) pain, whereas many beings with smaller brains show every sign of having this capacity. This suggests that brain size is a poor proxy for the ability to experience pain, let alone the ability to experience LWS, and that genetic variation in certain pain-modulating genes may well be a more important factor.


More literature

On insects:

The Importance of Insect Suffering
Reducing Suffering Amongst Invertebrates Such As Insects
Do Bugs Feel Pain?
How to Avoid Hurting Insects
The Moral Importance of Invertebrates Such as Insects

On lexicality:

Value Lexicality
Clarifying lexical thresholds
Many-valued logic as a reply to sequence arguments in value theory
Lexicality between mild discomfort and unbearable suffering: A variety of possible views
Lexical priority to extreme suffering — in practice

Physics Is Also Qualia

In this post, I seek to clarify what I consider to be some common confusions about consciousness and “physics” stemming from a failure to distinguish clearly between ontological and epistemological senses of “physics”.

Clarifying Terms

Two senses of the word “physics” are worth distinguishing. There is physics in an ontological sense: roughly speaking, the spatio-temporal(-seeming) world that in many ways conforms well to our best physical theories. And then there is physics in an epistemological sense: a certain class of models we have of this world, the science of physics.

“Physics” in this latter, epistemological sense can be further divided into 1) the physical models we have in our minds, versus 2) the models we have external to our minds, such as in our physics textbooks and computer simulations. Yet it is worth noting that, to the extent we ourselves have any knowledge of the models in our books and simulations, we only have this knowledge by representing it in our minds. Thus, ultimately, all the knowledge of physical models we have, as subjects, is knowledge of the first kind: as appearances in our minds.*

In light of these very different senses of the term “physics”, it is clear that the claim that “physics is also qualia” can be understood in two very different ways: 1) in the sense that the physical world, in the ontological sense, is qualia, or “phenomenal”, and 2) that our models of physics are qualia, i.e. that our models of physics are certain patterns of consciousness. The first of these two claims is surely the most controversial one, and I shall not defend it here; I explore it here and here.

Instead, I shall here focus on the latter claim. My aim is not really to defend it, as I already briefly did that above: all the knowledge of physics we have, as subjects, ultimately appears as experiential patterns in our minds. (Although talk of the phenomenology of, say, operations in Hilbert spaces admittedly is rare.) I take this to be obvious, and hit an impasse with anyone who disagrees. My aim here is rather to clarify some confusions that arise due to a lack of clarity about this, and due to conflations of the two senses of “physics” described above.

The Problem of Reduction: Epistemological or Ontological?

I find it worth quoting the following excerpt from a Big Think interview with Sam Harris. Not because there is anything atypical about what Harris says, but rather because I think he here clearly illustrates the prevailing lack of clarity about the distinction between epistemology and ontology in relation to “the physical”.

If there’s an experiential internal qualitative dimension to any physical system then that is consciousness. And we can’t reduce the experiential side to talk of information processing and neurotransmitters and states of the brain […]. Someone like Francis Crick said famously you’re nothing but a pack of neurons. And that misses the fact that half of the reality we’re talking about is the qualitative experiential side. So when you’re trying to study human consciousness, for instance, by looking at states of the brain, all you can do is correlate experiential changes with changes in brain states. But no matter how tight these correlations become that never gives you license to throw out the first person experiential side. That would be analogous to saying that if you just flipped a coin long enough you would realize it had only one side. And now it’s true you can be committed to talking about just one side. You can say that heads being up is just a case of tails being down. But that doesn’t actually reduce one side of reality to the other.

Especially worth resting on here is the statement “half of the reality we’re talking about is the qualitative experiential side.” Yet is this “half of reality” an “ontological half” or an “epistemological half”? That is, is there a half of reality out there that is part phenomenal, and part “non-phenomenal” — perhaps “inertly physical”? Or are we rather talking about two different phenomenal descriptions of the same thing, respectively 1) physico-mathematical models of the mind-brain (and these models, again, are also qualia, i.e. patterns of consciousness), and 2) all other phenomenal descriptions, i.e. those drawing on the countless other experiential modalities we can currently conceive of — emotions, sounds, colors, etc. — as well as those we can’t? I suggest we are really talking about two different descriptions of the same thing.

A similar question can be raised in relation to Harris’ claim that we cannot “reduce one side of reality to the other.” Is the reduction in question, or rather failure of reduction, an ontological or an epistemological one? If it is ontological, then it is unclear what this means. Is it that one side of reality cannot “be” the other? This does not appear to be Harris’ view, even if he does tacitly buy into ontologically distinct sides (as opposed to descriptions) of reality in the first place.

Yet if the failure of reduction is epistemological, then there is in fact little unusual about it, as failures of epistemological reduction — of reduction from one model to another — are found everywhere in science. In the abstract sciences, for example, one axiomatic system does not necessarily reduce to another; indeed, we can readily create different axiomatic systems that not only fail to reduce to each other but actively contradict each other. Hence we cannot derive all of mathematics, broadly construed, from a single axiomatic system.

Similarly, in the empirical sciences, economics does not “reduce to” quantum physics. One may object that economics does reduce to quantum physics in principle, yet it should then be noted that 1) the term “in principle” does an enormous amount of work here, arguably about as much as it would have to do in the claim that “quantum physics can explain consciousness in principle” — after all, physics and economics invoke very different models and experiential modalities (economic theories are often qualitative in nature, and some prominent economists have even argued they are primarily so). And 2) a serious case can be made against the claim that even all the basic laws found in chemistry, the closest neighbor of physics, can be derived from fundamental physical theories, even in principle (see e.g. Berofsky, 2012, chap. 8). This case does not rest on there being something mysterious going on between our transition from theories of physics to theories of chemistry, nor that new fundamental forces are implicated, but merely that our models in these respective fields contain elements not reducible, even in principle, to our models in other areas.

Thus, at the level of our minds, we can clearly construct many different mental models which we cannot reduce to each other, even in principle. Yet this merely says something about our models and epistemology. It hardly comprises a deep metaphysical mystery.

Denying the Reality of Consciousness

The fact that the world conforms, at least roughly, to description in “physical” terms seems to have led some people to deny that consciousness in general exists. Yet this, I submit, is a fallacy: the fact that we can model the world in one set of terms which describe certain of its properties does not imply that we cannot describe it in another set of terms that describe other properties truly there as well, even if we cannot derive one from the other.

By analogy, consider again physics and economics: we can take the exact same object of study — say, a human society — and describe aspects of it in physical terms (with models of thermodynamics, classical mechanics, electrodynamics, etc.), yet we cannot from any such description or set of descriptions meaningfully derive a description of the economics of this society. It would clearly be a fallacy to suggest that this implies facts of economics cannot exist.

Again, I think the confusion derives from conflating epistemology with ontology: “physics”, in the epistemological sense of “descriptions of the world in physico-mathematical terms”, appears to encompass “everything out there”, and hence, the reasoning goes, nothing else can exist out there. Of course, in one sense, this is true: if a description in physico-mathematical terms exhaustively describes everything out there, then there is indeed nothing more to be said about it — in physico-mathematical terms. Yet this says nothing about the properties of what is out there in other terms, as illustrated by the economics example above. (Another reason some people seem to deny the reality of consciousness, distinct from conflation of the epistemological and the ontological, is “denial due to fuzziness”, which I have addressed here.)

This relates, I think, to the fundamental Kantian insight on epistemology: we never experience the world “out there” directly, only our own models of it. And the fact that our physical model of the world — including, say, a physical model of the mind-brain of one’s best friend — does not entail other phenomenal modalities, such as emotions, by no means implies that the real, ontological object out there which our physical model reflects, such as our friend’s actual mind-brain, does not instantiate these things. That would be to confuse the map with the territory. (Our emotional model of our best friend does, of course, entail emotions, and it would be just as much of a fallacy to say that, since such emotional models say nothing about brains in physical terms, descriptions of the latter kind have no validity.)

Denials of this sort can have serious ethical consequences, not least since the most relevant aspects of consciousness, including suffering, fall outside descriptions of the world in purely physical terms. Thus, if we insist that only such physico-mathematical descriptions truly describe the world, we seem forced to conclude that suffering, along with everything else that plausibly has moral significance, does not truly exist. Which, in turn, can keep us from working toward a sophisticated understanding of these things, and from creating a better world accordingly.

 


* And for this reason, the answer to the question “how do you know you are conscious?” will ultimately be the same as the answer to the question “how do you know physics (i.e. physical models) exist?” — we experience these facts directly.
