Does digital or “traditional” sentience dominate in expectation?

My aim in this post is to critique two opposite positions that I think are both mistaken, or which at least tend to be endorsed with too much confidence.

The first position is that the vast majority of future sentient beings will, in expectation, be digital, meaning that they will be “implemented” in digital computers.

The second position is in some sense a rejection of the first one. Based on a skepticism of the possibility of digital sentience, this position holds that future sentience will not be artificial, but instead be “traditionally” biological — that is, most future sentient beings will, in expectation, be biological beings roughly as we know them today.

I think the main problem with this dichotomy of positions is that it leaves out a reasonable third option, which is that most future beings will be artificial but not necessarily digital.


Contents

  1. Reasons to doubt that digital sentience dominates in expectation
  2. Reasons to doubt that “traditional” biological sentience dominates in expectation
  3. Why does this matter?

Reasons to doubt that digital sentience dominates in expectation

One can roughly identify two classes of reasons to doubt that most future sentient beings will be digital.

First, there are object-level arguments against the possibility of digital sentience. For example, based on his physicalist view of consciousness, David Pearce argues that the discrete and disconnected bits of a digital computer cannot, if they remain discrete and disconnected, join together into a unified state of sentience. They can at most, Pearce argues, be “micro-experiential pixels”.

Second, regardless of whether one believes in the possibility of digital sentience, the future dominance of digital sentience can be doubted on the grounds that it is a fairly strong and specific claim. After all, even if digital sentience is perfectly possible, it by no means follows that future sentient beings will necessarily converge toward being digital.

In other words, the digital dominance position makes strong assumptions about the most prevalent forms of sentient computation in the future, and it seems that there is a fairly large space of possibilities that does not imply digital dominance, such as (a future predominance of) non-digital neuron-based computers, non-digital neuron-inspired computers, and various kinds of quantum computers that have yet to be invented.

When one takes these arguments into account, it at least seems quite uncertain whether digital sentience dominates in expectation, even if we grant that artificial sentience does.

Reasons to doubt that “traditional” biological sentience dominates in expectation

A reason to doubt that “traditional” sentience dominates is that, whatever one’s theory of sentience, it seems likely that sentience can be created artificially — i.e. in a way that we would deem artificial. (An example might be further developed and engineered versions of brain organoids.) Specifically, regardless of which physical processes or mechanisms we take to be critical to sentience, those processes or mechanisms can most likely be replicated in systems other than live biological animals as we know them.

If we combine this premise with an assumption of continued technological evolution (which likely holds true in the future scenarios that contain the largest numbers of sentient beings), it overall seems doubtful that the majority of future beings will, in expectation, be “traditional” biological organisms — especially when we consider the prospect of large futures that involve space colonization.

More broadly, we have reason to doubt the “traditional” biological dominance position for the same reason that we have reason to doubt the digital dominance position, namely that it entails a rather strong and specific claim, along the lines of: “this particular class of sentient being is most numerous in expectation”. And, as in the case of digital dominance, it seems that there are many plausible ways in which this could turn out to be wrong, such as due to neuron-inspired or other yet-to-be-invented artificial systems that could become both sentient and prevalent.

Why does this matter?

Whether artificial sentience dominates in expectation plausibly matters for our priorities (though it is unclear how much exactly, since some of our most robust strategies for reducing suffering are probably worth pursuing in roughly the same form regardless). Yet those who take artificial sentience seriously might adopt suboptimal priorities and communication strategies if they primarily focus on digital sentience in particular.

At the level of priorities, they might restrict their focus to an overly narrow set of potentially sentient systems, and perhaps neglect the great majority of future suffering as a result. At the level of communication, they might needlessly hamper their efforts to raise concern for artificial sentience by mostly framing the issue in terms of digital sentience. This framing might lead people who are skeptical of digital sentience to mistakenly dismiss the broader issue of artificial sentience.

Similar points apply to those who believe that “traditional” biological sentience dominates in expectation: they, too, might restrict their focus to an overly narrow set of systems, and thereby neglect to consider a wide range of scenarios that may intuitively seem like science fiction, yet which nevertheless deserve serious consideration on reflection (e.g. scenarios that involve a large-scale spread of suffering due to space colonization).

In summary, there are reasons to doubt both the digital dominance position and the “traditional” biological dominance position. Moreover, it seems that there is something to be gained by not using the narrow term “digital sentience” to refer to the broader category of “artificial sentience”, and by being clear about just how much broader this latter category is.

Consciousness: Orthogonal or Crucial?

The following is an excerpt from my book Reflections on Intelligence (2016/2024).


A question that is often considered open, sometimes even irrelevant, when it comes to “AGIs” and “superintelligences” is whether such entities would be conscious. Here is Nick Bostrom expressing such a sentiment:

By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue or what have you. It also leaves open whether the superintelligence is conscious and has subjective experiences. (Bostrom, 2012, “Definition of ‘superintelligence’”)

Yet the question of consciousness is hardly left open by this definition. If a system is smarter than the best human brains “in practically every field, including scientific creativity, general wisdom and social skills”, then the question of consciousness is highly relevant. Consciousness is integral to much of what we do and excel at, and thus if an entity is not conscious, it cannot outperform the best humans “in practically every field”, especially not in “general wisdom” and “scientific creativity”. Let us look at these in turn.

General Wisdom

A core aspect of “general wisdom” is to be wise about ethical issues. Yet being wise about ethical issues requires that one can consider and evaluate questions like the following in an informed manner:

  • Is there anything about the experience of suffering that makes its reduction a moral priority?
  • Does anything about the experience of suffering justify the claim that reducing suffering has greater moral priority than increasing happiness (for the already happy)?
  • Is there anything about states of extreme suffering that makes their reduction an overriding moral priority?

It seems that one would have to be conscious in order to explore and answer such questions in an informed way. That is, one would have to know what such experiences are like in order to understand their experiential properties and significance. Knowing what a term like “suffering” refers to — i.e. knowing what actual experiences of suffering are like — is thus crucial for informed ethical reflection.

The same point holds true of other areas of philosophy that bear on wisdom, such as the philosophy of mind: without knowing what it is like to have a conscious mind, one cannot contribute much to discussions of what it is like to have one, nor to the exploration of different modes of consciousness. Indeed, an unconscious entity has no genuine understanding of what the issue of consciousness is even about in the first place (Pearce, 2012a; 2012b).

So both in ethics and in the philosophy of mind, an unconscious entity would be less than clueless about many of the deepest questions at hand. If an entity not only fails to surpass humans in these areas, but fails to even have the slightest clue about what we are talking about, it hardly surpasses the best humans in practically every field. After all, questions about the phenomenology of consciousness are also relevant to many other fields, including psychology, epistemology, and ontology.

In short, experiencing and reasoning about consciousness is a key part of “human abilities”, and hence an entity that is unable to do this cannot be claimed to outperform humans in the most important, much less all, human abilities (see also Pearce, 2012a; 2012b).

Scientific Creativity

Another ability mentioned above at which an unconscious entity could supposedly outdo humans is scientific creativity. Yet scientific creativity must relate to all fields of knowledge, including the science of the conscious mind itself. This, too, is a part of the natural world, and a most relevant one at that.

Experiencing and accurately reporting what a given state of consciousness is like is essential for the science of mind, yet an unconscious entity obviously cannot do such a thing, as there is no experience it can report from. It cannot display any genuine scientific creativity, or even produce mere observations, in the direct exploration of consciousness.
