Conversation with David Pearce about digital sentience and the binding problem

Whether digital sentience is possible would seem to matter greatly for our priorities, and so gaining even slightly more refined views on this matter could be quite valuable. Many people appear to treat the possibility, if not indeed the imminence, of digital sentience as a foregone conclusion. David Pearce, in contrast, is skeptical.

Pearce has written and spoken extensively about his views on consciousness. My sense, however, is that these expositions do not always manage to clearly convey the core, and actually very simple, reasons underlying Pearce’s skepticism of digital sentience. The aim of this conversation is to probe Pearce so as to shed greater — or perhaps above all simpler — light on why he is skeptical, and thus hopefully to advance the discussion of this issue among altruists working to reduce future suffering.


MV: You are skeptical about the possibility of digital sentience. Could you explain why, in simple terms?

DP: Sure. Perhaps we can start by asking why so many people believe that our machines will become conscious (cf. https://www.hedweb.com/quora/2015.html#definition). Consciousness is widely recognised to be scientifically unexplained. But the computer metaphor of mind seems to offer us clues (cf. https://www.hedweb.com/quora/2015.html#braincomp). As far as I can tell, many if not most believers in digital sentience tend to reason along the following lines. Any well-defined cognitive task that the human mind can perform could also be performed by a programmable digital computer (cf. https://en.wikipedia.org/wiki/Turing_machine). A classical Turing machine is substrate-neutral. By “substrate-neutral”, we mean that whether a Turing machine is physically constituted of silicon or carbon or gallium oxide (etc) makes no functional difference to the execution of the program it runs. It’s commonly believed that the behaviour of a human brain can, in principle, be emulated on a classical Turing machine. Our conscious minds must be identical with states of the brain. If our minds weren’t identical with brain states, then dualism would be true (cf. https://www.hedweb.com/quora/2015.html#dualidealmat). Therefore, the behaviour of our minds can in principle be emulated by a digital computer. Moreover, the state-space of all possible minds is immense, embracing not just the consciousness of traditional and enhanced biological lifeforms, but also artificial digital minds and maybe digital superintelligence. Accordingly, the belief that non-biological information-processing machines can’t support consciousness is arbitrary. It’s unjustified carbon chauvinism.

I think most believers in digital sentience would recognise that the above considerations are not a rigorous argument for the existence of inorganic machine consciousness. The existence of machine consciousness hasn’t been derived from first principles. The “explanatory gap” is still unbridged. Yet what is the alternative?

Well, as a scientific rationalist, I’m an unbeliever. Digital computers and the software they run are not phenomenally-bound subjects of experience (cf. https://www.binding-problem.com/). Ascribing sentience to digital computers or silicon robots is, I believe, a form of anthropomorphic projection — a projection their designers encourage by giving their creations cutesy names (“Watson”, “Sophia”, “Alexa” etc). 

Before explaining my reasons for believing that digital computers are zombies, I will lay out two background assumptions. Naturally, one or both assumptions can be challenged, though I think they are well-motivated.

The first background assumption might seem scarcely relevant to your question. Perceptual direct realism is false (cf. https://www.hedweb.com/quora/2015.html#distort). Inferential realism about the external world is true. The subjective contents of your consciousness aren’t merely a phenomenally thin and subtle serial stream of logico-linguistic thought-episodes playing out behind your forehead, residual after-images when you close your eyes, inner feelings and emotions, and so forth. Consciousness is also your entire phenomenal world-simulation — what naïve realists call the publicly accessible external world. Unless you have a neurological syndrome such as simultanagnosia (the inability to experience more than one object at once) or akinetopsia (“motion blindness”), you can simultaneously experience a host of dynamic objects — for example, multiple players on a football pitch, or a pride of hungry lions. These perceptual objects populate your virtual world of experience from the sky above to your body-image below. Consciousness is all you directly know. The external environment is an inference, not a given.

Let’s for now postpone discussion of how our skull-bound minds are capable of such an extraordinary feat of real-time virtual world-making. The point is that if you couldn’t experience multiple feature-bound phenomenal objects — i.e. if you were just an aggregate of 86 billion membrane-bound neuronal “pixels” of experience — then you’d be helpless. Compare dreamless sleep. Like your enteric nervous system (the “brain-in-the-gut”), your mind-brain would still be a fabulously complex information-processing system. But you’d risk starving to death or getting eaten. Waking consciousness is immensely adaptive (cf. https://www.hedweb.com/quora/2015.html#evolutionary). Phenomenal binding is immensely adaptive (cf. https://www.hedweb.com/quora/2015.html#purposecon).

My second assumption is physicalism (cf. https://www.hedweb.com/quora/2015.html#materialism). I assume the unity of science. All the special sciences (chemistry, molecular biology etc) reduce to physics. In principle, the behaviour of organic macromolecules such as self-replicating DNA can be described entirely in the mathematical language of physics without mentioning “life” at all, though such high-level description is convenient. Complications aside, no “element of reality” is missing from the mathematical formalism of our best theory of the world, quantum mechanics, or more strictly from tomorrow’s unification of quantum field theory and general relativity.

One corollary of physicalism is that only “weak” emergence is permissible. “Strong” emergence is forbidden. Just as the behaviour of programs running on your PC supervenes on the behaviour of its machine code, likewise the behaviour of biological organisms can in principle be exhaustively reduced to quantum chemistry and thus ultimately to quantum field theory. The conceptual framework of physicalism is traditionally associated with materialism. According to materialism as broadly defined, the intrinsic nature of the physical — more poetically, the mysterious “fire” in the equations — is non-experiential. Indeed, the assumption that quantum field theory describes fields of insentience is normally treated as too trivially obvious to be worth stating explicitly. However, this assumption of insentience leads to the Hard Problem of consciousness. Non-materialist physicalism (cf. https://www.hedweb.com/quora/2015.html#galileoserror) drops this plausible metaphysical assumption. If the intrinsic nature argument is sound, there is no Hard Problem of consciousness: it’s the intrinsic nature of the physical (cf. https://www.hedweb.com/quora/2015.html#definephysical ). However, both “materialist” physicalists and non-materialist physicalists agree: everything that happens in the world is constrained by the mathematical straitjacket of modern physics. Any supposedly “emergent” phenomenon must be derived, ultimately, from physics. Irreducible “strong” emergence would be akin to magic.

Anyhow, the reason I don’t believe in digital minds is that classical computers are, on the premises outlined above, incapable of phenomenal binding. If we make the standard assumption that their 1s and 0s and logic gates are non-experiential, then digital computers are zombies. Less obviously, digital computers are zombies even if we don’t make this standard assumption! Imagine, fancifully, replacing the non-experiential 1s and 0s of computer software with discrete “pixels” of experience. Run the program as before. The upshot will still be a zombie, or more technically a micro-experiential zombie. What’s more, neither increasing the complexity of the code nor exponentially increasing the speed of its execution could cause discrete “pixels” somehow to blend into each other in virtue of their functional role, let alone create phenomenally-bound perceptual objects or a unitary self experiencing a unified phenomenal world. The same is true of a connectionist system (cf. https://en.wikipedia.org/wiki/Connectionism), supposedly more closely modelled on the brain — however well-connected and well-trained the network, and regardless of whether its nodes are experiential or non-experiential. The synchronous firing of distributed feature-processors in a “trained up” connectionist system doesn’t generate a unified perceptual object — again on pain of “strong” emergence. AI programmers and roboticists can use workarounds for the inability of classical computers to bind, but they are just that: workarounds.

Those who believe in digital sentience can protest that we don’t know that phenomenal minds can’t emerge at some level of computational abstraction in digital computers. And they are right! If abstract objects have the causal power to create conscious experience, then digital computer programs might be subjects of experience. But recall we’re here assuming physicalism. If physicalism is true, then even if consciousness is fundamental to the world, we can know that digital computers are — at most — micro-experiential zombies.

Of course, monistic physicalism may be false. “Strong” emergence may be real. But if so, then reality seems fundamentally lawless. The scientific world-picture would be false.

Yet how do biological minds routinely accomplish binding if phenomenal binding is impossible for any classical digital computer (cf. https://en.wikipedia.org/wiki/Universal_Turing_machine)? Even if our neurons support rudimentary “pixels” of experience, why aren’t animals like us in the same boat as classical digital computers or classically parallel connectionist systems?

I can give you my tentative answer. Naïvely, it’s the reductio ad absurdum of quantum mind: “Schrödinger’s neurons”: https://www.hedweb.com/quora/2015.html#quantumbrain.

Surprisingly, it’s experimentally falsifiable via interferometry: https://en.wikipedia.org/wiki/Quantum_mind#David_Pearce 

Yet the conjecture I explore may conceivably be of interest only to someone who already feels the force of the binding problem. Plenty of researchers would say it’s a ridiculous solution to a nonexistent problem. I agree it’s crazy; but it’s worth falsifying. Other researchers just lump phenomenal binding together with the Hard Problem (cf. https://www.hedweb.com/quora/2015.html#categorize) as one big insoluble mystery they suppose can be quarantined from the rest of scientific knowledge.

I think their defeatism and optimism alike are premature. 

MV: Thanks, David. A lot to discuss there, obviously.

Perhaps the most crucial point to really appreciate in order to understand your skepticism is that you are a strict monist about reality. That is, “the experiential” is not something over and above “the physical”, but rather identical with it (which, to be clear, does not imply that all physical things have minds, or complex experiences). And so if “the mental” and “the physical” are essentially the same ontological thing, or phenomenon, under two different descriptions, then there must, roughly speaking, also be a match in terms of their topological properties.

As Mike Johnson explained your view: “consciousness is ‘ontologically unitary’, and so only a physical property that implies ontological unity … could physically instantiate consciousness.” (Principia Qualia, p. 73). (Note that “consciousness” here refers to an ordered, composite mind; not phenomenality more generally.)

Conversely, a system that is physically discrete or disconnected — say, a computer composed of billiard balls that bump into each other, or lighthouses that exchange signals across hundreds of kilometers — could not, on your view, support a unitary mind. In terms of the analogy of thinking about consciousness as waves, your view is roughly that we should think of a unitary mind as a large, composite wave of sorts, akin to a song, whereas disconnected “pixels of experience” are like discrete microscopic proto-waves, akin to tiny disjoint blobs of sound. (And elsewhere you quote Seth Lloyd saying something similar about classical versus quantum computations: “A classical computation is like a solo voice — one line of pure tones succeeding each other. A quantum computation is like a symphony — many lines of tones interfering with one another.”)

This is why you say that “computer software with discrete ‘pixels’ of experience will still be a micro-experiential zombie”, and why you say that “even if consciousness is fundamental to the world, we can know that digital computers are at most micro-experiential zombies” — it’s because of this physical discreteness, or “disconnectedness”.

And this is where it seems to me that the computational view of mind is also starkly at odds with common sense, as well as with monism. For it seems highly counterintuitive to claim that billiard balls bumping into each other, or lighthouses separated by hundreds of kilometers that exchange discrete signals, could, even in principle, mediate a unitary mind. I wonder whether most people who hold a computational view of mind are really willing to bite this bullet. (Such views have also been elaborately criticized by Mike Johnson and Scott Aaronson — critiques that I have seen no compelling replies to.)

It also seems non-monistic in that it appears impossible to give a plausible account of where a unitary mind is supposed to be found in this picture (e.g. in a picture with discrete computations occurring serially over long distances), except perhaps as a separate, dualist phenomenon that we somehow map onto a set of physically discrete computations occurring over time, which seems to me inelegant and unparsimonious. Not to mention that it gives rise to an explosion of minds, as we can then see minds in a vast set of computations that are somehow causally connected across time and space, with the same computations being included in many distinct minds. This picture is at odds with a monist view that implies a one-to-one correspondence between concrete physical state and concrete mental state — or rather, which sees these two sides as distinct descriptions of the exact same reality.

The question is then how phenomenal binding could occur. You explore a quantum mind hypothesis involving quantum coherence. So what are your reasons for thinking that quantum coherence is necessary for phenomenal binding? Why would, say, electromagnetic fields in a synchronous state not be enough?

DP: If the phenomenal unity of mind is an effectively classical phenomenon, then I have no idea how to derive the properties of our phenomenally bound minds from decohered, effectively classical neurons — not even in principle, let alone in practice. 

MV: And why is that? What is it that makes deriving the properties of our phenomenally bound minds seem feasible in the case of coherent states, unlike in the case of decohered ones?

DP: Quantum coherent states are individual states — i.e. fundamental physical features of the world — not mere unbound aggregates of classical mind-dust. On this story, decoherence (cf. https://arxiv.org/pdf/1911.06282.pdf) explains phenomenal unbinding.

MV: So it is because only quantum coherent states could constitute the “ontological unity” of a unitary, “bound” mind. Decoherent states, on your view, are not and could not be ontologically unitary in the required sense?

DP: Yes!

Digital computing depends on effectively classical, decohered individual bits of information, whether implemented in Turing’s original tape set-up, in a modern digital computer, or indeed in the world’s population of skull-bound minds, should they agree to participate in an experiment to see if a global mind can emerge from a supposed global brain.

One can’t create perceptual objects, let alone unified minds, from classical mind-dust, even if, strictly speaking, the motes of decohered “dust” are only effectively classical, i.e. their phase information has leaked away into the environment. If the 1s and 0s of a digital computer are treated as discrete micro-experiential pixels, then when running a program we don’t need to consider the possibility of coherent superpositions of 1s and 0s / micro-experiences. If the bits weren’t effectively classical and discrete, then the program wouldn’t execute.

MV: In other words, you are essentially saying that binding/unity between decohered states is ultimately no more tenable than binding/unity between, say, two billiard balls separated by a hundred miles? Because they are in a sense similarly ontologically separate?

DP: Yes!

MV: So to summarize, your argument is roughly the following: 

  1. observed phenomenal binding, or a unitary mind, combined with 
  2. an empirically well-motivated monistic physicalism, means that
  3. we must look for a unitary physical state as the “mediator”, or rather the physical description, of mind [since the ontological identity from (2) implies that the phenomenal unity from (1) must be paralleled in our physical description], and it seems that
  4. only quantum coherent states could truly fit the bill of such ontological unity in physical terms.

DP: 1 to 4, yes!

MV: Cool. And in step 4 in particular, to spell that out more clearly, the reasoning is roughly that classical states are effectively (spatiotemporally) serial, discrete, disconnected, etc. Quantum coherent states, in contrast, are a connected, unitary, individual whole.

Classical bits in a sense belong to disjoint “ontological sets”, whereas qubits belong to the same “ontological set” (as I’ve tried to illustrate somewhat clumsily below, and in line with Seth Lloyd’s quote above).

Is that a fair way to put it?

DP: Yes!

I sometimes say that who will play Mendel to Zurek’s Darwin is unknown. If experience discloses the intrinsic nature of the physical, i.e. if non-materialist physicalism is true, then we must consider the nature of experience at what are intuitively absurdly short timescales in the CNS. At sufficiently fine-grained temporal resolutions, we can’t just assume the existence of decohered macromolecules, neurotransmitters, receptors, membrane-bound neurons, etc. — they are weakly emergent, dynamically stable patterns of “cat states”. These high-level patterns must be derived from quantum bedrock — which of course I haven’t done. All I’ve done is make a “philosophical” conjecture that (1) quantum coherence mediates the phenomenal unity of our minds; and (2) quantum Darwinism (cf. https://www.sciencemag.org/news/2019/09/twist-survival-fittest-could-explain-how-reality-emerges-quantum-haze) offers a ludicrously powerful selection-mechanism for sculpting what would otherwise be mere phenomenally-bound “noise”.

MV: Thanks for that clarification.

I guess it’s also worth stressing that you do not claim this to be any more than a hypothesis, while you at the same time admit that you have a hard time seeing how alternative accounts could explain phenomenal binding.

Moreover, it’s worth stressing that the conjecture resulting from your line of reasoning above is in fact, as you noted, a falsifiable one — a rare distinction for a theory of consciousness.

A more general point to note is that skepticism about digital sentience need not be predicated on the conjecture you presented above, as there are other theories of mind — not necessarily involving quantum coherence — that also imply that digital computers are unable to mediate a conscious mind (including some of the theories hinted at above, and perhaps other, more recent theories). For example, one may accept steps 1-3 in the argument above, and then be more agnostic in step 4, with openness to the possibility that binding could be achieved in other ways, yet while still considering contemporary digital computers unlikely to be able to mediate a unitary mind (e.g. because of the fundamental architectural differences between such computers and biological brains).

Okay, having said all that, let’s now move on to a slightly different issue. Beyond digital sentience in particular, you have also expressed skepticism regarding artificial sentience more generally (i.e. non-digital artificial sentience). Can you explain the reasons for this skepticism?

DP: Well, aeons of posthuman biological minds probably lie ahead. They’ll be artificial — genetically rewritten, AI-augmented, most likely superhumanly blissful, but otherwise inconceivably alien to Darwinian primitives. My scepticism is about the supposed emergence of minds in classical information processors — whether programmable digital computers, classically parallel connectionist systems or anything else.

What about inorganic quantum minds? Well, I say a bit more e.g. here: https://www.hedweb.com/quora/2015.html#nonbiological

A pleasure-pain axis has been so central to our experience that sentience in everything from worms to humans is sometimes (mis)defined in terms of the capacity to feel pleasure and pain. But essentially, I see no reason to believe that such (hypothetical) phenomenally bound consciousness in future inorganic quantum computers will support a pleasure-pain axis any more than, say, the taste of garlic.

In view of our profound ignorance of physical reality, however, I’m cautious: this is just my best guess!

MV: Interesting. You note that you see no reason to believe that such systems would have a pleasure-pain axis. But what about the argument that pain has proven exceptionally adaptive over the course of biological evolution, and might thus plausibly prove adaptive in future forms of evolution as well (assuming things won’t necessarily be run according to civilized values)? 

DP: Currently, I can’t see any reason to suppose hedonic tone (or the taste of garlic) could be instantiated in inorganic quantum computers. If (a big “if”) the quantum-theoretic version of non-materialist physicalism is true, then it’s subjectively like something to be an inorganic quantum computer, just as it’s subjectively like something to be superfluid helium — a nonbiological macro-quale. But out of the zillions of state-spaces of experience, why expect that the state-space of phenomenally-bound experience that inorganic quantum computers hypothetically support will include hedonic tone? My guess is that futuristic quantum computers will instantiate qualia for which humans have neither name nor conception, and which have no counterpart in biological minds.

All this is very speculative! It’s an intuition, not a rigorous argument.

MV: Fair enough. What then is your view of hypothetical future computers built from biological neurons?

DP: Artificial organic neuronal networks are perfectly feasible. Unlike silicon-based “neural networks” — a misnomer in my view — certain kinds of artificial organic neuronal networks could indeed suffer. Consider the reckless development of “mini-brains”.

MV: Yeah, it should be uncontroversial that such developments entail serious risks.

Okay, David. What you have said here certainly provides much food for thought. Thanks a lot for patiently exploring these issues with me, and not least for all your work and your dedication to reducing the suffering of all sentient beings.

DP: Thank you, Magnus. You’re very kind. May I just add a recommendation? Anyone who hasn’t yet done so should read your superb Suffering-Focused Ethics (2020).

Consciousness – Orthogonal or Crucial?

The following is an excerpt from my book Reflections on Intelligence (2016/2020).

 

A question often considered open, sometimes even irrelevant, when it comes to “AGIs” and “superintelligences” is whether such entities would be conscious. Here is Nick Bostrom expressing such a sentiment:

By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue or what have you. It also leaves open whether the superintelligence is conscious and has subjective experiences.

(Bostrom, 2012, “Definition of ‘superintelligence’”)

This is false, however. On no meaningful definition of “more capable than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills” can the question of consciousness be considered irrelevant. This is like defining a “superintelligence” as an entity “smarter” than any human, and then claiming that this definition leaves open whether such an entity can read natural language or perform mathematical calculations. Consciousness is integral to virtually everything we do and excel at, and thus if an entity is not conscious, it cannot possibly outperform the best humans “in practically every field”. Especially not in “scientific creativity, general wisdom, and social skills”. Let us look at these three in turn.

Social Skills

Good social skills depend on an ability to understand others. And in order to understand other people, we have to simulate what it is like to be them. Fortunately, this comes quite naturally to most of us. We know what it is like to consciously experience emotions such as sadness, fear, and joy directly, and this enables us to understand where people are coming from when they report and act on these emotions.

Consider the following example: without knowing anything about a stranger you observe on the street, you can roughly know how that person would feel and react if they suddenly, by the snap of a finger, had no clothes on right there on the street. Embarrassment, distress, wanting to cover up and get away from the situation are almost certain to be the reaction of any randomly selected person. We know this, not because we have read about it, but because of our immediate simulations of the minds of others – one of the main things our big brains evolved to do. This is what enables us to understand the minds of other people, and hence without running this conscious simulation of the minds of others, one will have no chance of gaining good social skills and interpersonal understanding.

But couldn’t a computer just simulate people’s brains and then understand them without being conscious? Is the consciousness bit really relevant here?

Yes, consciousness is relevant. At the very least, it is relevant for us. Consider, for instance, the job of a therapist, or indeed the “job” of any person who attempts to listen to another person in a deep conversation. When we tell someone about our own state or situation, it matters deeply to us that the listener actually understands what we are saying. A listener who merely pretends to feel and understand would be no good. Indeed, this would be worse than no good, as such a “listener” would then essentially be lying and deceiving in a most insensitive way, in every sense of the word.

Frustrated Human: “Do you actually know the feeling I’m talking about here? Do you even know the difference between joy and hopeless despair?”

Unconscious liar: “Yes.”

Whether someone is actually feeling us when we tell them something matters to us, especially when it comes to our willingness to share our perspectives, and hence it matters for “social skills”. An unconscious entity cannot have better social skills than “the best human brains” because it would lack the very essence of social skills: truly feeling and understanding others. Without a conscious mind there is no way to understand what it is like to have such a mind.

General Wisdom

Given how relevant social skills are for general wisdom, and given the relevance of consciousness for social skills, the claim that consciousness is irrelevant to general wisdom should already stand in serious doubt at this point.

Yet rather than restricting our focus to “general wisdom”, let us consider ethics in its entirety, which, broadly construed at least, includes any relevant sense of “general wisdom”. For in order to reason about ethics, one must be able to consider and evaluate questions like the following:

Can certain forms of suffering be outweighed by a certain amount of happiness?

Does the nature of the experience of suffering in some sense demand that reducing suffering is given greater moral priority than increasing happiness (for the already happy)?

Can realist normative claims be made on the basis of the properties of such experiences?

One has to be conscious to answer such questions. That is, one must know what such experiences are like in order to understand their experiential properties and significance. Knowing what terms like “suffering” and “happiness” refer to – i.e. knowing what the actual experiences of suffering and happiness are like – is as crucial to ethics as numbers are to mathematics.

The same point holds true about other areas of philosophy that bear on wisdom, such as the philosophy of mind: without knowing what it is like to have a conscious mind, one cannot contribute to the discussion about what it is like to have one and what the nature of consciousness is. Indeed, an unconscious entity has no idea about what the issue is even about in the first place.

So both in ethics and in the philosophy of mind, an unconscious entity would be less than clueless about the deep questions at hand. If an entity not only fails to surpass humans in this area, but fails to even have the slightest clue about what we are talking about, it hardly surpasses the best human brains in practically every field. After all, these questions are also relevant to many other fields, ranging from questions in psychology to questions concerning the core foundations of knowledge.

Experiencing and reasoning about consciousness is a most essential part of “human abilities”, and hence an entity that cannot do this cannot be claimed to surpass humans in the most important, much less all, human abilities.

Scientific Creativity

The third and final ability mentioned above that an unconscious entity can supposedly surpass humans in is scientific creativity. Yet scientific creativity must relate to all fields of knowledge, including the science of the conscious mind itself. This is also a part of the natural world, and a most relevant one at that.

Experiencing and accurately reporting what a given state of consciousness is like is essential for the science of mind, yet an unconscious entity obviously cannot do such a thing, as there is no experience it can report from. It cannot display any scientific creativity, or even produce mere observations, in this most important science. Again, the most it can do is produce lies – the very anti-matter of science.

 

Free Will: Emphasizing Possibilities

I suspect the crux of discussions and worries about (the absence of) “free will” is the issue of possibilities. I also think it is a key source of confusion. Different people are talking about possibilities in different senses without being clear about it, which leads them to talk past each other, and perhaps even to confuse and dispirit laypeople by making them feel they have no possibilities in any sense whatsoever.

Different Emphases

Thinkers who take different positions on free will tend to emphasize different things. One camp tends to say “we don’t have free will, since all our actions are caused by prior causes that are ultimately beyond our own control, and in this there are no ‘alternative possibilities’”.

Another camp, so-called compatibilists, will tend to agree with the point about prior causes, but they choose to emphasize possibilities: “complex agents can act within a range of possibilities in a way crude objects like rocks cannot, and such agents truly do weigh and choose between these options”.

In essence, what I think the latter camp is emphasizing is the fact that we have ex-ante possibilities: a range of possibilities we can choose from in expectation. (For example, in a game of chess, your ex-ante possibilities consist of the set of moves allowed by the rules of the game.) And since this latter camp defines free will roughly as the ability to make choices among such ex-ante possibilities, they conclude that we indeed do have free will.

I doubt any philosopher arguing against the existence of free will would deny the claim that we have ex-ante possibilities. After all, we all conceive of various possibilities in our minds that we weigh and choose between, and we indeed cannot talk meaningfully about ethics, or choices in general, without such a framework of ex-ante possibilities. (Whether possibilities exist in any other sense than ex ante, and whether this is ethically relevant, are separate questions.)

Given the apparent agreement on these two core points — 1) our actions are caused by prior causes, and 2) we have ex-ante possibilities — the difference between the two camps mostly seems to lie in how they define free will and whether they prefer to emphasize 1) or 2).

The “Right” Definition of Free Will

People in these two camps will often insist that their definition of free will is the one that matches what most people mean by free will. I think both camps are right and wrong about this. I think it is misguided to think that most people have anything close to a clear definition of free will in their minds, as opposed to having a jumbled network of associations that relate to a wide range of notions, including notions of independence from prior causes and notions of ex-ante possibilities.

Experimental philosophy indeed also hints at a much more nuanced picture of people’s intuitions and conceptions of “free will”, and reveals them to be quite unclear and conflicting, as one would expect.

Emphasizing Both

I believe the two distinct emphases outlined above are both important yet insufficient on their own.

The emphasis on prior causes is important for understanding the nature of our choices and actions. In particular, it helps us understand that our choices do not comprise a break with physical mechanism, but that they are indeed the product of such complex mechanisms (which include the mechanisms of our knowledge and intentions, as well as the mechanism of weighing various ex-ante possibilities).

In turn, this emphasis may help free us from certain bad ideas about human choices, such as naive ideas about how anyone can always pull themselves up by their bootstraps. It may also help us construct better incentives and institutions based on an actual understanding of the mechanism of our choices rather than supernatural ideas about them. Lastly, it may help us become more understanding toward others, such as by reminding us that we cannot reasonably expect people to act on knowledge they do not possess.

Similarly, emphasizing our ex-ante possibilities is important for our ability to make good decisions. Mistakenly believing that one has only one possibility, ex ante, rather than thinking through all possibilities will likely lead to highly sub-optimal outcomes, whether it be in a game of chess or a major life decision. Aiming to choose the ex-ante possibility that seems best in expectation is crucial for us to make good choices. Indeed, this is what good decision-making is all about.

More than that, an emphasis on ex-ante possibilities can also help instill in us the healthy and realistic versions of bootstrap-pulling attitudes, namely that hard work and dedication indeed are worthwhile and truly can lead us in better directions.

Both Emphases Have Pitfalls (in Isolation)

Our minds intuitively draw inferences and associations based on the things we hear. When it comes to “free will”, I suspect most of us have quite leaky conceptual networks, in that the distinct clusters of sentiments we intuitively tie to the term “free will” readily cross-pollute each other — a form of sentiment synesthesia.

So when someone says “we don’t have free will, everything is caused by prior causes”, many people may naturally interpret this as implying “we don’t have ex-ante possibilities, and so we cannot meaningfully think in terms of alternative possibilities”, even though this does not follow. This may in turn lead to bad decisions and feelings of disempowerment. It may also lead people to think that it makes no sense to punish people, or that we cannot meaningfully say things like “you really should have made a better choice”. Yet these things do make sense. They serve to create incentives by making a promise for the future — “people who act like this will pay a price” — which in turn nudges people toward some of their ex-ante possibilities over others.

More than that, a naive emphasis on the causal origins of our actions may also lead people to think that certain feelings — such as pride, regret, and hatred — are always unreasonable and should never be entertained. Yet this does not follow either. Indeed, these feelings likely have great utility in some circumstances, even if such circumstances are rare.

A similar source of confusion is to say that our causal nature implies that everything is just a matter of luck. Although this is true in some ultimate sense, in another sense — the everyday sense that distinguishes between things won through hard effort versus dumb luck — everything is obviously not just a matter of luck. And I suspect most people’s intuitive associations can also be leaky between these very different notions of “luck”. Consequently, unreserved claims about everything being a matter of luck also risk having unfortunate effects, such as leading us to underemphasize the importance of effort.

Such pitfalls also exist relative to the claim “you could not have done otherwise”. For what we often mean by this claim, when we talk about specific events in everyday conversations, is that “this event would have happened even if you had done things differently” (that is: the environment constrained you, and your efforts were immaterial). This is very different from saying, for example, “you could not have done otherwise because your deepest values compelled you” (meaning: the environment may well have allowed alternative possibilities, but your values did not). The latter is often true of our actions, yet it is in many ways the very opposite of what we usually mean by “you could not have done otherwise”.

Hence, confusion is likely to emerge if someone simply declares “you could not have done otherwise” about all actions without qualification. And such confusion may well persist even in the face of explicit qualifications, since confusions deep down at the intuitive level may not be readily undone by just a few cerebral remarks.

Conversely, there are also pitfalls of sentiment leakiness in the opposite direction. When someone says “ex-ante possibilities are real, and they play a crucial role in our decision-making”, people may naturally interpret this as implying “our actions are not caused by prior causes, and this is crucial for our decision-making”. And this may in turn lead to the above-mentioned mistakes that the prior-causes emphasis can help us avoid: misunderstanding our mechanistic nature and failing to act on such an understanding, as well as entertaining unreasonable ideas about how we can expect people to act.

 

This is why one has to be careful in one’s communication about “free will”, and to clearly flag these non sequiturs. “We are caused by prior causes” does not mean “we have no ex-ante possibilities”, and conversely, “we have ex-ante possibilities” does not imply “we are not caused by prior causes”.

 


Acknowledgments: Thanks to Mikkel Vinding for comments.

Thinking of Consciousness as Waves

First written: Dec 14, 2018. Last update: Jan 2, 2019.

 

How can we think about the relationship between the conscious and the physical? In this essay I wish to propose a way of thinking about it that might be fruitful and surprisingly intuitive, namely to think of consciousness as waves.

The idea is quite simple: one kind of conscious experience corresponds to, or rather conforms to description in terms of, one kind of wave. And by combining different kinds of waves, we can obtain an experience with many different properties in one.

It should be noted that in this post I merely refer to waves in an abstract sense to illustrate a general point. That is, I do not refer to electromagnetic waves in particular (as some theories of consciousness do), nor to quantum waves (as other theories do), nor to any other particular kind of wave (such as Selen Atasoy’s so-called connectome-specific harmonic waves*). The point here is not what kind of wave, or indeed which physical state in general, mediates different states of consciousness. The point is merely to devise a metaphor that can render intuitive the seemingly unintuitive, namely: how can we get something complex and multifaceted from something very simple without having anything seemingly spooky or strange, such as strong emergence, in between? In particular, how can we say that brains mediate conscious experience without saying that, say, electrons mediate conscious experience? I believe thinking about consciousness in terms of waves can help dissolve this confusion.

The magic of waves is that we can produce (or approximate to an arbitrary level of precision) any kind of complex, multifaceted wave by adding simple sine waves together.

 

[Figure: Sine waves with different frequencies.]

 

In this way, it is possible, for instance, to decompose any recorded song — itself a complex, multifaceted wave — into simple, tedious-sounding sine waves. Each resulting sine wave can be said to comprise an aspect of the song, yet not in any recognizable way. The whole song is in fact a sum of such waves, not in a strange way that implies strong emergence, but merely in a complicated, composite way.
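To make this concrete, here is a minimal sketch in Python (using numpy) of the decomposition-and-resynthesis point. The two-tone “song”, its sampling rate, and the chosen frequencies are all illustrative assumptions, not features of any real recording; the point is only that the complex whole is exactly a sum of simple sinusoidal parts.

```python
import numpy as np

# A minimal sketch: a sampled waveform is exactly a sum of sinusoids,
# recoverable via the discrete Fourier transform. The "song" here is an
# illustrative two-tone signal, not a real recording.

rate = 8000                      # samples per second (assumed)
t = np.arange(0, 1.0, 1 / rate)  # one second of "audio"

# A crude "song": two superimposed tones (440 Hz and 660 Hz).
song = 0.6 * np.sin(2 * np.pi * 440 * t) + 0.4 * np.sin(2 * np.pi * 660 * t)

# Decompose the song into its sine-wave components...
spectrum = np.fft.rfft(song)

# ...and resynthesize the whole from those simple parts.
reconstructed = np.fft.irfft(spectrum, n=len(song))

# The sum of simple components reproduces the complex whole, with nothing
# like "strong emergence" in between.
assert np.allclose(song, reconstructed)
```

Each component on its own is a plain, tedious-sounding sine wave; their sum is the song.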

Another way to think about waves that can help us think more clearly about emergent complexity is to think of a wave that is very small in both amplitude and duration. If this were a sound wave, it would be an extremely short-lived, extremely low-volume sound. On a visual representation of an entire song file, this sound would look more akin to a dot than a wave.

 

[Figure: A dot.]

 

And such simple sound waves can also be put together so as to create a song (for instance, one can take the sine waves obtained by decomposing a song, chop them into smaller bits, and decrease their amplitude). Making a song this way will simply require a very great number of such small waves, superimposed (if the song is to be loud enough to hear) and in succession (if the song is to last for more than a split-second).

 

The deeper point here is that waves are waves, no matter how small or simple, large or complex. Yet not all waves comprise what we would recognize as music. Similarly, even if all physical states are phenomenal in the broadest sense, this does not imply that they are conscious in the sense of being an ordered, multifaceted whole. Unfortunately, we do not as yet have good, analogous terms for “sound” and “music” in the phenomenal realm — perhaps we could use “phenomenality” and “consciousness”, respectively?

The problem is indeed that we are limited by language, in that the word “conscious” usually only connotes an ordered, composite mind rather than the property of phenomenality in the most general sense. Consequently, if we think all that exists is either music or non-sound, metaphorically speaking, we are bound to be confused. But if we instead expand our vocabulary, and thereby expand our allowed ways of thinking, our confusion can, I think, be readily dissolved. If we think of the phenomenality of the simplest physical systems as being nothing like consciousness in the usual sense of a composite mind but rather as a state of hyper-crude phenomenality — i.e. “phenomenal noise” that is nothing like a song but more akin to a low, short-lived sound, and yet unimaginably more crude still — then the problem of consciousness, as commonly (mis)conceived, seems to become a lot less confusing.**

Avoiding Confusion Due to Fuzziness

A more specific point of confusion the wave metaphor can help us dissolve is the notion that consciousness is so fuzzy a category that it in fact does not really exist, just like tables and chairs do not really exist. As I have argued elsewhere, I think this is a non sequitur. The fact that the categories of tables and chairs are themselves fuzzy does not imply that the physical properties of the objects to which we refer with these labels are inexact, let alone non-existent. The objects have the physical properties they have regardless of how we label them. Or, to continue the analogy to waves above, and songs in particular: although there is ambiguity about what counts as a song, this does not imply that we cannot speak in precise, factual terms about the properties of a given song — for instance, whether a given song contains a 440 Hz tone.
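To illustrate, here is a hypothetical sketch (again in Python with numpy) of how one might check whether a waveform contains a 440 Hz tone. The test signal, band width, and threshold are illustrative assumptions; the point is that the answer is a precise, factual property of the waveform, however fuzzy the category “song” may be.

```python
import numpy as np

def contains_tone(signal, rate, target_hz, threshold=0.1):
    """Check for appreciable energy near target_hz (illustrative heuristic)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / rate)
    band = np.abs(freqs - target_hz) < 2.0   # a narrow band around the target
    return spectrum[band].max() / spectrum.max() > threshold

rate = 8000
t = np.arange(0, 1.0, 1 / rate)
song = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 523 * t)

print(contains_tone(song, rate, 440))  # True: the tone is there
print(contains_tone(song, rate, 880))  # False: no appreciable energy at 880 Hz
```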

Similarly, the fact that consciousness, as in “an ordered, composite mind”, is a fuzzy category (after all, what counts as ordered? Do psychotic states? Fleeting dreams?) does not imply that any given phenomenal state we refer to with this term does not have exact and clearly identifiable phenomenal properties — e.g. an experience of the color red or the sensation of fear; properties that exist regardless of how outside observers choose to label them.

And although our labels for categorizing particular phenomenal states themselves tend to be fuzzy to some extent — e.g. which part of the spectrum below counts as red? — this does not imply that we cannot distinguish between different states, nor that we cannot draw any clear boundaries. For instance, we can clearly distinguish between the blue and the red zones respectively on the illustration below despite its gradation.

 

[Figure: A linear representation of the visible light spectrum, with wavelengths in nanometers.]

 

Just as we can point toward a confined range of wavelengths which induce an experience of (some kind of) red in most people upon hitting their retinas, we can also, in principle, point to a range of physical states that mediate specific phenomenal states. This includes the phenomenal states we call suffering, with the fuzziness of what counts as suffering contained within and near the bounds of this range, while the physical states outside this range, especially those far away, do not mediate suffering, cf. the non-red range in the illustration above.
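As a trivial sketch of this point: the numeric boundaries below are illustrative approximations of conventional color ranges, not exact facts, yet classification far from the fuzzy edges is entirely unambiguous.

```python
# Illustrative, approximate wavelength boundaries (in nanometers). The
# fuzziness of "red" lives near the edges; far from the boundaries,
# classification is unambiguous.
def color_category(wavelength_nm):
    if 620 <= wavelength_nm <= 750:
        return "red"
    if 450 <= wavelength_nm < 495:
        return "blue"
    return "other"

print(color_category(680))  # "red" -- clearly within the red range
print(color_category(470))  # "blue"
print(color_category(550))  # "other" -- clearly outside both ranges
```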

Thus, by analogy to how we can have precise descriptions of the properties of a song, even as an exact definition of what counts as a song escapes us, there is no reason why we should not be able to speak in factual and precise terms about the phenomenal aspects of a mind and its physical signatures, including the “red range” of wavelengths that comprise phenomenal suffering, metaphorically speaking. And a sophisticated understanding of this notional range is indeed of paramount importance for the project of reducing suffering.


* Note that these seemingly different kinds of waves and theories of consciousness can be identical, since connectome-specific harmonic waves could turn out to be coherent waves in the electromagnetic quantum field, as would seem suggested by a hypothesis known as quantum brain dynamics (I do not necessarily endorse this particular hypothesis).

** Another useful analogy for thinking more clearly about the seemingly crazy notion that “everything is conscious” — or rather: phenomenal — is to think about the question, Is everything light? For in a highly non-standard sense, everything is indeed “light”, in that electromagnetic waves permeate the universe in the form of cosmic background radiation, although everything is not permeated by light in the usual sense of visible electromagnetic radiation (wavelengths around 400–700 nm). We may thus think of consciousness as analogous to visible light (they can also both be more or less intense and have various nuances), and electromagnetic radiation as analogous to phenomenality — the more general phenomenon that encompasses the specific one.

 

“The Physical” and Consciousness: One World Conforming to Different Descriptions

My aim in this essay is to briefly explain a crucial aspect of David Pearce’s physicalist idealist worldview. In particular, I seek to explain how a view can be both “idealist” and “physicalist”, yet still be a “property monist” view.

Pearce himself describes his view in the following way:

“Physicalistic idealism” is the non-materialist physicalist claim that reality is fundamentally experiential and that the natural world is exhaustively described by the equations of physics and their solutions […]

So Pearce’s view is a monist, idealist view: reality is fundamentally experiential. And this reality also conforms to description in physical terms. Pearce is careful, however, to distinguish this view from panpsychism, which Pearce, in contrast to his own idealist view, considers a property dualist view:

“Panpsychism” is the doctrine that the world’s fundamental physical stuff also has primitive experiential properties. Unlike the physicalistic idealism explored here, panpsychism doesn’t claim that the world’s fundamental physical stuff is experiential. Panpsychism is best treated as a form of property-dualism.

How, one may wonder, is Pearce’s view different from panpsychism, and from property dualist views more generally? This is something I myself have struggled a lot to understand, and have asked him about repeatedly. And my understanding is the following: according to Pearce, there is only consciousness, and its dynamics conform to physical description. Property dualist views, in contrast, view the world as having two kinds of properties: the stuff of the world has insentient physical properties to which separate, experiential properties are somehow attached.

Pearce’s view makes no such division. Instead, on Pearce’s view, description in physical terms merely constitutes a particular (phenomenal) mode of description that (phenomenal) reality conforms to. So to the extent there is a dualism here, it is epistemological, not ontological.

The Many Properties of Your Right Ear

For an analogy that might help explain this point better, consider your right ear. What properties does it have? Setting aside the question concerning its intrinsic nature, it is clear that you can model it in various ways. One way is to touch it with your fingers, whereby you model it via your faculties of tactile sensation (or in neuroanatomical terms: with neurons in your parietal lobe). You may also represent your ear via auditory sensations, for example by hitting it and noticing what kind of sound it makes (a sensation mediated by the temporal lobe). Another way, perhaps the clearest and most practical way for beings like us, is to model it in terms of visual experience: to look at your right ear in the mirror, or perhaps simply imagine it, and thereby have a visual sensation that represents it (mediated by the occipital lobe).

[For most of us, these different forms of modeling are almost impossible to keep separate, as our touching our ears automatically induces a visual model of them as well, and vice versa: a visual model of an ear will often be accompanied by a sense of what it would be like to touch it. Yet one can in fact come a surprisingly long way toward being able to “unbind” these sensations with a bit of practice. This meditation and this one both provide a good exercise in detaching one’s tactile sense of one’s hands from one’s visual model of them. This one goes even further, as it climaxes with a near-total dissolution of our automatic binding of different modes of experience into an ordered whole.]

Now, we may ask: which of these modes of modeling constitute the modeling we call “physical”? And the answer is arguably all of them, as they all relate to the manifestly external (“physical”) world. This is unlike, say, things that are manifestly internal, such as emotions and thoughts, which we do not tend to consider “physical” in this same way, although all our sensations are, of course, equally internal to our mind-brain.

“The physical” is in many ways a poorly defined folk term, and physics itself is not exempt from this ambiguity. For instance, what phenomenal mode does the field of physics draw upon? Well, it is certainly more than just the phenomenology of equations (to the extent this can be considered a separate mode of experience). It also, in close connection with how most of us think about equations, draws heavily on visuospatial modes of experience (I once carefully went through a physics textbook that covered virtually all of undergraduate level physics with the explicit purpose of checking whether it all conformed to such description, and I found that it did). And we can, of course, also describe your right ear in “physics” terms, such as by measuring and representing its temperature, its spatial coordinates, its topology, etc. This would give us even more models of your right ear.

 

The deeper point here is that the same thing can conform to description in different terms, and the existence of such a multitude of valid descriptions does not imply that the thing described itself has a multitude of intrinsic properties. In fact, none of the modes of modeling an ear mentioned above say anything about the intrinsic properties of the ear; they only relate to its reflection, in the broadest sense.

And this is where some people will object: why believe in any intrinsic properties? Indeed, why believe in anything but the physical, “reflective”, (purportedly) non-phenomenal properties described above?

To me, as well as to David Pearce (and Galen Strawson and many others), this latter claim is self-undermining and senseless, like a person reading from a book who claims that the paper of the book they are reading from does not exist, only the text does. All these modes of modeling mentioned above, including all that we deem knowledge of “the physical” are phenomenal. The science we call “physics” is itself, to the extent it is known by anyone, found in consciousness. It is a particular mode of phenomenal modeling of the world, and thus to deny the existence of the phenomenal is also to deny the existence of our knowledge of “physics”.

Indeed, our knowledge of physics and “the physical” attests to this fact as clearly as it attests to anything: consciousness exists. It is a separate question, then, exactly how the varieties of conscious experience relate to descriptions of the world in physical terms, as well as what the intrinsic nature of the stuff of the world is, to the extent it has any. Yet by all appearances, it seems that minds such as our own conform to physical description in terms of what we recognize as brains, and, as with the example of your right ear, such a physical description can take many forms: a visual representation of a mind-brain, what it is like to touch a mind-brain, the number of neurons it has, its temperature, etc.

These are different, yet valid ways of describing aspects of our mind-brains. Yet like the descriptions of different aspects of an ear mentioned above, these “physical” descriptions, while all perfectly valid, still do not tell us anything about the intrinsic nature of the mind-brain. And according to David Pearce, the intrinsic nature of that which we (validly) describe in physical terms as “your brain” is your conscious mind itself. The apparent multitude of aspects of that which we recognize as “brains” and “ears” are just different modes of conscious modeling of an intrinsically monist, i.e. experiential, reality.

 


The view of consciousness explored here may seem counter-intuitive, yet I have argued elsewhere that using waves as a metaphor can help render it less unintuitive, perhaps even positively intuitive.
