Effective altruism and common sense

Thomas Sowell once called Milton Friedman “one of those rare thinkers who had both genius and common sense”.

I am not here interested in Sowell’s claim about Friedman, but rather in his insight into the tension between abstract smarts and common sense, and particularly in how it applies to the effective altruism (EA) community. For it seems to me that there is sometimes an unbalanced ratio of clever abstractions to common sense in EA discussions.

To be clear, my point is not that abstract ideas are unimportant, or even that everyday common sense should generally be favored over abstract ideas. After all, many of the core ideas of effective altruism are highly abstract in nature, such as impartiality and the importance of numbers, and I believe we are right to stand by these ideas. But my point is that common sense is underutilized as a sanity check that can prevent our abstractions from floating into the clouds. More generally, I seem to observe a tendency to make certain assumptions, and to do a lot of clever analysis and deductions based on those assumptions, but without spending anywhere near as much energy exploring the plausibility of these assumptions themselves.

Below are three examples that I think follow this pattern.

Boltzmann brains

A highly abstract idea that is admittedly intriguing to ponder is that of a Boltzmann brain: a hypothetical conscious brain that arises as the product of random quantum fluctuations. Boltzmann brains are a trivial corollary given certain assumptions: let some basic combinatorial assumptions hold for a set amount of time, and we can conclude that a lot of Boltzmann brains must exist in this span of time (at least as a matter of statistical certainty, similar to how we can derive and be certain of the second law of thermodynamics).
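The corollary above can be made concrete with a purely illustrative sketch (the numbers below are made up and do not come from physics): if each “trial” independently has some nonzero probability of producing a given fluctuation, then the expected number of such events grows linearly with the number of trials, however tiny the per-trial probability. The point is only that the conclusion follows mechanically from the independence assumption, which is precisely the assumption questioned below.

```python
# Illustrative sketch with hypothetical numbers: under an assumed
# independence model, even an astronomically small per-trial probability
# yields an expected event count that grows without bound over time.

from fractions import Fraction  # exact arithmetic; floats underflow at these scales

def expected_brains(p_per_trial: Fraction, num_trials: int) -> Fraction:
    """Expected number of fluctuation events in num_trials independent trials."""
    return p_per_trial * num_trials

# A made-up, tiny per-trial probability:
p = Fraction(1, 10**100)

# Expected count scales linearly with the number of trials, so for any
# p > 0 there is some horizon after which events become statistically certain.
assert expected_brains(p, 10**100) == 1
assert expected_brains(p, 10**103) == 1000
```

Note that the sketch says nothing about whether the independence assumption itself holds; it only shows how cheap the conclusion is once the assumption is granted.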

But this does not mean that Boltzmann brains are in fact possible, as the underlying assumptions may well be false. Beyond the obvious possibility that the lifetime of the universe could be too short, it is also conceivable that the combinatorial assumptions that allow a functioning 310 K human brain to emerge in ~ 0 K empty space do not in fact obtain, e.g. because they falsely assume a combinatorial independence concerning the fluctuations that happen in each neighboring “bit” of the universe (or for some other reason). If any such key assumption is false, it could be that the emergence of a 310 K human brain in ~ 0 K space is not in fact allowed by the laws of physics, even in principle, meaning that even an infinite amount of time would never spontaneously produce a 310 K human Boltzmann brain.

Note that I am not claiming that Boltzmann brains cannot emerge in ~ 0 K space. My claim is simply that there is a big step from abstract assumptions to actual reality, and there is considerable uncertainty about whether the starting assumptions in question can indeed survive that step.

Quantum immortality

Another example is the notion of quantum immortality — not in the sense of merely surviving an attempted quantum suicide for improbably long, but in the sense of literal immortality because a tiny fraction of Everett branches continue to support a conscious survivor indefinitely.

This is a case where I think skeptical common sense and a search for erroneous assumptions is essential. Even granting a picture in which, say, a victim of a serious accident survives for a markedly longer time in one branch than in another, there are still strong reasons to doubt that there will be any branches in which the victim will survive for long. Specifically, we have good reason to believe that the measure of branches in which the victim survives will converge rapidly toward zero.

An objection might be that the measure indeed will converge toward zero, but that it never actually reaches zero, and hence there will in fact always be a tiny fraction of branches in which the victim survives. Yet I believe this rests on a false assumption. Our understanding of physics suggests that there is only — and could only be — a finite number of distinct branches, meaning that even if the measure of branches in which the victim survives is approximated well by a continuous function that never exactly reaches zero, the critical threshold that corresponds to a zero measure of actual branches with a surviving victim will in fact be reached, and probably rather quickly.
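The point about finiteness can be illustrated with a toy calculation (all numbers below are arbitrary assumptions of mine, not physical estimates): a continuous survival measure that decays exponentially never literally reaches zero, but if there are only finitely many distinct branches, then a measure below 1/N corresponds to zero actual surviving branches.

```python
# Illustrative sketch with hypothetical numbers: a continuously decaying
# measure stays positive forever, yet with a finite number of distinct
# branches it soon corresponds to zero actual surviving branches.

import math

N = 10**6          # assumed (made-up) finite number of distinct branches
half_life = 1.0    # assumed decay timescale, arbitrary units

def measure_at(t: float) -> float:
    """Continuous approximation of the survival measure at time t."""
    return 0.5 ** (t / half_life)

def surviving_branches(measure: float, total_branches: int) -> int:
    """Number of actual branches a given measure can correspond to."""
    return math.floor(measure * total_branches)

# The continuous measure is still positive after 30 half-lives...
assert measure_at(30) > 0
# ...but it already corresponds to zero actual branches out of N:
assert surviving_branches(measure_at(30), N) == 0
```

On these (hypothetical) numbers, the zero-branch threshold is crossed after only a few dozen half-lives, which illustrates the claim that the critical threshold would probably be reached rather quickly.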

Of course, one may argue that we should still assign some probability to quantum immortality being possible, and that this possibility is still highly relevant in expectation. But I think there are many risks that are much less Pascalian and far more worthy of our attention.

Intelligence explosion

Unlike the two previous examples, this last example concerns an idea that has become quite influential in EA: the notion of a fast and local “intelligence explosion”.

I will not here restate my lengthy critiques of the plausibility of this notion (or the critiques advanced by others). And to be clear, I do not think the effective altruism community is at all wrong to have a strong focus on AI. But the mistake I do think I see is that many abstractly grounded assumptions pertaining to a hypothetical intelligence explosion have received insufficient scrutiny from common sense and empirical data (Garfinkel, 2018, argues along similar lines).

I think part of the problem stems from the fact that Nick Bostrom’s book Superintelligence framed the future of AI in a certain way. Here, for instance, is how Bostrom frames the issue in the conclusion of his book (p. 319):

Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. … We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound. … Some little idiot is bound to press the ignite button just to see what happens.

I realize Bostrom is employing a metaphor here, and I realize that he assigns a substantial credence to many different future scenarios. But the way his book is framed is nonetheless mostly in terms of such a metaphorical bomb that could ignite an intelligence explosion (i.e. FOOM). And it seems that this kind of scenario in effect became the standard scenario many people assumed and worked on, with comparatively little effort going into the more fundamental question of how plausible this future scenario is in the first place. An abstract argument about (a rather vague notion of) “intelligence” recursively improving itself was given much weight, and much clever analysis focusing on this FOOM picture and its canonical problems followed.

Again, my claim here is not that this picture is wrong or implausible, but rather that the more fundamental questions about the nature and future of “intelligence” should be kept more alive, and that our approach to these questions should be more informed by empirical data, lest we misprioritize our resources.

In sum, our fondness for abstractions is plausibly a bias we need to control for. We can do this by applying common-sense heuristics to a greater extent, by spending more time considering how our abstract models might be wrong, and by making a greater effort to hold our assumptions up against empirical reality.

Consciousness – Orthogonal or Crucial?

The following is an excerpt from my book Reflections on Intelligence (2016/2020).


A question often considered open, sometimes even irrelevant, when it comes to “AGIs” and “superintelligences” is whether such entities would be conscious. Here is Nick Bostrom expressing such a sentiment:

By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue or what have you. It also leaves open whether the superintelligence is conscious and has subjective experiences.

(Bostrom, 2012, “Definition of ‘superintelligence’”)

This is false, however. On no meaningful definition of being “much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills” can the question of consciousness be considered irrelevant. This is like defining a “superintelligence” as an entity “smarter” than any human, and then claiming that this definition leaves open whether such an entity can read natural language or perform mathematical calculations. Consciousness is integral to virtually everything we do and excel at, and thus if an entity is not conscious, it cannot possibly outperform the best humans “in practically every field”. Especially not in “scientific creativity, general wisdom, and social skills”. Let us look at these three in turn.

Social Skills

Good social skills depend on an ability to understand others. And in order to understand other people, we have to simulate what it is like to be them. Fortunately, this comes quite naturally to most of us. We know what it is like to consciously experience emotions such as sadness, fear, and joy directly, and this enables us to understand where people are coming from when they report and act on these emotions.

Consider the following example: without knowing anything about a stranger you observe on the street, you can roughly know how that person would feel and react if they suddenly, by the snap of a finger, had no clothes on right there on the street. Embarrassment, distress, wanting to cover up and get away from the situation are almost certain to be the reaction of any randomly selected person. We know this, not because we have read about it, but because of our immediate simulations of the minds of others – one of the main things our big brains evolved to do. This is what enables us to understand the minds of other people, and hence without running this conscious simulation of the minds of others, one will have no chance of gaining good social skills and interpersonal understanding.

But couldn’t a computer just simulate people’s brains and then understand them without being conscious? Is the consciousness bit really relevant here?

Yes, consciousness is relevant. At the very least, it is relevant for us. Consider, for instance, the job of a therapist, or indeed the “job” of any person who attempts to listen to another person in a deep conversation. When we tell someone about our own state or situation, it matters deeply to us that the listener actually understands what we are saying. A listener who merely pretends to feel and understand would be no good. Indeed, this would be worse than no good, as such a “listener” would then essentially be lying and deceiving in a most insensitive way, in every sense of the word.

Frustrated Human: “Do you actually know the feeling I’m talking about here? Do you even know the difference between joy and hopeless despair?”

Unconscious liar: “Yes.”

Whether someone is actually feeling us when we tell them something matters to us, especially when it comes to our willingness to share our perspectives, and hence it matters for “social skills”. An unconscious entity cannot have better social skills than “the best human brains” because it would lack the very essence of social skills: truly feeling and understanding others. Without a conscious mind there is no way to understand what it is like to have such a mind.

General Wisdom

Given how relevant social skills are for general wisdom, and given the relevance of consciousness for social skills, the claim that consciousness is irrelevant to general wisdom should already stand in serious doubt at this point.

Yet rather than restricting our focus to “general wisdom”, let us consider ethics in its entirety, which, broadly construed at least, includes any relevant sense of “general wisdom”. For in order to reason about ethics, one must be able to consider and evaluate questions like the following:

Can certain forms of suffering be outweighed by a certain amount of happiness?

Does the nature of the experience of suffering in some sense demand that reducing suffering is given greater moral priority than increasing happiness (for the already happy)?

Can realist normative claims be made on the basis of the properties of such experiences?

One has to be conscious to answer such questions. That is, one must know what such experiences are like in order to understand their experiential properties and significance. Knowing what terms like “suffering” and “happiness” refer to – i.e. knowing what the actual experiences of suffering and happiness are like – is as crucial to ethics as numbers are to mathematics.

The same point holds true about other areas of philosophy that bear on wisdom, such as the philosophy of mind: without knowing what it is like to have a conscious mind, one cannot contribute to the discussion about what it is like to have one and what the nature of consciousness is. Indeed, an unconscious entity has no idea about what the issue is even about in the first place.

So both in ethics and in the philosophy of mind, an unconscious entity would be less than clueless about the deep questions at hand. If an entity not only fails to surpass humans in this area, but fails to even have the slightest clue about what we are talking about, it hardly surpasses the best human brains in practically every field. After all, these questions are also relevant to many other fields, ranging from questions in psychology to questions concerning the core foundations of knowledge.

Experiencing and reasoning about consciousness is a most essential part of “human abilities”, and hence an entity that cannot do this cannot be claimed to surpass humans in the most important, much less all, human abilities.

Scientific Creativity

The third and final ability mentioned above that an unconscious entity can supposedly surpass humans in is scientific creativity. Yet scientific creativity must relate to all fields of knowledge, including the science of the conscious mind itself. This is also a part of the natural world, and a most relevant one at that.

Experiencing and accurately reporting what a given state of consciousness is like is essential for the science of mind, yet an unconscious entity obviously cannot do such a thing, as there is no experience it can report from. It cannot display any scientific creativity, or even produce mere observations, in this most important science. Again, the most it can do is produce lies – the very anti-matter of science.


Compassionate Free Speech

Two loose currents appear to be in opposition in today’s culture. One is animated by a strong insistence on empathy and compassion as core values, the other by a strong insistence on free speech as a core value. These two currents are often portrayed as though they must necessarily be in conflict. I think this is a mistake.

To be sure, the two values described above can be in tension, and neither of them strictly implies the other. But it is possible to reconcile them in a refined and elegant synthesis. That, I submit, is what we should be aiming for: a synthesis of two vital and mutually reinforcing values.

Definitions and outline

It is crucial to distinguish 1) social and ethical norms, and 2) state-enforced laws. The argument I make here pertains to the first level. That is, I am arguing that we should aim to observe and promote ethical norms of compassion and open conversation, respectively.

What do I mean by these terms? Compassion is commonly defined as “sympathetic consciousness of others’ distress together with a desire to alleviate it”. I here use the term in a broader sense that also covers related virtues such as understanding, charitable interpretation, and kindness.

By norms of open conversation, or free expression, I mean norms that enable people to express their honest views openly, even when these views are controversial and uncomfortable. These norms do not entail that speech should be wholly unrestricted; after all, virtually everyone agrees that defamation and incitements to commit severe crimes should be illegal, as they commonly are.

My view is that we should roughly think of these two broad values as prima facie duties: we should generally strive to observe norms of compassion and open conversation, except in (rare) cases where other duties or virtues override these norms.

Below is a short defense of these two respective values, highlighting their importance in their own right. This is followed by a case that these values are not only compatible, but indeed strongly complementary. Finally, I explore what I see as some of the causes of our current state of polarization, and suggest five heuristics that might be useful going forward.

Brief defenses

Free speech

There are many strong arguments in favor of free speech. A famous collection of such arguments is On Liberty (1859) by John Stuart Mill, whose case for free speech is primarily based on the harm principle: the only reason power can legitimately be exercised over any individual against their will is to prevent harm to others.

This principle is intuitively compelling, although it leaves it unspecified what exactly counts as a harm to others. That is perhaps the main crux in discussions about free speech, and this alone could provide an argument in favor of free and open expression. For how can we clarify what should count as sufficient harm to others to justify the exercise of power if not through open discussion?

A necessary corrective to biased, fallible minds

Another important argument Mill makes in favor of free speech is based not merely on the rights of the speaker, but in equal part on the rights of the would-be listeners, who are also robbed by the suppression of free expression:

[T]he peculiar evil of silencing the expression of an opinion is, that it is robbing the human race; posterity as well as the existing generation; those who dissent from the opinion, still more than those who hold it. If the opinion is right, they are deprived of the opportunity of exchanging error for truth: if wrong, they lose, what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error.

In essence, Mill argues that, contrary to the annoyance we may instinctively feel, we should in fact be grateful for having our cherished views challenged, not least because it can help clarify and update our views.

Today, Mill’s argument can be further bolstered by a host of well-documented psychological biases. We now know that we are all vulnerable to confirmation bias, the bandwagon effect, groupthink, etc. These biases make it all too easy for us to deceive ourselves into thinking that we already possess the whole truth, although we most certainly do not. Consequently, if we want to hold reasonable beliefs, we should welcome and appreciate those who challenge the pitfalls of our groupish minds — pitfalls that we may otherwise be content to embrace.

After all, how can we know that our attempts to protect ourselves from hearing views that we dislike are not essentially unconscious attempts to protect our own confirmation bias? Free and open conversation is our best debiasing tool.

Strategic reasons

An altogether different argument in favor of honoring principles of free speech is that a failure to do so is strategically unwise. Indeed, as free-speech defender Noam Chomsky argues, there are several reasons to consider the suppression of free speech a tactical error if we are trying to create a better society.

First, reinforcing a norm of suppressing speech can have the unintended consequence of leading all sides, and perhaps eventually governments, to consider it increasingly legitimate to suppress certain forms of speech. “If they can suppress speech, why shouldn’t we?” The effects of such a regression would be worst for those who lack power.

Second, seeking to suppress speech is likely to backfire and to strengthen the other side, by making that side look more appealing than it in fact is — the suppressed becomes alluring — and by making the side that seeks to suppress speech look unreasonable, as though they are unable to muster a defense of their views.

When people try to make us do something, we tend to react negatively and to distance ourselves, even if we agreed with them from the outset (cf. psychological reactance). This is another strong reason against suppressing free expression, and against giving people the impression that they are not allowed to discuss or think certain things. It is human nature to react by asserting one’s freedom in defiance, even if it means voting for a president that one would otherwise have voted against. (See also Cialdini, 1984/2021, ch. 7.)

(Weak norms of free expression are thus a democratic problem in more than one way: they can keep citizens from voting in accordance with their ideal preferences both by making them ill-informed and by provoking votes of defiance.)

Steven Pinker has made a related point: if we place certain issues beyond the bounds of acceptable discourse, many people are likely to seek out discussion of these issues from unsavory sources, which can in turn put people on a path toward extreme and uncompassionate views. This parallels one of the main arguments made against the prevailing drug laws of today: such restrictions merely push the whole business into an underground market where people get dangerously polluted goods.

As Ayishat Akanbi eloquently put it (paraphrased slightly): if we suppress ideas, they will “operate with insidious undertones”, and we in effect “push people into the arms of extremism.”


Compassion

I will let my defense of compassion be even briefer still, as I have already made an elaborate defense of it in my book Suffering-Focused Ethics: Defense and Implications.

The short case is this: Suffering, especially the most intense suffering, truly matters. It is truly bad and truly worth preventing. Consequently, a desire to alleviate intense suffering is simply the most sensible response. Only a failure to connect with the reality of suffering can leave us apathetic. That is the simplest and foremost reason why compassion is of paramount importance.

Another reason to be compassionate, including in the broader sense of being kind and understanding, is that such an attitude has great instrumental benefits at the level of our communication and relations: it fosters trust and cooperation, which in turn enables win-win interactions.

However, to say that we should be compassionate is not to say that we should be game-theoretically naive in the sense of kindly allowing others to walk all over us. Compassion is wholly compatible with, and indeed mandates, tit for tat and assertiveness in the face of transgressions.

Lastly, it is worth emphasizing that compassion and empathy are not partisan values. Empathy is a human universal, and compassion has been considered a virtue in all major world traditions, as well as in most political movements, including political conservatism. Indeed, people of all political orientations score high on the harm/care dimension in Jonathan Haidt’s moral foundations framework. It really is a value on which people show uniquely wide agreement, at least on reflection. When they are not on Twitter.

Compassion and free speech: Complementary values

As noted above, the two values I defend here do not strictly imply each other, at least in some purely theoretical sense. But they are strongly complementary in many regards.

How free speech aids compassion

Compassion and the compassionate project can be aided by free speech in various ways. For example, to alleviate and prevent suffering effectively with our limited resources, we need to be able to discuss controversial ideas. We need to be able to discuss and measure different values and priorities against each other, including values that many people consider sacred and hence offensive to discuss.

As a case in point, in my book Suffering-Focused Ethics, I defend the moral primacy of reducing extreme suffering, even above other values that many people may consider sacred, and I further discuss the difficult question of which causes we should prioritize so as to best reduce extreme suffering. My arguments will no doubt be deeply offensive and infuriating to many, and I believe a substantial number of people would like to see my ideas suppressed if they could. This is not, of course, unique to my views: all treatises and positions on ethics are bound to be deemed too offensive and too dangerous by some.

This highlights the importance of free speech for ethics in general, and for the project of reducing suffering in particular. To conduct this most difficult conversation about what matters and what our priorities should be, we need a culture that allows, indeed promotes, this conversation — not a culture that stifles it. People who want to reduce suffering should thus have a strong interest in preserving and advancing free speech norms.

Another way in which free speech aids compassion is that, put simply, encouraging the free expression of, and listening to, each others’ underlying grievances can help us build mutual understanding, and in turn enable us to address our problems in cooperative ways. As Noam Chomsky notes in the context of hateful ideologies:

If you have a festering sore, the cure is not to irritate it, but to find out what its roots are and where it comes from, and to deal with those. Racist and other such speech is a festering sore. By silencing it, you simply amplify its appeal, and even lend it a veneer of respectability, as in fact we’ve seen very clearly in the last couple of years. And what has to be done, plainly, is to confront it, and to ask where it comes from, and to try to deal with the roots of such ideas. That’s the way to extirpate the ugliness and evil that lies behind such phenomena.

Human rights activist Deeyah Khan similarly argues that a root source of white supremacy is often a sense of fear and lack of opportunity, not inherent evil or apathy. She contends that the best solution to extremist ideologues is generally to engage in conversation and to seek to understand, not to shut down the conversation. (I recommend watching Khan’s documentary White Right: Meeting the Enemy.)

So while compassion per se does not directly imply free speech at some purely theoretical level, I would argue that a sophisticated and fully extrapolated version of compassion and the compassionate project does necessitate strong norms of free and open expression at the practical level.

How compassion aids free speech

One of the ways in which compassion can aid open conversation is exemplified in Deeyah Khan’s documentary mentioned above: she sits down and listens to white nationalists, seeking to understand them with compassion, which allows them to identify and express their own underlying issues, such as feelings of fear, vulnerability, and unworthiness. Such things can be difficult to share in apathetic and antagonistic environments, be they the macho ingroup or the angry outgroup. “Fuck you, racist” does not quite invite a response of “I’m afraid and hurting” the way “How are you feeling, and what really motivates you?” does. On the contrary, it probably just serves to reinforce the facade that conceals the pain.

We may not usually think of conditions that further the sharing of our underlying worries and vulnerabilities as a matter of free speech, perhaps because we all help perpetuate norms that suppress honesty about these things. But if free speech norms are essentially about enabling us to dare express our honest perspectives, then our de facto suppression of our inmost worries and vulnerabilities is indeed a free speech issue — and a rather consequential one at that (as I think Khan’s White Right makes clear). Compassion may well be the best remedy we have to our truth-subduing culture of suppressing our core worries and vulnerabilities.

A related way in which compassion, specifically the virtue of charitable interpretation, is important for free speech is, quite simply, that we suffocate free speech in its absence. If people hold back from expressing a nuanced view because they know that they will be strawmanned and vilely attacked based on bad-faith misinterpretations, then the state of free expression will be poor indeed.

In contrast, free expression will flourish when we do the opposite: when everyone engages with the strongest version of their opponents’ view — i.e. steelmans it — so that people feel positively motivated to present nuanced views and arguments in the expectation of being critiqued in good faith.

That, needless to say, is far from the state we are currently in.

Why we fail so spectacularly today

We are currently witnessing a primitive tribal dynamic exacerbated by the fact that we inhabit a treacherous environment to which we are not yet adapted, neither biologically nor culturally. I am speaking, of course, of the environment of screen-to-screen interaction.

Yet we should be clear that values and politics were never easy spheres to navigate in the first place. They have always been minefields. Politics is a notorious mind-killer for deep evolutionary reasons, and our political behavior is often more about signaling our group affiliations than it is about creating good policies. This is true not just of the “other side”; it is true of all of us, though we remain largely unaware of and self-deceived about it.

Thus, our predicament is that most of us care deeply about loyalty signaling, and such signaling has now become dangerously inflated. Moreover, we often use beliefs, ostensibly all about tracking reality, as ornaments that signal our group loyalty.

A hostage crisis instilling false assumptions

The two loose social currents I mentioned in the introduction can, I submit, be understood in this light. Specifically, values centered on empathy and compassion have become an ornament of sorts that signals loyalty to one side, while values centered on free speech have become a loyalty signal to another side. To be clear, I am not saying that these values are merely ornaments; they clearly are not. A value can be an ornament displayed with pride and be sincerely held at the same time. Yet our natural inclination to signal group loyalty can lead us to only express our support for one of these values, and to underemphasize the “opposing” value, even if we in fact do favor it.

In this way, the values of compassion and free speech have to some extent become hostages in a primitive tribal game, which in turn gives the false impression that there must be some deep conflict between these values, and that people must choose one or the other, as opposed to these values being, as I have argued, strongly complementary (with occasional and comparatively minor tensions).

The upshot is that supporters of free speech may feel nudged to display insensitivity in order to signal their loyalty and consistency, while supporters of anti-discrimination may feel nudged to oppose free speech.

Uncharitable claims beyond belief

A sad feature of this dynamic, and something that helps fuel it further, is how incredibly uncharitable the outer flanks of these two tribal currents are toward the other side.

“The PC-policing SJWs don’t care about the hard facts and just want to suppress them.”

“The free speech bros don’t care about minorities and just want to oppress them.”

To say that people are failing to steelman here would be quite the understatement. Indeed, this barely even qualifies as a strawman. It is more like the scream-man version of the other side: the worst, most scary version of the other side’s position that one could come up with. And this scream-man is repeatedly rehearsed in the partitioned echo halls of Twitter to the extent that people start believing these preposterously uncharitable narratives about the Scary Other.

It is a tragedy of the commons: people are gleefully rewarded by their ingroup each time they promulgate the scream-man representation of the other side, and so promulgating it feels right to individuals in these respective groups. But in the bigger picture, it just leaves everyone much worse off.

Distributions and the importance of self-criticism

To be sure, there are serious problems with significant numbers of people who conform too closely to the cartoon descriptions above. But a crucial point is that we must think in terms of statistical distributions. Specifically, the most loud-mouthed and scary two percent of the “other side” — a minority that tends to get a disproportionate amount of attention — should not be taken to represent everyone on that “side”, let alone its most reasonable representatives.

That being said, it is also true that many people on both sides tend to be bad at criticizing the harmful tendencies of their own “team”. There does indeed appear to be a tendency among certain defenders of free speech to fail to criticize and condemn those who discriminate against minorities. Likewise, there really does seem to be a tendency among certain progressives to fail to criticize and condemn those who suppress discussions of contentious issues.

This failure to speak out against the worst elements of one’s “own side”, side A, with sufficient force can create the impression, on side B, that most people on side A actually agree with these worst elements. That is how damning it is that we fail to criticize the transparent excesses of our ingroup in clear terms.

At cross-purposes

A problem with our failure to be charitable and to think in terms of distributions is that people end up talking past each other: both sides tend to criticize a strawman version of the other side based on the rabid tail-end elements of that side, which most people on the other side really do disagree with (although they may, as mentioned above, fail to express this disagreement with sufficient clarity).

This frequently results in debates with two sides that are in large part talking at cross-purposes: one side mostly defends free speech, while the other side mostly defends anti-discrimination, as though these were necessarily in great conflict. The failure to explore the compatibility and mutual complementarity of these values is striking.

The perils of screen-to-screen interaction

As noted above, our current mode of interaction only aggravates our political imbecility. When engaged in face-to-face interaction, we naturally relate to and empathize with the person before us, and we have a strong interest in keeping our interaction cordial so as to prevent it from escalating into conflict.

In screen-to-screen interaction, by contrast, our circuits for interpersonal interaction are all but inert, as we find ourselves shielded off from salient feedback and danger. Social media is road rage writ large. It is a road rage that renders it extra difficult to be charitable, and which renders it far more tempting to paint the outgroup in a bad light than it could ever be in a face-to-face environment, where preposterous strawmen would be called out and challenged in real time.

As a study on political polarization on Twitter put it:

Many messages contain sentiments more extreme than you would expect to encounter in face-to-face interactions, and the content is frequently disparaging of the identities and views associated with users across the partisan divide.

How can we reduce these unfortunate tendencies? The age of social media calls for new norms.

Better norms for screen communication

Human culture has adapted to technological changes before, and it seems that we have no choice but to do the same today, in the face of our current state of cultural maladaptation. The following are five heuristics, or norms, that I think are likely to be useful in this regard.

1. The face-to-face heuristic

In light of the above, it seems sensible to adopt the precept of communicating online in roughly the same way we would communicate face-to-face. Our skills in face-to-face interaction have deep biological and cultural bases, and hence this heuristic is a cheap way to tap into a well-honed toolbox for functional human communication.

One effect of employing this heuristic will likely be a reduction of sarcastic and taunting comments. Such comments are rarely useful for taking our conversations to the next level, as we tend to realize face-to-face.

2. The nuance heuristic

As I argue in my defense of nuance, much of the tension that we see today could likely be lessened greatly if we adopted more nuanced perspectives, such as by acknowledging grains of truth in different viewpoints, and by representing beliefs in terms of graded credences rather than posturing with overconfident all-or-nothing certainties.

3. The steelman heuristic

I have already mentioned this, but it can hardly be stressed enough: we must strive to be charitable and to steelman the views of our opponents, especially since our road-rage-behind-the-screen predicament makes it easier than ever to do the opposite.

Whenever we summarize and criticize the views of the other side, we should stop and ask ourselves: Is this really the most honest statement of their views that I can muster, let alone the strongest one? If I think their view is painfully stupid, do I really fully understand it? Do I really know what it entails and the best arguments that support it?

4. Compassion for the outgroup

As noted above, compassion really is a consensus value, if ever there were one. The disagreement mostly arises when it comes to which individuals we should extend our compassion to. Both of the notional “sides”, or social currents, described here suffer from selective compassion: they generally fail to show sufficient compassion and respect for the other side, which renders productive conversation difficult. And this point needs to be stressed with unique fervor today, as screens are an all too powerful and insidious catalyst for outgroup apathy.

5. Criticizing the ingroup

Condemning the excesses of one’s (vaguely associated) ingroup is also uniquely important today. Why? Because we now see large numbers of people behaving badly on social media, and our intuitions are statistically illiterate: we do not intuitively understand how a faction endorsing a certain view or behavior can simultaneously be large in number and constitute but a small minority of a given group. The world is big, and we mostly do not understand that (at an intuitive level).
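The point about statistical illiteracy can be made concrete with a small back-of-the-envelope calculation; the figures below are purely hypothetical:

```python
# Hypothetical figures: a social platform with 300 million users,
# of whom just 2 percent loudly endorse some extreme position.
total_users = 300_000_000
extreme_share = 0.02

extreme_users = int(total_users * extreme_share)
print(f"{extreme_users:,}")          # 6,000,000 -- a huge absolute number
print(f"{extreme_share:.0%} share")  # yet only a 2% minority of the group
```

Six million people can fill our feeds many times over while still being a two percent sliver of the group, which is exactly the intuition that, as noted above, we tend to lack.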

Only by publicly countering the excesses of our “ingroup” can we make it clear to the other side — and perhaps also to our own side — that the extremists truly are a disapproved minority. Such ingroup criticism seems paramount if we are to mitigate the ongoing trend of polarization.


We have created a polarized online society in which people can feel pushed toward a needlessly narrow set of values — compassion or free speech, choose one! We are pushed in this way, not by totalitarian laws, but by modes of communication to which we are not yet adapted, and which we are navigating with patently defunct norms.

Norms are often more important than laws. After all, most of us can think of judgments from our peers that would be worse than a minor prison sentence. Hence, totalitarian laws are not required for free expression to be stifled into a de facto draconian state. The notion that harshly punitive norms do not restrict speech in costly ways is naive.

Sure, we should be free to judge others based on the things they say. But just how harshly should we judge people for discussing controversial views? And do we understand the risks and the strategic costs associated with such judgments, let alone the risks of trying to suppress certain viewpoints? If we place ourselves in opposition to free speech, and then give people the ultimatum of siding either with “us” or with “them”, a lot of people are going to choose the other side, even if that side has features they find genuinely worrying.

I have tried to argue that the choice between free speech and compassion is a false one. It really is possible to chart a balanced middle path of a free and compassionate society.

Physics Is Also Qualia

In this post, I seek to clarify what I consider to be some common confusions about consciousness and “physics” stemming from a failure to distinguish clearly between ontological and epistemological senses of “physics”.

Clarifying Terms

Two senses of the word “physics” are worth distinguishing. There is physics in an ontological sense: roughly speaking, the spatio-temporal(-seeming) world that in many ways conforms well to our best physical theories. And then there is physics in an epistemological sense: a certain class of models we have of this world, the science of physics.

“Physics” in this latter, epistemological sense can be further divided into 1) the physical models we have in our minds, versus 2) the models we have external to our minds, such as in our physics textbooks and computer simulations. Yet it is worth noting that, to the extent we ourselves have any knowledge of the models in our books and simulations, we only have this knowledge by representing it in our minds. Thus, ultimately, all the knowledge of physical models we have, as subjects, is knowledge of the first kind: as appearances in our minds.*

In light of these very different senses of the term “physics”, it is clear that the claim that “physics is also qualia” can be understood in two very different ways: 1) in the sense that the physical world, in the ontological sense, is qualia, or “phenomenal”, and 2) that our models of physics are qualia, i.e. that our models of physics are certain patterns of consciousness. The first of these two claims is surely the most controversial one, and I shall not defend it here; I explore it here and here.

Instead, I shall here focus on the latter claim. My aim is not really to defend it, as I already briefly did so above: all the knowledge of physics we have, as subjects, ultimately appears as experiential patterns in our minds. (Although talk of the phenomenology of, say, operations in Hilbert spaces admittedly is rare.) I take this to be obvious, and I hit an impasse with anyone who disagrees. My aim here is rather to clarify some confusions that arise due to a lack of clarity about this point, and due to conflations of the two senses of “physics” described above.

The Problem of Reduction: Epistemological or Ontological?

I find it worth quoting the following excerpt from a Big Think interview with Sam Harris. Not because there is anything atypical about what Harris says, but rather because I think he here clearly illustrates the prevailing lack of clarity about the distinction between epistemology and ontology in relation to “the physical”.

If there’s an experiential internal qualitative dimension to any physical system then that is consciousness. And we can’t reduce the experiential side to talk of information processing and neurotransmitters and states of the brain […]. Someone like Francis Crick said famously you’re nothing but a pack of neurons. And that misses the fact that half of the reality we’re talking about is the qualitative experiential side. So when you’re trying to study human consciousness, for instance, by looking at states of the brain, all you can do is correlate experiential changes with changes in brain states. But no matter how tight these correlations become that never gives you license to throw out the first person experiential side. That would be analogous to saying that if you just flipped a coin long enough you would realize it had only one side. And now it’s true you can be committed to talking about just one side. You can say that heads being up is just a case of tails being down. But that doesn’t actually reduce one side of reality to the other.

Especially worth resting on here is the statement “half of the reality we’re talking about is the qualitative experiential side.” Yet is this “half of reality” an “ontological half” or an “epistemological half”? That is, is there a half of reality out there that is part phenomenal, and part “non-phenomenal” — perhaps “inertly physical”? Or are we rather talking about two different phenomenal descriptions of the same thing, respectively 1) physico-mathematical models of the mind-brain (and these models, again, are also qualia, i.e. patterns of consciousness), and 2) all other phenomenal descriptions, i.e. those drawing on the countless other experiential modalities we can currently conceive of — emotions, sounds, colors, etc. — as well as those we can’t? I suggest we are really talking about two different descriptions of the same thing.

A similar question can be raised in relation to Harris’ claim that we cannot “reduce one side of reality to the other.” Is the reduction in question, or rather failure of reduction, an ontological or an epistemological one? If it is ontological, then it is unclear what this means. Is it that one side of reality cannot “be” the other? This does not appear to be Harris’ view, even if he does tacitly buy into ontologically distinct sides (as opposed to descriptions) of reality in the first place.

Yet if the failure of reduction is epistemological, then there is in fact little unusual about it, as failures of epistemological reduction, or reductions from one model to another, are found everywhere in science. In the abstract sciences, for example, one axiomatic system does not necessarily reduce to another; indeed, we can readily create different axiomatic systems that not only fail to reduce to each other but actively contradict each other. Hence we cannot derive all of mathematics, broadly construed, from a single axiomatic system.

Similarly, in the empirical sciences, economics does not “reduce to” quantum physics. One may object that economics does reduce to quantum physics in principle, yet it should then be noted that 1) the term “in principle” does an enormous amount of work here, arguably about as much as it would have to do in the claim that “quantum physics can explain consciousness in principle” — after all, physics and economics invoke very different models and experiential modalities (economic theories are often qualitative in nature, and some prominent economists have even argued they are primarily so). And 2) a serious case can be made against the claim that even all the basic laws found in chemistry, the closest neighbor of physics, can be derived from fundamental physical theories, even in principle (see e.g. Berofsky, 2012, chap. 8). This case does not rest on there being something mysterious going on between our transition from theories of physics to theories of chemistry, nor that new fundamental forces are implicated, but merely that our models in these respective fields contain elements not reducible, even in principle, to our models in other areas.

Thus, at the level of our minds, we can clearly construct many different mental models which we cannot reduce to each other, even in principle. Yet this merely says something about our models and epistemology. It hardly comprises a deep metaphysical mystery.

Denying the Reality of Consciousness

The fact that the world conforms, at least roughly, to description in “physical” terms seems to have led some people to deny that consciousness in general exists. Yet this, I submit, is a fallacy: the fact that we can model the world in one set of terms which describe certain of its properties does not imply that we cannot describe it in another set of terms that describe other properties truly there as well, even if we cannot derive one from the other.

By analogy, consider again physics and economics: we can take the exact same object of study — say, a human society — and describe aspects of it in physical terms (with models of thermodynamics, classical mechanics, electrodynamics, etc.), yet we cannot from any such description or set of descriptions meaningfully derive a description of the economics of this society. It would clearly be a fallacy to suggest that this implies facts of economics cannot exist.

Again, I think the confusion derives from conflating epistemology with ontology: “physics”, in the epistemological sense of “descriptions of the world in physico-mathematical terms”, appears to encompass “everything out there”, and hence, the reasoning goes, nothing else can exist out there. Of course, in one sense, this is true: if a description in physico-mathematical terms exhaustively describes everything out there, then there is indeed nothing more to be said about it — in physico-mathematical terms. Yet this says nothing about the properties of what is out there in other terms, as illustrated by the economics example above. (Another reason some people seem to deny the reality of consciousness, distinct from conflation of the epistemological and the ontological, is “denial due to fuzziness”, which I have addressed here.)

This relates, I think, to the fundamental Kantian insight on epistemology: we never experience the world “out there” directly, only our own models of it. And the fact that our physical model of the world — including, say, a physical model of the mind-brain of one’s best friend — does not entail other phenomenal modalities, such as emotions, by no means implies that the real, ontological object out there which our physical model reflects, such as our friend’s actual mind-brain, does not instantiate these things. That would be to confuse the map with the territory. (Our emotional model of our best friend does, of course, entail emotions, and it would be just as much of a fallacy to say that, since such emotional models say nothing about brains in physical terms, descriptions of the latter kind have no validity.)

Denials of this sort can have serious ethical consequences, not least since the most relevant aspects of consciousness, including suffering, fall outside descriptions of the world in purely physical terms. Thus, if we insist that only such physico-mathematical descriptions truly describe the world, we seem forced to conclude that suffering, along with everything else that plausibly has moral significance, does not truly exist. Which, in turn, can keep us from working toward a sophisticated understanding of these things, and from creating a better world accordingly.


* And for this reason, the answer to the question “how do you know you are conscious?” will ultimately be the same as the answer to the question “how do you know physics (i.e. physical models) exist?” — we experience these facts directly.

In Defense of Nuance

The world is complex. Yet most of our popular stories and ideologies tend not to reflect this complexity. Which is to say that our stories and ideologies, and by extension we, tend to have insufficiently nuanced perspectives on the world.

Indeed, falling into a simple narrative through which we can easily categorize and make sense of the world — e.g. “it’s all God’s will”; “it’s all class struggle”; “it’s all the Muslims’ fault”; “it’s all a matter of interwoven forms of oppression” — is a natural and extremely powerful human temptation. And something social constructivists get very right is that this narrative, the lens through which we see the world, influences our experience of the world to an extent that is difficult to appreciate.

All the more important, then, that we suspend our urge to embrace simplistic narratives through which to (mis)understand the world. In order to navigate wisely in the world, we need to have views that reflect its true complexity — not views that merely satisfy our need for simplicity (and social signaling; more on this below). For although simplicity can be efficient, and to some extent is necessary, it can also, when too much relevant detail is left out, be terribly costly. And relative to the needs of our time, I think most of us naturally err on the side of being expensively unnuanced, painting a picture of the world with far too few colors.

Thus, the straightforward remedy I shall propose and argue for here is that we need to control for this. We need to make a conscious effort to gain more nuanced perspectives. This is necessary as a general matter, I believe, if we are to be balanced and well-considered individuals who steer clear of self-imposed delusions. Yet it is also necessary for our time in particular. More specifically, it is essential in addressing the crisis that human conversation seems to be facing in the Western world today — a crisis that largely seems to be the result of an insufficient amount of nuance in our perspectives.

Some Remarks on Human Nature

There are certain facts about the human condition that we need to put on the table and contend with. These are facts about our limits and fallibility that should give us all pause about what we think we know — both about the world in general and about ourselves in particular.

For one, we have a whole host of well-documented cognitive biases. There are far too many for me to list them all here, yet some of the most important ones are: confirmation bias (the tendency of our minds to search for, interpret, and recall information that confirms our pre-existing beliefs); wishful thinking (our tendency to believe what we wish were true); and overconfidence bias (our tendency to have excessive confidence in our own beliefs — in one study, people who reported being 100 percent certain about their answer to a question were correct less than 85 percent of the time). And while we can probably all recognize these pitfalls in other people, it is much more difficult to realize and admit that they afflict ourselves as well. In fact, our reluctance to realize this is itself a well-documented bias, known as the bias blind spot.

Beyond acknowledging that we have fallible minds, it is also helpful to understand the deeper context that has given rise to much of this fallibility, and which continues to fuel it, namely our social context — both the social context of our evolutionary history as well as that of our present condition. We humans are deeply social creatures, which shows at every level of our design, including the level of our belief formation. And we need to be acutely aware of this if we are to form reasonable beliefs with minimal amounts of self-deception.

Yet not only are we social creatures, we are also, by nature, deeply tribal creatures. As psychologist Henri Tajfel showed, one need only assign one group of randomly selected people the letter “A” and another randomly selected group the letter “B” in order for a surprisingly strong in-group favoritism to emerge. This method for studying human behavior is known as the minimal group paradigm, and it shows something about us that history should already have taught us a long time ago: that human tribalism is like gasoline just waiting for a little spark to be ignited.

Our social and tribal nature has implications for how we act and what we believe. It is, for instance, largely what explains the phenomenon of groupthink: our natural tendency toward (in-)group conformity leads to a lack of dissenting viewpoints within a group, which in turn leads to poor decisions by the individuals in it.

Indeed, our beliefs about the world are far more socially influenced than we tend to realize, not just in that we get our views from others around us, but also in that we often believe things in order to signal to others that we possess certain desirable traits, or to signal that we are loyal to them. This latter way of thinking about our beliefs is quite at odds with how we prefer to think about ourselves, yet the evidence for this unflattering view is difficult to deny at this point.

As authors Robin Hanson and Kevin Simler argue in their recent book The Elephant in the Brain, we humans are strategically self-deceived about our own motives, including when it comes to what motivates our beliefs. Beliefs, they argue, serve more functions than just the function of keeping track of what is true of the world. For while beliefs surely do serve this practical function, they also often serve a very different, very social function, namely to show others what kind of person we are and what kind of groups we identify with. This makes beliefs much like clothes, which have the practical function of keeping us warm while, for most of us, also serving the function of signaling our tastes and group affiliations. One of Hanson and Simler’s main points is that we are not consciously aware that our beliefs have these distinct functions, and that there is an evolutionary reason for this: if we realized (clearly) that we believe certain things for social reasons, and if we realized that we display our beliefs with overconfidence, we would be much less convincing to those we are trying to convince and impress.

Practical Implications of Our Nature

The preceding survey of the pitfalls and fallibilities of our minds is far from exhaustive, of course, but it shall suffice for our purposes. The bottom line is that we are creatures who want to see our pre-existing beliefs confirmed, and who tend to display excessive confidence in these beliefs. We do this in a social context, and many of the beliefs we hold serve social rather than epistemic functions, which include the tribal function of showing others how loyal we are to certain groups, as well as how worthy we are as friends and mates. In other words, we have a natural pull to impress our peers, not just with our behavior but also with our beliefs. And for socially strategic reasons, we are quite blind to the fact that we do this.

So what, then, is the upshot of all of this? A general implication, I submit, is that we have a lot to control for if we aspire to have reasonable beliefs, and our own lazy mind, with all its blind spots and craving for simple comfort, is not our friend in this endeavor. The fact that we are naturally biased and tendentious gives us reason to doubt our own beliefs and motives. More than that, it gives us reason to actively seek out the counter-perspectives and nuance that our confirmation bias so persistently struggles to keep us from accessing.

Reducing Our Biases

If we are to form accurate and nuanced perspectives, it seems helpful to cultivate an awareness of our biases, and to make an active effort to curb them.

Countering Confirmation Bias

To counteract our confirmation bias, it is critical to seek out viewpoints and arguments that challenge our pre-existing beliefs. We all cherry-pick data a little bit here and there in favor of our own position, and so by hearing from people with opposing views, and by examining their cherry-picked data and their particular emphases and interpretations, we will, in the aggregate, tend to get a more balanced picture of the issues at hand.

Important in this respect is that we engage with these other views in a charitable way: by assuming good faith on the part of the proponents of any position; by trying to understand their view as well as possible; and by then engaging with the strongest possible version of that position — i.e. the steelman rather than the strawman version of it.

Countering Wishful Thinking

Our propensity for wishful thinking should make us skeptical of beliefs that are convenient and which match up with what we want to be true. If we want there to be a God, and we believe that there is one, then this should make us at least a little skeptical of this convenient belief. By extension, our attraction toward the wishful also suggests that we should pay more attention to information and arguments that are inconvenient or otherwise contrary to what we wish were true. Do we believe that the adoption of a vegan lifestyle would be highly inconvenient for us personally? Then we should probably expect to be more than a little biased against any argument in its favor; and if we suspect that the argument has merit, we will likely be inclined to ignore it altogether rather than giving it a fair hearing.

Countering Overconfidence Bias

When it comes to reducing our overconfidence bias, intellectual humility is a key virtue: admitting, and speaking as though, we have a limited and fallible perspective on things. In this respect, it also helps to be aware of the social motives that may be driving our overconfidence much of the time, such as the motives of convincing others and of signaling our traits and loyalties. These social functions of confidence give us reason to update away from bravado and toward being more measured.

Countering In-Group Conformity

As hinted above, our beliefs are subject to in-group favoritism, which highlights the importance of being (especially) skeptical of the beliefs we share with groups that we affiliate closely with, while being extra charitable toward the beliefs held by the notional out-group. Likewise, it is worth being aware that our minds often paint the out-group in an unfairly unfavorable light, viewing them as much less sincere and well-intentioned than they actually are.

Thinking in Degrees of Certainty

Many of us have a tendency to express our views in a very binary, 0-or-1 fashion. We tend to be either clearly for something or clearly against it, be it abortion, efforts to prevent climate change, or universal health care. And it seems that what we express outwardly is generally much more absolutist, i.e. more purely 0 or 1, than what happens inwardly — underneath our conscious awareness — where there is probably more conflicting data than what we are aware of and allow ourselves to admit.

I have observed this pattern in conversations: people will argue strongly for a given position that they continue to insist on until, quite suddenly, they say that they accept the opposite conclusion. In terms of their outward behavior, they went from 0 to 1 quite rapidly, although it seems likely that the process that took place underneath the hood was much more continuous — a more gradual move from 0 to 1.

An extreme example of similar behavior found in recent events is that of Omarosa Manigault Newman, who was the Director of African-American Outreach for Donald Trump’s presidential campaign in 2016. She went from describing Trump in adulating terms, calling him “a trailblazer on women’s issues”, to being strongly against him and calling him a racist and a misogynist. It seems unlikely that this shift was based purely on evidence she encountered after she made her adulating statements. There probably was a lot of information in her brain that contradicted those flattering statements, but which she ignored and suppressed. And the reason why is quite obvious: she had a political motive. She needed to broadcast the message that Trump was a good person in order to further the campaign, and her own career tied to it. It was about signaling first, not truth-tracking.

The important thing to realize, of course, is that this applies to all of us. We are all inclined to be more like a politician than a scientist in many situations. In particular, we are all inclined to believe and express either a pure 0 or a pure 1 for social reasons.

Fortunately, there is a corrective for our tendency toward 0-or-1 thinking, which is to think in terms of credences along a continuum, ranging from 0 to 1. Thinking in these terms can help make our expressed beliefs more refined and more faithful to the potentially contradicting information found in our brains. Additionally, such graded thinking may help subvert the tribal aspect of our either-or thinking, by moving us away from a framework of binary polarity and instead placing us all in the same boat: the boat of degrees of certainty, in which the only thing that differs between us is our level of certainty in any given claim.
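As a toy illustration of such graded thinking (all numbers here are hypothetical), Bayes’ rule shows how a credence expressed as a probability can move gradually with evidence, rather than flipping between 0 and 1:

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: the posterior credence in a claim
    after observing one piece of evidence."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Start at a 50% credence; each piece of evidence is twice as
# likely to be observed if the claim is true as if it is false.
credence = 0.5
for _ in range(3):
    credence = update(credence, 0.8, 0.4)
    print(round(credence, 3))  # 0.667, then 0.8, then 0.889
```

The credence drifts toward 1 as evidence accumulates, yet never needs to be declared a flat 0 or 1 along the way — which is precisely the alternative to all-or-nothing posturing suggested above.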

Such an honest, more detailed description of one’s beliefs is not good for keeping groups divided by different beliefs. Indeed, it is good for the exact opposite: it helps us move toward a more open and sincere conversation about what we in fact believe and why, regardless of our group affiliations.

Different Perspectives Can Be Equally True

There are two common definitions of the term “perspective” that are quite different, yet closely related. One is “a mental outlook/point of view”, while the other is “the art of representing three-dimensional objects on a two-dimensional surface”. These definitions are related in that the latter can be viewed as a metaphor for the former: our particular perspective, the representation of the world we call our point of view, is in a sense a limited representation of a more complex, multi-dimensional reality — a representation that is bound to leave out a lot of information about the world at large. The best we can do, then, is to try to paint the canvas of our mind so as to make it as rich and informative as possible about the complex and many-faceted world we inhabit.

An important point for us to realize in our quest for more balanced and nuanced views, as well as for the betterment of human conversation, is that seemingly conflicting reports of different perspectives on the same underlying reality can all be true, as hinted by the following illustration:


The same object can have different reflections when viewed from different angles. Similarly, the same events can be viewed very differently by different people who each have their own unique dispositions and prior experiences. And these different views can all be true; John really does see X when he looks at this event, while Jane really does see Y. And, like the different reflections shown above, X and Y need not be incompatible. (A similar sentiment is reflected in the Jain doctrine of Anekantavada.)

And even when someone does get something wrong, they may nonetheless still be truthfully reporting the appearance of the world as it is revealed to them. For example, to many of us, it really does seem as though the lines in the following picture are not parallel, although they in fact are:


This is merely to state the obvious point that it is possible, indeed quite common, to be perfectly honest and wrong at the same time. This is worth keeping in mind when we engage with people who we think are obviously wrong: they usually think that they are right, and that we are obviously wrong — and perhaps even dishonest.

Another important point the visual illusion above hints at is that we should be careful not to confuse external reality with our representation of it. Our conscious experience of the external world is not, obviously, the external world itself. And yet we tend to speak as though it were.

This is no doubt an evolutionarily adaptive illusion, but it is an illusion nonetheless. All we ever inhabit is, in the words of David Pearce, our own world-simulation, a world of conscious experience residing in our head. And given that we all find ourselves stuck in — or indeed as — such separate, albeit mutually communicating bubbles, it is not so strange that we can have so many disagreements about what reality is like. All we have to go on is our own private, phenomenal cartoon model of each other and the world at large; a cartoon model that may get many things right, but which is also sure to miss a lot of important facts and phenomena.

Framing Shapes Our Perspective

From the vantage point of our respective world-simulations, we each interpret information from the external world with our own unique framing. And this framing partly determines how we will experience what we observe, as demonstrated by the following illustration, where one can change one’s framing so as to either see a duck or a rabbit:


Sometimes, as in the example above, our framing is readily alterable. In other cases, however, it can be more difficult to just switch our framing, as when it comes to how people with different life experiences will naturally interpret the same scenario. After all, we each experience the world in different ways, due to our unique biological dispositions and life experiences. And while differing outlooks are not necessarily incompatible, it can still be challenging to achieve mutual understanding between perspectives that are shaped by very different experiences and cultural backgrounds.

Acknowledging Many Perspectives Is Not a Denial of Truth

None of the above implies the relativistic claim that there are no truths. On the contrary, the above implies that it is itself a truth that different individuals can have different perspectives and experiences in reaction to the same external reality, and that it is possible for such differing perspectives to all have merit, even if they appear to be in tension with each other. This middle position — rejecting both the claim that there is only one valid perspective and the claim that there are no truths — is, I submit, the only reasonable one on offer.

The fact that there can be merit in a plurality of perspectives implies that, beyond conceiving of our credences along a continuum ranging from 0 to 1, we also need to think in terms of a diversity of continua in a more general sense if we are to gain a nuanced understanding that does justice to reality, including the people around us with whom we interact. More than just thinking in terms of shades of grey found in between the two endpoints of black and white, we need to think in terms of many different shades of many different colors.

At the same time, it is also important to acknowledge the limits of our understanding of other minds and of experiences that we have not had. This does not amount to some obscure claim about how we each have our own, wholly incommensurable experiences, and hence that mutual understanding between individuals with different backgrounds is impossible. Rather, it is simply to acknowledge that psychological diversity is real, which implies that we should be careful to avoid the so-called typical mind fallacy (i.e. the mistake of thinking that other minds work just like our own). More than that, admitting the limits of our understanding of others’ experiences is to acknowledge that at least some experiences just cannot be conveyed faithfully with words alone to those who have not had them. For example, most of us have never experienced extreme forms of suffering, such as the experience of being burned alive, and hence we are largely ignorant about the nature of such experiences.

However, this realization that we do not know what certain experiences are like is itself an important insight that expands and advances our outlook. For it at least helps us realize that both our own understanding and the range of experiences we are familiar with are quite limited. With this realization in mind, we can look upon a state of absolute horror and admit that we have virtually no understanding of just how bad it is, which I submit represents a significantly greater level of understanding than does beholding such a state of horror without acknowledging our lack of comprehension. The realization that we are ignorant itself constitutes knowledge of sorts — the kind of knowledge that makes us rightfully humble.

Grains of Truth in Different Perspectives

Even when two different perspectives are in conflict with each other, this does not imply that they are both entirely wrong, as there can still be significant grains of truth in both of them. Most of today’s popular narratives make a wide range of claims and arguments, and even if not all of these stand up to scrutiny, many of them arguably do. And part of being charitable is to seek out such grains of truth in positions that one does not agree with. This can also help give us a better sense of which realities and plausible claims might motivate people to support (what we consider) misguided views, and thus help advance mutual understanding.

As mentioned earlier, it is possible for different perspectives to support what seem to be very different positions on the same subject without necessarily being wrong, provided they look through different lenses, in different directions. Indeed, different perspectives on the same issue are often merely the result of different emphases that each focus on certain framings and sets of data rather than others. This is, I believe, a common pattern in human conversation: different views on the same subject are all mostly true, yet each of them constitutes only a small piece of the full picture — a pattern that further highlights the importance of seeking out different perspectives.

Having made a general case for nuance, let us now turn our gaze toward our time in particular, and why it is especially important to actively seek to be nuanced and charitable today.

Our Time Is Different: The Age of Screen Communication

Every period in history likely sees itself as uniquely special. Yet in terms of how humanity communicates, our time does appear to be genuinely unique. Never before in history has human communication been so screen-based as it is today, which has significant implications for how and what we communicate.

Our brain seems to process communication through a screen in a very different way than it processes face-to-face communication. Writing a message in a Facebook group consisting of a thousand people does not, for most of us, feel remotely the same as delivering an equivalent speech in front of a live audience of a thousand people. And a similar discrepancy between the two forms of communication is found when we interact with just a single person. This is no wonder. After all, communication through a screen consists of a string of black and white symbols. Face-to-face interaction, in contrast, is composed of multiple streams of information; we read off important cues from a person’s face and posture, as well as from the tone and pace of their voice.

All this information provides a much more comprehensive — one might say more nuanced — picture of the state of mind of the person that we are interacting with. We get the verbal content of the conversation (as we would through a screen), plus a ton of information about the emotional state of the person who communicates. And beyond being informative, this information also serves the purpose of making the other person relatable. It makes the reality of their individuality and emotions almost impossible to deny, which is much less true when we communicate through a screen.

Indeed, it is as though these two forms of communication activate quite different sets of brain circuits — not only in that we communicate via a much broader bandwidth and likely see each other as more relatable when we communicate face-to-face, but also in that face-to-face communication naturally motivates us to be civil and agreeable. When we are in the direct physical presence of someone else, we have a strong interest in keeping things civil enough to allow our co-existence in the same physical space. Yet when we interact through a screen, this is no longer a necessity.

These differences between the two forms of communication give us reason to try to be especially nuanced when communicating through screens, not least because written communication through a screen makes it easier than ever before to paint the out-group antagonists whom we interact with in an unreasonably unfavorable light.

Indeed, our modern means of communication arguably make it easier than ever before to not interact with the out-group at all, as the internet has made it possible for us to diverge into our own respective echo chambers to an extent not possible in the past. It is thus easy to end up in communities in which we continuously echo data that supports our own narrative, which ultimately gives us a one-sided and distorted picture of reality. And while it may also be easier than ever before to find counter-perspectives if we were to look for them, this is of little use if we mostly find ourselves collectively indulging in our own in-group confirmation bias, as we often do. For instance, feminists may find themselves mostly informing each other about how women are being discriminated against, while men’s rights activists may disproportionately share and discuss ways in which men are being discriminated against. And so by joining only one of these communities, one is likely to end up with a skewed and insufficiently nuanced picture of reality.

With all the information we have reviewed thus far in mind, let us now turn to some concrete examples of heated issues that divide people today, and where more nuanced perspectives and a greater commitment to being charitable seem desperately needed. I should note that, given the brevity of the following remarks, what I write here on these issues is itself bound to fail to express a highly nuanced perspective, as that would require a longer treatment. Nonetheless, the following brief remarks will at least gesture at some ways in which we can try to be more nuanced about these topics.

Sex Discrimination

As hinted above, there are different groups that seem to tell very different stories about the state of sex discrimination in our world today. On the one hand, there are feminists who seem to argue that women generally face much more discrimination than men, and on the other, there are men’s rights activists who seem to argue that men are, at least in some parts of the world, the sex that generally faces more discrimination. These two claims surely cannot both be right, can they?

If one were to define sex discrimination in terms of some single general measure, a “General Discrimination Factor”, then these two claims could not both be right. Yet if one instead talks about concrete forms of discrimination, it is entirely possible, and indeed clearly the case, that women face more discrimination than men in some respects, while men face more discrimination in other respects. And it is arguably also more fruitful to talk about such concrete cases than it is to talk about discrimination “in general”. (In response to those who think that it is obvious that women face more discrimination in every domain, I would recommend watching the documentary The Red Pill, and for a more academic treatment, reading David Benatar’s The Second Sexism.)

For example, it is a well-known fact that women have, historically, been granted the right to vote much later than men have, which undeniably constitutes a severe form of discrimination against women. Similarly, women have also historically been denied the right to pursue a formal education, and they still are in many parts of the world. In general, women have been denied many of the opportunities that men have had, including access to professions in which they were clearly more than competent to contribute. These are all undeniable facts about undeniably serious forms of discrimination.

However, tempting as the inference may be, none of this implies that men have not also faced severe discrimination in the past, nor that they are free from such discrimination today. For example, it is generally only men who have been subject to military conscription, i.e. a forced duty to enlist in the military. Historically, as well as today, men have — in vastly disproportionate numbers — been forced by law to join the military and to go to war, often without returning. (As a side note, it is worth noting that many feminists have criticized conscription.)

Thus, on a global level, it is true to say that, historically as well as today, women have generally faced more discrimination in terms of their rights to vote and to pursue an education, as well as in their professional opportunities, while men have faced more discrimination in terms of state-enforced duties.

Different forms of discrimination against men and women are also present at various other levels. For example, in one study, the same job application was sent to different scientists, with half of the applications bearing a female name and the other half a male name; the “female applicants” were generally rated as less competent, and the scientists were willing to offer the “male applicants” salaries more than 14 percent higher.

The same general pattern is reported by those who have conducted a controlled experiment in being a man and a woman from the “inside”, namely transgender men (those who have transitioned from being a woman to being a man). Many of these men report being viewed as more competent after their transition, as well as being listened to more and interrupted less. This fits with the finding that both men and women seem to interrupt women more than they interrupt men.

At the same time, many of these transgender men also report that people seem to care less about them now that they are men. As one transgender man wrote about the change in his experience:

What continues to strike me is the significant reduction in friendliness and kindness now extended to me in public spaces. It now feels as though I am on my own: No one, outside of family and close friends, is paying any attention to my well-being.

Such anecdotal reports seem in line with the finding that both men and women show more aggression toward men than they do toward women, as well as with recent research led by social psychologist Tania Reynolds, which among other things found that:

… female harm or disadvantage evoked more sympathy and outrage and was perceived as more unfair than equivalent male harm or disadvantage. Participants more strongly blamed men for their own disadvantages, were more supportive of policies that favored women, and donated more to a female-only (vs male-only) homeless shelter. Female participants showed a stronger in-group bias, perceiving women’s harm as more problematic and more strongly endorsed policies that favored women.

It thus seems that men and women are generally discriminated against in different ways. And it is worth noting that these different forms of discrimination are probably in part the natural products of our evolutionary history rather than some deliberate, premeditated conspiracy (which is obviously not to say that they are ethically justified).

Yet deliberation and premeditation are exactly what is required if we are to step beyond such discrimination. More generally, what seems required is that we get a clearer view of the ways in which women and men face discrimination, and that we then take active steps toward remedying these problems — something that is only possible if we allow ourselves a perspective nuanced enough to admit that both women and men are subject to serious discrimination.


Intersectionality

It seems that many progressives are inspired by the theoretical framework called intersectionality, according to which we should seek to understand many aspects of the modern human condition in terms of interlocking forms of power, oppression, and privilege. One problem with relying on this framework is that it can easily become a case of seeing only nails when all one has is a hammer. If one insists on understanding the world predominantly in terms of oppression and social privilege, one risks overemphasizing their relevance in many cases — and, by extension, underemphasizing the importance of other factors.

As with most popular ideas, there is no doubt a significant grain of truth in some of what intersectional theory talks about, such as the fact that discrimination is a very real phenomenon, that privilege is too, and that both of these phenomena can compound. Yet the narrow focus on social explanations and versions of these phenomena means that intersectional theory misses a lot about the nature of discrimination and privilege. For example, some people are privileged to be born with genes that predispose them to be significantly happier than average, while others have genes that dispose them to chronic depression. Two such people may be of the same race, gender, and sexuality, and they may be equally able-bodied. Yet they will most likely have very different opportunities and qualities of life. A similar thing can be said about genetic differences that predispose individuals to have a higher or lower IQ, and about genetic differences that make people more or less physically attractive.

Intersectional theory seems to have very little to say about such cases, even as these genetic factors seem able to impact opportunities and quality of life to a similar degree as do discrimination and social exclusion. Indeed, it seems that intersectional theory actively ignores, or at the very least underplays, the relevance of such factors — what may be called biological privileges — perhaps because they go against the tacit assumption that inequity must be attributable to oppressive agents or social systems in some way, as opposed to just being the default outcome that one should expect to find in an apathetic universe.

It seems that intersectional theory significantly underestimates the importance of biology in general, which is, of course, by no means a mistake that is unique to intersectional theory. And it is quite understandable how such an underestimation can occur. For the truth is that many human traits, including those of personality and intelligence, are strongly influenced by both genetic and environmental factors. Indeed, around 40-60 percent of the variance in such traits tends to be explained by genetics, and, consequently, the amount of variance explained by the environment lies roughly in this range as well. This means that, with respect to these traits, it is both true to say that cultural factors are extremely significant, and that biological factors are extremely significant. The mistake that many seem to make, including many proponents of intersectionality, is to think that one of these truths rules out the other.

More generally, a case can be made that intersectional theory greatly overemphasizes group membership and identities in its analyses of societal problems. As Brian Tomasik notes:

… I suspect it’s tempting for our tribalistic primate brains to overemphasize identity membership and us-vs.-them thinking when examining social ills, rather than just focusing on helping people in general with whatever problems they have. For example, I suspect that one of the best ways to help racial minorities in the USA is to reduce poverty (such as through, say, universal health insurance), rather than exploring ever more intricate nuances of social-justice theory.

A final critique I would direct at mainstream intersectional theory is that, despite its strong focus on unjustified discrimination, it nonetheless generally fails to acknowledge and examine what is, I have argued, the greatest, most pervasive, and most harmful form of discrimination that exists today, namely speciesism — the unjustified discrimination against individuals based on their species membership. This renders mainstream versions of intersectionality a glaring failure as a theory of discrimination against vulnerable individuals.

Political Correctness

Another controversial issue closely related to intersectionality is that of political correctness. What do we mean by political correctness? The answer is actually not straightforward, since the term has a rather complex history throughout which it has had many different meanings. One sense of the term refers simply to conduct and speech that embodies fairness and common decency toward others, especially in a way that avoids offending particular groups of people. In this sense of the term, political correctness is, among other things, about not referring to people with ethnic or homophobic slurs. A more recent sense of the term, in contrast, refers to instances where such a commitment to not offend people has been taken too far (in the eyes of those who use the term), which is arguably the sense in which it is most commonly used today.

This then leads us to what seems the main point of contention when it comes to political correctness, namely: what is too far? What does the optimal level of decency entail? The only sensible answer, I believe, will have to be a nuanced one found between the two extremes of “nothing is too offensive” and “everything is too offensive”.

Some seem to approach this subject with the rather unnuanced attitude that feelings of being offended do not matter in any way whatsoever. Yet this view seems difficult to maintain, at least for anyone who has themselves been called a pejorative name in earnest. For most people, such name-calling is likely to hurt — indeed, it may hurt quite a lot. And significant amounts of hurt and unpleasantness do, I submit, matter. A universe with fewer and less intense feelings of offense is, other things being equal, better than a universe with more numerous and more intense feelings of offense.

Yet the words “other things being equal” should not be missed here. For the truth is that there can be, indeed there clearly is, a tension between 1) the risk of offending people, and 2) talking freely and honestly about the realities of life. And it is not clear what the optimal balance is.

What is quite clear, I would argue, is that if we cannot talk in an unrestricted way about what matters most in life, then we have gone too far. In particular, if we cannot draw distinctions between different kinds of discrimination and forms of suffering, and if we are not allowed to weigh these ills against each other to assess which are most urgent, then we have gone too far. If we deny ourselves a clear sense of proportion with respect to the problems of the world, we end up undermining our ability to sensibly prioritize our limited resources in a world that urgently demands reasonable prioritization. This is too high a price to pay to avoid the risk of offending people.

Politics and Making the World a Better Place

The subjects of politics and “how to make the world a better place” more generally are both subjects on which people tend to have strong convictions, limited nuance, and powerful incentives to signal group loyalties. Indeed, they are about as good examples as any of subjects where it is important to be charitable and to actively seek out nuance, as well as to acknowledge our own biased nature.

A significant step we can take toward thinking more clearly about these matters is to adopt the aforementioned virtue of thinking in terms of continuous credences. Having a “merely” high credence in any given political ideology, principle, or policy is likely more conducive to honest and constructive conversations than is a position of perfect conviction.

If nothing else, the fact that the world is so complex implies that there is considerable uncertainty about what the consequences of our actions will be. In many cases, we simply cannot know with great certainty which policy or candidate is ultimately going to be best (relative to any set of plausible values). This suggests that our strong convictions about how a given political candidate or policy is all bad, and about how immeasurably greater the alternatives would be, are likely often overstated. More broadly, it implies that our estimates regarding which actions are best to take, in the realm of politics in particular and with respect to improving the world in general, should probably be more measured and humble than they tend to be.

A related pitfall worth avoiding is that of believing a single political candidate or policy to have purely good or purely bad effects; such an outcome seems extraordinarily unlikely. Similarly, it is worth steering clear of the tendency to look to a single intellectual for the answers to all important questions. The truth is that we all have blind spots and false beliefs. Indeed, no single person can read and reflect widely and deeply enough to be an expert on everything of importance. Expertise requires specialization, which means that we must look to different experts if we are to find expert views on a wide range of topics. In other words, the quest for a more complete and nuanced outlook requires us to engage with many different thinkers spanning a wide range of disciplines.

Can We Have Too Much Nuance?

In a piece that argues for the virtues of being nuanced, it seems worth asking whether I am being too one-sided. Might I not be overstating the case in its favor, and should I not be a bit more nuanced about the utility of nuance itself? Indeed, might we not be able to have too much nuance in some cases?

I would be the first to admit that we probably can have too much nuance in many cases. I will grant that in situations that call for quick action, and where there is not much time to build a nuanced perspective, it may often be better to act on one’s limited understanding rather than on a more nuanced yet harder-won picture. However, at the level of our public conversations, this is not typically the case. In that context, we usually do have time to build a more nuanced picture, and we are rarely required to act promptly. Indeed, we are rarely required to act at all, and perhaps it is generally better to abstain from expressing our views on a given hot topic if we have not made much of an effort to understand it.

One could perhaps attempt to make a case against nuance with reference to examples where near-equal weight is granted to all considerations and perspectives — reasonable and less reasonable ones alike. This, one may argue, is a bad thing, and surely demonstrates that there is such a thing as too much nuance. Yet while I would agree that weighing arguments blindly and undiscerningly is unreasonable, I would not consider this an example of too much nuance as such. For being nuanced does not mean giving equal weight to all arguments regardless of their plausibility. Instead, what it requires is that we at least consider a wide range of arguments, and that we acknowledge whatever grains of truth these arguments might have, but without overstating their degree of truth or plausibility.

Another objection one may be tempted to raise against being nuanced and charitable is that it implies that we should be submissive and over-accommodating. This does not follow, however. To say that we have reason to be nuanced and charitable is not to say that we cannot be firm in our convictions when such firmness is justified, much less that we should ever tolerate disrespect or unfair treatment. We have no obligation to indulge bullies and intimidators, and if someone repeatedly fails to act in a respectful, good-faith manner, we have every right to remove ourselves from them. After all, the maxim “assume the other person is acting in good faith” in no way prevents us from updating this assumption as soon as we encounter evidence that contradicts it. And to assert one’s boundaries and self-respect in light of such updating is perfectly consistent with a commitment to being charitable.

A more plausible critique against being nuanced is that it might sometimes be strategically unwise, and that advocating one’s ideas in a decidedly unnuanced and polemical manner might be better for achieving certain aims. I think this may well be true. Yet I think there are also good reasons to think that this will rarely be the optimal strategy when engaging in public conversations, especially in the long run. First of all, we should acknowledge that, even if we were to grant that an unnuanced style of communication is superior in some situations, it still seems advantageous to possess a nuanced understanding of the arguments against one’s own views. If nothing else, such an understanding would seem to make one better able to rebut these arguments, regardless of whether one then does so in a nuanced way or not.

In addition to this reason to acquire a nuanced understanding, there are also good reasons to express such an understanding, as well as to treat counter-arguments in a fair and measured way. One reason is the possibility that we might ourselves be wrong, which means that, if we want an honest conversation through which we can make our beliefs converge toward what is most reasonable, then we ourselves also have an interest in seeing the best and most unbiased arguments for and against different views. And hence we ourselves have an interest in moderating our own bravado and confirmation bias that actively keep us from evaluating our pre-existing beliefs as impartially as we ideally should.

Beyond that, there are reasons to believe that people will be more receptive to one’s arguments if one communicates them in a way that demonstrates a sophisticated understanding of relevant counter-arguments, and which lays out opposing views as strongly as possible. This will likely lead people to conclude that one’s perspective is at least built in the context of a sophisticated understanding, which might be read as an honest signal that this perspective may be worth listening to.

Finally, one may object that some subjects just do not call for any nuance whatsoever. For example, should we be nuanced about the Holocaust? This is a reasonable point. Yet even here, I would argue that nuance is still important, in various ways. For one, if we do not have a sufficiently nuanced understanding of the Holocaust, we risk failing to learn from it. For example, to simply believe that the Germans were evil would appear to be the dangerous thing, as opposed to realizing that what happened was the result of primitive tendencies that we all share, as well as the result of a set of ideas which had a strong appeal to the German people for various reasons — reasons that are worth understanding.

This is all descriptive, however, and so none of it implies taking a particularly nuanced stance on the ethical status of the Holocaust. Yet even in this respect, a fearless search for nuance and perspective can still be of great importance. In terms of the moral status of historical events, for instance, we should have enough perspective to realize that the Holocaust, although it was the greatest mass killing of humans in history, was by no means the only one; and hence that its ethical status is arguably not qualitatively unique compared to other similar events of the past. Beyond that, we should admit that the Holocaust is not, sadly, the greatest atrocity imaginable, neither in terms of the number of victims it had, nor in terms of the horrors imposed on its victims. Greater atrocities than the Holocaust are imaginable. And we ought to both seriously contemplate whether such atrocities might indeed be actual, as well as to realize that there is a risk that atrocities that are much greater still may emerge in the future.


Almost everywhere one finds people discussing contentious issues, nuance and self-scrutiny seem to be in short supply. And yet the most essential point of this essay is not about looking outward and pointing fingers at others. Rather, the point is that if we wish to form more accurate and nuanced perspectives, we need to look in the mirror and ask ourselves some uncomfortable questions.

“How might I be obstructing my own quest for truth?”

“How might my own impulse to signal group loyalty bias my views?”

“What beliefs of mine are mostly serving social rather than epistemic functions?”

We need to remind ourselves of the value of seeking out the grains of truth that may exist in different perspectives so that we can gain a more nuanced understanding that better reflects the true complexity of the world. We need to remind ourselves that our brains evolved to express overconfident and unnuanced views for social reasons — especially in ways that favor our in-group and oppose our out-group. And we need to do a great deal of work to control for this.

None of us will ever be perfect in these regards, of course. Yet we can at least all strive to do better.

The Endeavor of Reason

“[…] some hope a divine leader with prophetic voice
Will rise amid the gazing silent ranks.
An idle thought! There’s none to lead but reason,
To point the morning and the evening ways.”

— Abu al-ʿAlaʾ al-Maʿarri

What is reason?

One could perhaps say that answering this question itself falls within the purview of reason. But I would simply define reason as the capacity of our minds to decide or assess what makes the most sense, or seems most reasonable, all things considered.

This seems well in line with other definitions of reason. For instance, Google defines reason as “the power of the mind to think, understand, and form judgements logically”, and Merriam-Webster gives the following definitions:

(1) the power of comprehending, inferring, or thinking[,] especially in orderly rational ways […] (2) proper exercise of the mind […]

These definitions all seem to raise the further question of what terms like “logically”, “orderly rational ways”, and “proper” then mean in this context.

Indeed, one may accuse all these definitions of being circular, as they merely seem to deflect the burden of defining reason by referring to some other notion that ultimately just appears synonymous with, and hence does not reductively define, reason. This would also seem to apply to the definition I gave above: “the ability to decide or assess what seems most reasonable all things considered”. For what does it mean for something to “seem most reasonable”?

Yet the open-endedness of this definition does not, I submit, render it useless or empty by any means, any more than defining science in open-ended terms such as “the attempt to discover what is true about the world” renders this definition useless or empty.

Reason: The Core Value of Universities and the Enlightenment

At the level of ideals, working out what seems most reasonable all things considered is arguably the core goal of both the Enlightenment and of universities. For instance, ideally, universities are not committed to a particular ethical view (say, utilitarianism or deontology), nor to a particular view of what is true about the world (say, string theory or loop quantum gravity, or indeed physicalism in general).

Rather, universities seem to have a more fundamental and less preconceived commitment, at least in the ideal, which is to find out which particular views, if any, seem the most plausible in the first place. This means that all views can be questioned, and that one has to provide reasons if one wants one’s view to be considered plausible.

And it is important to note in this context that “plausible” is a broader term than “probable”, in that the latter pertains only to matters of truth, whereas the former covers this and more. That is, plausibility can also be assigned to views, for instance ethical views, that we do not view as strictly true, yet which we find plausible nonetheless (as in: they seem agreeable or reasonable to us).

For this very reason, it would also be problematic to view the fundamental role of universities as (only) being the uncovering of what is true, as such a commitment may assume too much in many important and disputed academic discussions, such as those about ethics and epistemology, where the question of whether there indeed are truths in the first place, and in what sense, is among the central questions that are to be examined by reason. Yet in this case too, the core commitment remains: a commitment to being reasonable. To try to assess and follow what seems most reasonable all things considered.

This is arguably also the core value of the Enlightenment. At least that seems to be what Immanuel Kant argued for in his essay “What Is Enlightenment?”, in which he further argued that free inquiry — i.e. the freedom to publicly exercise our capacity for reason — is the only prerequisite for enlightenment:

This enlightenment requires nothing but freedom—and the most innocent of all that may be called “freedom”: freedom to make public use of one’s reason in all matters.

And the view that reason should be our core commitment and guide of course dates much further back historically than the Enlightenment. Among the earliest and most prominent advocates of this view was Aristotle, who viewed a life lived in accordance with reason as the highest good.

Yet who is to say that what we find most plausible or reasonable is something we will necessarily be able to converge upon? This question itself can be considered an open one for reasoned inquiry to examine and settle. Kant, for instance, believed that we would all be able to agree if we reasoned correctly, and hence that reason is universal and accessible to all of us.

And interestingly, if one wants to make a universally compelling case against this view of Kant’s, it seems that one has to assume at least some degree of the universality that Kant claimed to exist. And hence it seems difficult, not to say impossible, to make such a case, and to deny that at least some aspects of reason are universal.

Being Reasonable: The Only Reasonable Starting Point?

One can even argue that it is impossible to make a case against reason in general. For as Steven Pinker notes:

As soon as we are having this conversation, as long as we are trying to persuade one another of why you should do something or should believe something, you are already committed to reason. We are not engaged in a fist fight, we are not bribing each other to believe something. We are trying to provide reasons. We are trying to persuade, to convince. As long as you are doing that in the first place — you are not hitting someone with a chair, or putting a gun to their head, or bribing them to believe something — you have lost any argument you have against reason; you have already signed on to reason, whether you like it or not. So the fact that we are having this conversation shows that we are committed to reason. That is the starting point.

Indeed, it seems that any effort to make a case against reason would have to rest on the very thing it attempts to question, namely our capacity to assess and justify the merits of our claims. Thus, almost by definition, it seems impossible to identify a reasonable alternative to the endeavor of reason.

Some might argue that reason requires us to have faith in reason in the first place, and hence that reason is ultimately no more reasonable or defensible than is faith in anything whatsoever. Yet this is not the case.

First, we should be clear that faith and reason, as commonly conceived, are polar opposites. Reason seeks justification for our beliefs, whereas faith — holding beliefs without justification — represents the very negation of this search. Second, unlike faith, reason is not in fact committed to accepting any particular claim from the outset: everything can in principle be doubted and scrutinized, including the very soundness of reason itself, even if such doubt ultimately proves incoherent and untenable.

What, then, can ultimately justify any claim?

It Seems Reasonable: The Bedrock Foundation of Reasonable Beliefs

The idea that reason demands justification for any given belief may seem problematic, as it gives rise to the so-called Münchhausen trilemma: what can ultimately justify our beliefs — a circular chain of justifications, an infinite chain, or a finite chain (or web) with brute facts at bottom? Supposedly, none of these options are appealing. Yet I disagree.

For I see nothing problematic about having a brute observation, or reason, at the bottom of our chain of justification, which I would indeed argue is exactly what constitutes, and all that ever could constitute, the rock-bottom justification for any reasonable belief. Specifically, that it just seems reasonable.

Many discussions go wrong here by conflating 1) ungrounded assumptions and 2) brute observations, which are by no means the same. For there is clearly a difference between believing that a car just drove by you based on the brute observation (i.e. a conscious sensation) that a car just drove by you, and merely assuming, without grounding in any reason or observation, that a car just drove by you.

Or consider another example: the fundamental constants in our physical equations. We ultimately have no deeper justification for the values of these constants than brute observation. Yet this clearly does not render our knowledge of these values merely assumed, much less arbitrarily or unjustifiably chosen. This is not to say that our observations of these values are infallible; future measurements may well yield slightly different, more precise values. Yet they are not arbitrary or unjustified.

The idea that brute observation cannot constitute a reasonable justification for a belief is, along with the idea that brute assumptions and brute observations are the same, a deeply misguided one, in my view. And this is not only true, I contend, of factual matters, but of all matters of reason, including ethics and epistemology, whether we deem these fields strictly factual or not. For instance, my own ethical view, according to which suffering is disvaluable and ought to be reduced, does not, on my account, rest on a mere assumption. Rather, it rests on a brute observation of the undeniable intrinsic disvalue of the conscious states we call suffering. I have no deeper justification than this, nor is a deeper one required or even possible.

As I have argued elsewhere, such a foundationalist account is, I submit, the solution to the Münchhausen trilemma.

Deniers of Reason

If reason is the only reasonable starting point, why, then, do so many seem to challenge and reject it? There are a few things to say in response to this. First, those who criticize and argue against reason are not really, as I have argued above, criticizing reason, at least not in the general sense I have defined it here (since to criticize reason is to engage in it). Rather, they are, at most, criticizing a particular conception of reason, and that can, of course, be perfectly reasonable (I myself would criticize prevalent conceptions of reason as being much too narrow).

Second, there are those who do not criticize reason, but who simply reject it, at least in some respects. These are people who refuse to join the conversation Steven Pinker referred to above; people who refuse to provide reasons, and who instead resort to forceful methods, such as silencing or extorting others, violently or otherwise. Examples include people who believe in some political ideology or religion, and who choose to suppress, or indeed kill, those who express views that challenge their own. Yet such actions do not pose a reasonable or compelling challenge to reason, nor can they be considered a reasonable alternative to the endeavor of reason.

As for why people choose to engage in such actions and refuse to engage in reason, one can also say a few things. First of all, the ability to engage in reason seems to require a great deal of learning and discipline, and not all of us are fortunate enough to have received the requisite schooling and training. And even when we do have these things, engaging in reason is still an active choice that we can fail to make.

That is, doing what we find most reasonable is not an automatic, reflexive process, but rather a deliberate volitional one. It is clearly possible, for example, to act against one’s own better judgment. To go with seductive impulse and temptation — e.g. for sex, a cigarette, or social status — rather than what seems most reasonable, even to ourselves in the moment of weakness.

Reason Broadly and Developmentally Construed

The conception of reason I have outlined here is, it should be noted, not a narrow one. It is not committed to any particular ontological position, nor is it purely cerebral, as in restricted to merely weighing verbal or mathematical arguments. Instead, it is open to questioning everything, and takes input from all sources.

Nor would I be tempted to argue that we humans have some single, immutable faculty of reason that is infallible. Quite the contrary. Our assessments of what seems most reasonable in various domains rest on a wide variety of faculties and experiences, virtually none of which are purely innate. Indeed, these faculties, as well as our range of experience, can be continually expanded and developed as we learn more, both individually and collectively.

In this way, reason, as I conceive of it, is not only extremely broad but also extremely open-ended. It is not static, but rather self-regulating and self-updating, as when we realize that our thinking is tendentious and biased in many ways, and that our motives might not be what we (would like to) think they are. In this way, our capacity for reasoning has taught itself that it should be self-skeptical.

Yet this by no means gives way to pure skepticism. After all, our discovery of these tendencies is itself a testament to the power of our capacity to reason. Rather than completely undermine our trust in this capacity, discoveries of this kind simultaneously show both the enormous weakness and strength of our minds: how wrong we can be when we are not careful to try to be reasonable, and how much better informed we can become if we are. Such facts do not comprise a case against employing our capacity to reason, but rather a case for even more, and even more careful, employments of this capacity of ours.

Conclusion: A Call for Reason

As noted above, the endeavor of reason is not one that we pursue automatically. It takes a deliberate choice. In order to be able to assess and decide what seems most reasonable all things considered, one must first make an active effort to learn as much as one can about the nature of the world, and then consider the implications carefully.

What I have argued here is that there is no reasonable alternative to doing this; not that there is no possible alternative. For one can surely suspend reason and embrace blind faith, as many religious people do, or embrace incoherent and self-refuting claims about reality, as many postmodernists do. Or one can go with whatever seems most pleasurable in the moment rather than what seems most reasonable all things considered, as we all do all too often. Yet one cannot reasonably choose such a suspension of reason.

In sum, I would join Aristotle in viewing reason, broadly construed, as our highest calling. Following what seems most reasonable all things considered seems the best choice before us.

The (Non-)Problem of Induction

David Hume claimed that it is:

[…] impossible for us to satisfy ourselves by our reason, why we should extend that experience beyond those particular instances, which have fallen under our observation. We suppose, but are never able to prove, that there must be a resemblance betwixt those objects, of which we have had experience, and those which lie beyond the reach of our discovery.

And this then gives rise to the problem of induction: how can we defend assuming the so-called uniformity of nature that we take to exist when we generalize our limited experience to that which lies “beyond the reach of our discovery”? For instance, how can we justify our belief that the world of tomorrow will, at least in many ways, resemble the world of yesterday? Indeed, how can we justify believing that there will be a tomorrow at all?

A thing worth highlighting in response to this problem is that, even if we were to assume that we have no justification for believing in such uniformity of nature, this would not imply, as may perhaps seem natural to suppose, that we thereby have justification for believing the opposite: that there is no uniformity of nature. After all, to say that the patterns we have observed so far do not predict anything about states and events elsewhere would also amount to a claim about that which lies “beyond the reach of our discovery”, and so this claim seems to face the same problem.

The claims 1) “there is a certain uniformity of nature” and 2) “there is no uniformity of nature” are both hypotheses about the world. And if we look at the limited part of the world about which we do have some knowledge, it is clear that the former hypothesis is true about it: patterns at one point in (known parts of) time and space do indeed predict a lot about patterns observed elsewhere.

Does this then mean that the same will hold true of the part of the world that lies beyond the reach of our discovery? One can reasonably argue that we do not have complete certainty that it will (indeed, one can reasonably argue that we should not have complete certainty about any claim our fallible mind happens to entertain; not even when it comes to claims about idealized formal systems, as there is always the possibility that we have failed to instantiate these formal systems properly). Yet if we reason as scientists — probabilistically, endeavoring to build the picture of the world that seems most plausible in light of all the available evidence — then it does indeed seem justifiable to say that hypothesis 1 seems much more likely to be true of that which lies “beyond the reach of our discovery” than does hypothesis 2; not least because to say that hypothesis 2 holds true would amount to assuming an extraordinary uniqueness of the observed compared to the unobserved, whereas believing hypothesis 1 merely amounts to not assuming such an extraordinary uniqueness.

And if we think in this way — in terms of competing hypotheses — then Hume’s problem of induction suddenly seems rather vacuous. “You cannot prove that any given hypothesis of this kind is correct.” This seems true (although the fact that we have not found such a proof yet does not imply that one cannot be found), but also quite irrelevant, since a deductive proof is not required in order for us to draw reasonable inferences. To say that we have no purely deductive argument for a given conclusion is not the same as saying that we have no justification for believing it (and if one thinks that it is, then one is also committed to the belief that we have no justification for believing, based on previous experience, that the problem of induction also exists in this very moment; more on this below).

Applying Hume’s Claim to Itself

According to Hume’s quote above, the belief that we can make generalizations based on particular instances can never be “satisfied by our reason”. The problem, however, is that, according to our modern understanding of the world in physical terms, all we ever can generalize from, including when we make deductive inferences, is particular instances — particular spatiotemporally located states and processes found in our brains (equivalently, one could also say that all we can ever generalize from, as knowing subjects, are particular states of our own minds).

Thus, Hume’s statement that we can never prove such generalizations must also apply to itself, as it is itself a general claim based on a particular instance of reasoning taking place in Hume’s head in a particular place and time (indeed, Hume’s claim would appear to pertain to all generalizations).

So what justification could Hume possibly provide for this general claim of his? According to the claim itself, no proof can be given for it. Indeed, if Hume could provide a proof for his claim that it is impossible to find a proof for the validity of generalizations based on particular instances, then he would have falsified his own claim, as such a proof is the very thing that the claim holds not to exist. And such an alleged proof would thereby also undermine itself, as what it supposedly shows is its own non-existence.

This demonstrates that Hume’s claim is unprovable. That is, based on this particular instance of reasoning, we can draw the general conclusion that we will never be able to provide a proof for Hume’s claim. And thereby we have in fact proven Hume’s claim wrong, as we have thus provided a proof for a general claim that also pertains to that which lies beyond the reach of our discovery. Nowhere, neither in the realm of the discovered nor the undiscovered, can a proof for Hume’s claim be found.

So we clearly can prove some general claims about that which lies beyond the reach of our experience based on particular instances (of processes in our brains, say), and hence the claim that we cannot is simply wrong.

Yet one may object that this conclusion does not contradict what Hume in fact meant when he claimed that we cannot prove the validity of generalizations based on particular instances, since what he meant was rather that we cannot prove the validity of inductive generalizations such as “we have observed X so far, hence X will also be the case in the next instance/in general” — i.e. generalizations whose generality seems impossible to prove.

The problem, however, is that we can also turn this claim on itself, and indeed turn the problem of induction altogether on itself, as we did in a parenthetical statement above: the mere fact that we have not been able to prove the validity of any inductive claims of this sort so far does not imply that such a proof can never be found. In particular, the claim that we cannot prove the validity of any such inductive claim that seems impossible to prove is itself an inductive claim whose generality seems impossible to prove (i.e. it seems to rest on the argument: “we have not been able to prove the validity of any inductive claim of this nature so far, and hence we cannot[/we will never be able to] prove the validity of such a claim”).

And if we accept that this claim, the very claim that gives rise to the problem of induction, is itself a plausible claim that we have good reason to accept in general (or at least just good reason to believe that it will apply in the next moment), then we indeed do believe that we can have good reason to draw (at least some plausible) non-deductive generalizations based on particular instances, which is the very thing Hume’s argument is often believed to cast doubt upon. In other words, in order to even believe that there is a problem of induction in the first place, one must already assume that which this problem is supposed to question and be a problem for.

Indeed, one can make an argument along these lines that it is in fact impossible to give a coherent argument against (the overwhelming plausibility of at least some degree of) the uniformity of nature. For in order to even state an argument or doubt against it, one is bound to rely thoroughly on the very thing one is trying to question. For instance, that words will still mean the same in the next moment as they did in the previous one; that the argument one thought of in the previous moment still applies in the next one; that the problem one was trying to address in the previous moment still exists in the next; etc.

Thus, it actually seems impossible to reasonably, indeed even coherently, doubt that the world has at least some degree of uniformity, which itself seems to constitute a good argument and reason for believing in such uniformity. After all, that something cannot reasonably be doubted, or indeed doubted at all, usually seems a more than satisfying standard for believing it.

So to reiterate: If one thinks we have good reason to take the problem of induction seriously, or indeed just to believe that this problem still exists in this moment (since it has in previous ones), then one also thinks that we do have good reason to make (at least some plausible) non-deductive generalizations about that which lies “beyond the reach of our discovery” based on particular instances. In other words, if one takes the problem of induction seriously, then one does not take the problem of induction seriously at all.

How to then draw the most plausible inferences about that which “lies beyond the reach of our discovery” is, of course, far from trivial. Yet we should be clear that this is a separate matter entirely from whether we can draw such plausible inferences at all. And as I have attempted to argue here, we have absolutely no reason to think that we cannot, and good reason to think that we can.

“The Physical” and Consciousness: One World Conforming to Different Descriptions

My aim in this essay is to briefly explain a crucial aspect of David Pearce’s physicalist idealist worldview. In particular, I seek to explain how a view can be both “idealist” and “physicalist”, yet still be a “property monist” view.

Pearce himself describes his view in the following way:

“Physicalistic idealism” is the non-materialist physicalist claim that reality is fundamentally experiential and that the natural world is exhaustively described by the equations of physics and their solutions […]

So Pearce’s view is a monist, idealist view: reality is fundamentally experiential. And this reality also conforms to description in physical terms. Pearce is careful, however, to distinguish this view from panpsychism, which Pearce, in contrast to his own idealist view, considers a property dualist view:

“Panpsychism” is the doctrine that the world’s fundamental physical stuff also has primitive experiential properties. Unlike the physicalistic idealism explored here, panpsychism doesn’t claim that the world’s fundamental physical stuff is experiential. Panpsychism is best treated as a form of property-dualism.

How, one may wonder, is Pearce’s view different from panpsychism, and from property dualist views more generally? This is something I myself have struggled a lot to understand, and have asked him about repeatedly. And my understanding is the following: according to Pearce, there is only consciousness, and its dynamics conform to physical description. Property dualist views, in contrast, view the world as having two properties: the stuff of the world has insentient physical properties to which separate, experiential properties are somehow attached.

Pearce’s view makes no such division. Instead, on Pearce’s view, description in physical terms merely constitutes a particular (phenomenal) mode of description that (phenomenal) reality conforms to. So to the extent there is a dualism here, it is epistemological, not ontological.

The Many Properties of Your Right Ear

For an analogy that might help explain this point better, consider your right ear. What properties does it have? Setting aside the question concerning its intrinsic nature, it is clear that you can model it in various ways. One way is to touch it with your fingers, whereby you model it via your faculties of tactile sensation (or in neuroanatomical terms: with neurons in your parietal lobe). You may also represent your ear via auditory sensations, for example by hitting it and noticing what kind of sound it makes (a sensation mediated by the temporal lobe). Another way, perhaps the clearest and most practical way for beings like us, is to model it in terms of visual experience: to look at your right ear in the mirror, or perhaps simply imagine it, and thereby have a visual sensation that represents it (mediated by the occipital lobe).

[For most of us, these different forms of modeling are almost impossible to keep separate, as our touching our ears automatically induces a visual model of them as well, and vice versa: a visual model of an ear will often be accompanied by a sense of what it would be like to touch it. Yet one can in fact come a surprisingly long way toward being able to “unbind” these sensations with a bit of practice. This meditation and this one both provide a good exercise in detaching one’s tactile sense of one’s hands from one’s visual model of them. This one goes even further, as it climaxes with a near-total dissolution of our automatic binding of different modes of experience into an ordered whole.]

Now, we may ask: which of these modes of modeling constitute the modeling we call “physical”? And the answer is arguably all of them, as they all relate to the manifestly external (“physical”) world. This is unlike, say, things that are manifestly internal, such as emotions and thoughts, which we do not tend to consider “physical” in this same way, although all our sensations are, of course, equally internal to our mind-brain.

“The physical” is in many ways a poorly defined folk term, and physics itself is not exempt from this ambiguity. For instance, what phenomenal mode does the field of physics draw upon? Well, it is certainly more than just the phenomenology of equations (to the extent this can be considered a separate mode of experience). It also, in close connection with how most of us think about equations, draws heavily on visuospatial modes of experience (I once carefully went through a physics textbook that covered virtually all of undergraduate level physics with the explicit purpose of checking whether it all conformed to such description, and I found that it did). And we can, of course, also describe your right ear in “physics” terms, such as by measuring and representing its temperature, its spatial coordinates, its topology, etc. This would give us even more models of your right ear.


The deeper point here is that the same thing can conform to description in different terms, and the existence of such a multitude of valid descriptions does not imply that the thing described itself has a multitude of intrinsic properties. In fact, none of the modes of modeling an ear mentioned above say anything about the intrinsic properties of the ear; they only relate to its reflection, in the broadest sense.

And this is where some people will object: why believe in any intrinsic properties? Indeed, why believe in anything but the physical, “reflective”, (purportedly) non-phenomenal properties described above?

To me, as well as to David Pearce (and Galen Strawson and many others), this latter view is self-undermining and senseless, like a person reading from a book who claims that the paper of the book they are reading from does not exist, only the text does. All these modes of modeling mentioned above, including all that we deem knowledge of “the physical”, are phenomenal. The science we call “physics” is itself, to the extent it is known by anyone, found in consciousness. It is a particular mode of phenomenal modeling of the world, and thus to deny the existence of the phenomenal is also to deny the existence of our knowledge of “physics”.

Indeed, our knowledge of physics and “the physical” attests to this fact as clearly as it attests to anything: consciousness exists. It is a separate question, then, exactly how the varieties of conscious experience relate to descriptions of the world in physical terms, as well as what the intrinsic nature of the stuff of the world is, to the extent it has any. Yet by all appearances, it seems that minds such as our own conform to physical description in terms of what we recognize as brains, and, as with the example of your right ear, such a physical description can take many forms: a visual representation of a mind-brain, what it is like to touch a mind-brain, the number of neurons it has, its temperature, etc.

These are different, yet valid ways of describing aspects of our mind-brains. Yet like the descriptions of different aspects of an ear mentioned above, these “physical” descriptions, while all perfectly valid, still do not tell us anything about the intrinsic nature of the mind-brain. And according to David Pearce, the intrinsic nature of that which we (validly) describe in physical terms as “your brain” is your conscious mind itself. The apparent multitude of aspects of that which we recognize as “brains” and “ears” are just different modes of conscious modeling of an intrinsically monist, i.e. experiential, reality.


The view of consciousness explored here may seem counter-intuitive, yet I have argued elsewhere that using waves as a metaphor can help render it less unintuitive, perhaps even positively intuitive.

Suffering, Infinity, and Universe Anti-Natalism

Questions that concern infinite amounts of value seem worth spending some time contemplating, even if those questions are of a highly speculative nature. For instance, if we assume a general expected value framework of a kind where we evaluate the expected value of a given outcome based on its probability multiplied by its value, then any more than an infinitesimal probability of an outcome that has infinite value would imply that this outcome has infinite expected value, and hence that its expected value would trump that of any outcome with a “mere” finite amount of value.
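To make the reasoning explicit, the framework in question can be stated compactly (this is a standard expected value formulation; the notation is merely illustrative):

```latex
\mathrm{EV}(A) \;=\; \sum_i p_i \, v_i ,
\qquad
p_k > 0 \;\wedge\; v_k = \infty
\;\Longrightarrow\;
\mathrm{EV}(A) = \infty .
```

Any non-infinitesimal credence in a single infinite-value outcome thus swamps every finite term in the sum, which is precisely the dominance worry raised here.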

Therefore, on this framework, even strongly convinced finitists are not exempt from taking seriously the possibility that infinities, of one ethically relevant kind or another, may be real. For however strong a conviction one may hold, maintaining only an infinitesimal probability that infinite value outcomes of some sort could be real seems difficult to defend.

Bounding the Influence of Expected Value Thinking

It is worth making clear, as a preliminary note, that we may reasonably put a bound on how much weight we give such an expected value framework in our ethical deliberations, so as to avoid crazy conclusions and actions; or simply to preserve our sanity, which may also be a priority for some.

In fact, it is easy to point to good reasons for why we should constrain the influence of such a framework on our decisions. For although it seems implausible to entirely reject such an expected value framework in one’s moral reasoning, it would seem equally implausible to consider such a framework complete and exhaustive in itself. One reason being that thinking in terms of expected value is just one way to theorize about the world among many others, and it seems difficult to justify granting it a particularly privileged status among these, especially given a tool-like conception of our thinking: if all our thinking about the world is best thought of as a tool that helps us navigate in the world rather than a set of Platonic ideals that perfectly track truths in a transcendent way, it seems difficult to elevate a single class of these tools, such as thinking in terms of expected value, to a higher status than all others. But also given that we cannot readily put numbers on most things in practice, both due to a lack of time in most real-world situations and because, even when we do have time, the numbers we assign are often bound to be entirely speculative, if at all meaningful in the first place.

Just as we need more than theoretical physics to navigate in the physical world, it seems likely that we will do well to not only rely on an expected value framework to navigate the moral landscape, and this holds true even if all we care about is to maximize or minimize the realization of a certain class of states. Using only a single style of thinking makes us inherently vulnerable to mistakes in our judgments, and hence resting everything on one style of thinking without limits seems risky and unwise.

It therefore seems reasonable to limit the influence of this framework, and indeed any single framework, and one proposed way of doing so is by giving it only a limited number of the seats of one’s notional moral parliament; say, 40 percent of them. In this way, we should be better able to avoid the vulnerabilities of relying on a single framework, while remaining open to being guided by its inputs.

What Can Be the Case?

To get an overview, let us begin by briefly surveying (at least some of) the landscape of the conceivable possibilities concerning the size of the universe. Or, more precisely, the conceivable possibilities concerning the axiological size of the universe. For it is indeed possible, at least abstractly, for the universe to be physically finite, yet axiologically infinite; for instance, if some states of suffering are infinitely disvaluable, then a universe containing one or more of such states would be axiologically infinite, even if physically finite.

In fact, a finite universe containing such states could be worse, indeed infinitely worse, than even a physically infinite universe containing an infinite amount of suffering, if the states of suffering realized in the finite universe are more disvaluable than the infinitely many states of suffering found in the physically infinite universe. (I myself find the underlying axiological claim here more than plausible: that a single instance of certain states of suffering — torture, say — is more disvaluable than infinitely many instances of milder states of suffering, such as pinpricks.)

It is also conceivable that the universe is physically infinite, yet axiologically finite; if, for instance, our axiology is non-additive, if the universe contains only infinitesimal value throughout, or if only a freak bubble of it contains entities of value. This last option may seem impossibly unlikely, yet it is conceivable. Infinity does not imply infinite repetition; the infinite sequence (1, 0, 0, 0, …) does not logically have to contain 1 again, and indeed doesn’t.

In terms of physical size, there are various ways in which infinity can be realized. For instance, the universe may be both temporally and spatially infinite in terms of its extension. Or it may be temporally bounded while spatially infinite in extension, or vice versa: be spatially finite, yet eternal. It should be noted, though, that these two may be considered equivalent, if we view only points in space and time as having value-bearing potential (arguably the only view consistent with physicalism, ultimately), and view space and time as a four-dimensional structure. Then one of these two universes will have infinite “length” and finite “breadth”, while the opposite is true of the other one, and a similar shape can thus be obtained via “90 degree” rotation.

Similarly, it is also conceivable (and perhaps plausible) that the universe has a finite past and an infinite future, in which case it will always have a finite age, or it could have an infinite past and a finite future. Or, equivalently in spatial terms, be bounded in one spatial direction, yet have infinite extension in another.

Yet infinite extension is not the only conceivable way in which physical infinity may conceivably be realized. Indeed, a bounded space can, at least in one sense, contain more elements than an unbounded one, as exemplified by the cardinality of the real numbers in the interval (0, 1) compared to all the natural numbers. So not only might the universe be infinite in terms of extension, but also in terms of its divisibility — i.e. in terms of notional sub-worlds we may encounter as we “zoom down” at smaller scales — which could have far greater significance than infinite extension, at least if we believe we can use cardinality as a meaningful measure of size in concrete reality.

Taking this possibility into consideration as well, we get even more possible combinations — infinitely many, in fact. For example, we can conceive of a universe that is bounded both spatially and temporally, yet which is infinitely divisible. And it can then be infinitely divisible in infinitely many different ways. For instance, it may be divisible in such a way that it has the same cardinality as the natural numbers, i.e. its set of “sub-worlds” is countably infinite, or it could be divisible with the same cardinality as the real numbers, meaning that it consists of uncountably many “sub-worlds”. And given that there is no largest cardinality, we could continue like this ad infinitum.
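The cardinality facts appealed to in these two paragraphs are standard results due to Cantor:

```latex
|\mathbb{N}| = \aleph_0 \;<\; 2^{\aleph_0} = |\mathbb{R}| = |(0,1)| ,
\qquad
|X| < |\mathcal{P}(X)| \ \text{ for every set } X .
```

The first line is the sense in which the bounded interval \((0,1)\) contains “more” elements than the unbounded natural numbers; the second (Cantor’s theorem) is what guarantees that there is no largest cardinality, since one can always pass to the power set.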

One way we could try to imagine the notional place of such small worlds in our physical world is by conceiving of them as in some sense existing “below” the Planck scale, each with their own Planck scale below which even more worlds exist, ad infinitum. Many more interesting examples of different kinds of combinations of the possibilities reviewed so far could be mentioned.

Another conceivable, yet supremely speculative, possibility worth contemplating is that the size of the universe is not set in stone, and that it may be up to us/the universe itself to determine whether it will be infinite, and what “kind” of infinity.

Lastly, it is also conceivable that the size of the universe, both in physical and axiological terms, cannot faithfully be conceived of with any concept available to us. So although the conceivable possibilities are infinite, it remains conceivable that none of them are “right” in any meaningful sense.

What Is the Case? — Infinite Uncertainty?

Unfortunately, we do not know whether the universe is infinite or not; or, more generally, which of the possibilities mentioned above are true of our condition. And there are reasons to think that we will never know with great confidence. For even if we were to somehow encounter a boundary encapsulating our universe, or otherwise find strong reasons for believing in one, how could we possibly exclude the possibility that there is something beyond that boundary? (Not to mention that the universe might still be infinitely divisible even if bounded.) Or, alternatively, even if we thought we had good reasons to believe that our universe is infinite, how can we be sure that the limited data we base that conclusion on can be generalized to locations arbitrarily far away from us? (This is essentially the problem of induction.)

Yet even if we thought we did know whether the universe is infinite with great confidence, the situation would arguably not be much different. For if we accept the proposition that we should have more than infinitesimal credence in any empirical claim about the world, what is known as Cromwell’s rule (I have argued that this applies to all claims, not just [stereotypically] “empirical” claims), then, on our general expected value framework, it would seem that any claim about the reality of infinite value outcomes should always be taken seriously, regardless of our specific credences in specific physical and axiological models of the universe.

In fact, not only should the conceivable realizations of infinity reviewed above be taken seriously (at least to the extent that they imply outcomes with infinite (dis)value), but so should a seemingly even more outrageous notion, namely that infinite (dis)value may be at stake in any given action we take. However small a non-zero real-valued probability we assign such a claim — e.g. that the way you prepare your coffee tomorrow morning is going to impact an infinite amount of (dis)value — the expected value of getting the, indeed any, given action right remains infinite.

How should we act in light of this outrageous possibility?

Pascalian and Counter-Pascalian Claims

The problem, or perhaps our good fortune, is that, arguably in most cases, we do not seem to have reason to believe that one course of action is more likely to have an infinitely better outcome than another. For example, in the case of the morning coffee, we appear to have no more reason to believe that, say, making a strong cup of coffee will lead to infinitely more disvalue than making a mild one will, rather than it being the other way around. For such hypotheses, we seem able to construct an equal and oppositely directed counter-hypothesis.

Yet even if we concede that this is the case most of the time, what about situations where this is not the case? What about choices where we do have slightly better reasons to believe that one outcome will be infinitely better than another one?

This is difficult to address in the absence of any concrete hypotheses or scenarios, so I shall here consider the two specific cases, or classes of scenarios, where a plausible reason may be given in favor of thinking that one course of action will influence infinitely more value than another. One is the case of an eternal civilization: our actions may impact infinite (dis)value by impacting whether, and in what form, an eternal civilization will exist in our universe.

In relation to the (extremely unlikely) prospect of the existence of such a civilization, it seems that we could well find reasons to believe that we can impact an infinite amount of value. But the crucial question is: how? From the perspective of negative utilitarianism, it is far from clear what outcomes are most likely to be infinitely better than others. This is especially true in light of the other class of ways in which we may plausibly impact infinite value that I shall consider here, namely by impacting the creation of, or the unfolding of events in, parallel universes, which may eventually be infinitely numerous.

For not only could an eternal civilization that is the descendant of ours be better in “our universe” than another eternal civilization that may emerge in our place if we go extinct; it could also be better with respect to its effects on the creation of parallel universes, in which case it may be best for negative utilitarians to work to preserve our civilization, contrary to what is commonly considered the ultimate corollary of negative utilitarianism. Indeed, this could be the case even if no other civilization were to emerge instead of ours: if the impact our civilization will have on other universes results in less suffering than what would otherwise be created naturally. It is, of course, also quite possible that the opposite is the case: that the continuation of our civilization would be worse than another civilization or no civilization.

So in these cases where reasons pointing more in one way than another plausibly could be found, it is not clear which direction that would be. Except perhaps in the direction that we should do more research on this question: which actions are more likely to reduce infinitely more suffering than others? Indeed, from the point of view of a suffering-focused expected value framework, it would seem that this should be our highest priority.

Ignoring Small Credences?

In his paper on infinite ethics, Nick Bostrom argues that it is extraordinarily unlikely that we would end up with perfectly balanced credences when one choice might have infinitely better consequences than another:

This cancellation of probabilities would have to be perfectly accurate, down to the nineteenth decimal place and beyond. […]

It would seem almost miraculous if these motley factors, which could be subjectively correlated with infinite outcomes, always managed to conspire to cancel each other out without remainder. Yet if there is a remainder—if the balance of epistemic probability happens to tip ever so slightly in one direction—then the problem of fanaticism remains with undiminished force. Worse, its force might even be increased in this situation, for if what tilts the balance in favor of a seemingly fanatical course of action is the merest hunch rather than any solid conviction, then it is so much more counterintuitive to claim that we ought to pursue it in spite of any finite sacrifice doing so may entail. The “exact-cancellation” argument threatens to backfire catastrophically.

I do not happen to share Bostrom’s view, however. Apart from the aforementioned bounding of the influence of expected value thinking, there are also ways, within the expected value framework itself, to avoid the apparent craziness of letting our actions rest on the slightest hunch: disregarding sufficiently low credences.

Bostrom is skeptical of this approach:

As a piece of pragmatic advice, the notion that we should ignore small probabilities is often sensible. Being creatures of limited cognitive capacities, we do well by focusing our attention on the most likely outcomes. Yet even common sense recognizes that whether a possible outcome can be ignored for the sake of simplifying our deliberations depends not only on its probability but also on the magnitude of the values at stake. The ignorable contingencies are those for which the product of likelihood and value is small. If the value in question is infinite, even improbable contingencies become significant according to common sense criteria. The postulation of an exception from these criteria for very low-likelihood events is, at the very least, theoretically ugly.

Yet Bostrom here seems to ignore that “the value in question” is infinite for every action, cf. the point that we should maintain some non-zero credence in any empirical claim, including the claim that any given action may effect an infinite amount of (dis)value.

So no action we can point toward is fundamentally different from any other in this respect. The only difference is just whether one action might be more likely to be infinitely better compared to any other action. And when it comes to such credences, I would argue that it is reasonable to ignore sufficiently small probabilities.

First, one could argue that, just as most models of physics break down beyond a certain range, it is reasonable to expect that our ability to discriminate between different credence levels breaks down when we reach a sufficiently fine scale. This is also well in line with the fact that it is generally difficult to put precise numbers on our credence levels with respect to specific claims. Thus, one could argue that we are typically way past the range of error of our intuitive credences when we reach the nineteenth decimal place.

This conclusion can also be reached via a rather different consideration: one can argue that our entire ontological and epistemological framework itself cannot be assumed credible with absolute certainty. Therefore, it would seem that our entire worldview, including this framework of assigning numerical values, or indeed any order at all, to our credences, should itself be assigned some credence of being wrong. And one can then argue, quite reasonably, that once we reach a level of credence in any claim that is lower than our level of credence in, say, the meaningfulness of ascribing credences in this way in the first place, this specific credence should generally be ignored, as it lies beyond what we consider the range of reliability of this framework in the first place.

In sum, I think it is fair to say that, when we only have a tiny credence that some action may be infinitely better than another, we should do more research and look for better reasons to act on, rather than acting on these hunches. We can reasonably ignore exceptionally small credences in practice, as we already do every time we make a decision based on calculations of finite expected values — we then ignore the tiny credence we should have that the value of the outcomes in question is infinite.

Infinitarian Paralysis?

Another thing Bostrom treats in his paper is whether the existence of infinite value implies, on aggregative consequentialist views, that it makes no difference what we do. As he puts it:

Aggregative consequentialist theories are threatened by infinitarian paralysis: they seem to imply that if the world is canonically infinite then it is always ethically indifferent what we do. In particular, they would imply that it is ethically indifferent whether we cause another holocaust or prevent one from occurring. If any non-contradictory normative implication is a reductio ad absurdum, this one is.

To elaborate a bit: the reason it is supposed to be indifferent whether we cause another holocaust is that the net sum of value in the universe supposedly is the same either way: infinite.

It should be noted, though, that whether this really is a problem depends on how we define and calculate the “sum of value”. And the question is then whether we can define this in a meaningful way that avoids absurdities and provides us with a useful ethical framework we can act on.

A potential solution to this conundrum is to give up our attachment to cardinal arithmetic. In a way, this is obvious: if you have an infinite set and add finitely many elements to it, you still have “the same as before”, in terms of the cardinality of the set. Yet, in another sense, we of course do not get “the same as before”, in that the new infinite set is not identical to the one we had before. Therefore, if we insist that adding another holocaust to a universe that already contains infinitely many holocausts should make a difference, we are simply forced to abandon standard cardinal arithmetic. In its stead, we should arguably just take our requirement as an axiom: that adding any amount of value to an infinity of value does make a difference — that it does change the “sum of value”.

This may seem simplistic, and one may reasonably ask how this “sum of value” could be defined. A simple answer is that we could add up whatever (presumably) finite difference we make within the larger (hypothetically) infinite world, and to then consider that the relevant sum of value that should determine our actions, what has been referred to as “the causal approach” to this problem.
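As a toy illustration of the causal approach (the numbers and function names here are hypothetical, chosen only to make the contrast with totals explicit):

```python
import math

# Value already present in a canonically infinite universe (hypothetical).
BACKGROUND_TOTAL = math.inf

def total_value(causal_delta: float) -> float:
    # Standard arithmetic: a finite addition leaves an infinite total unchanged.
    return BACKGROUND_TOTAL + causal_delta

def causal_value(causal_delta: float) -> float:
    # The causal approach: rank actions by the finite difference they make.
    return causal_delta

# Hypothetical finite (dis)values of two actions:
prevent_holocaust = 1000.0
cause_holocaust = -1000.0

# Totals cannot distinguish the two actions (inf == inf)...
assert total_value(prevent_holocaust) == total_value(cause_holocaust)
# ...but the causal approach can.
assert causal_value(prevent_holocaust) > causal_value(cause_holocaust)
```

The point of the sketch is simply that ranking by the causal delta restores the distinctions that the infinite total erases.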

This approach has been met with various criticisms, one of them being that it leaves “the total sum of value” unchanged. As Bostrom puts it:

One consequence of the causal approach is that there are cases in which you ought to do something, and ought to not do something else, even though you are certain that neither action would have any effect at all on the total value of the world.

Yet it is worth noting that “the total value of the world” is not left unchanged on every definition of these terms; it just is on one particular definition, one that we arguably have good reason to consider implausible, since it implies that adding another holocaust makes no difference to the “total value of the world”. If we can help alleviate the extreme suffering of just a single being, while keeping all else equal, this being will hardly agree that “the total value of the world” was left unchanged by our actions, at least in the most plausible sense.

Imagine by analogy a hypothetical Earth identical to ours, with the two exceptions that 1) it has been inhabited by humans for an eternal and unalterable past, over which infinitely many holocausts have taken place, and 2) it has a finite future; the universe it inhabits will end peacefully in a hundred years. Now, if people on this Earth held an ethical theory that does not take its unalterable infinite past into account, and instead focuses on the finite future, including preventing holocausts from happening in their future, would this count against that theory in any way? I fail to see how it could, and yet this is essentially the same as taking the causal approach within an infinite universe, only phrased in purely temporal rather than spatio-temporal terms.

Another criticism that has been leveled against the causal approach is that we cannot rule out that our causal impact may in some sense be infinite, and therefore it is problematic to say that we should just measure the world’s value, and take action based on, whatever finite difference we make. Here is Bostrom again:

When a finite positive probability is assigned to scenarios in which it is possible for us to exert a causal effect on an infinite number of value-bearing locations […] then the expectation value of the causal changes that we can make is undefined. Paralysis will thus strike even when the domain of aggregation is restricted to our causal sphere of influence.

Yet these claims actually do not follow. First, it should again be noted that the situation Bostrom refers to here is in fact the situation we are always in: we should always assign a positive probability to the possibility that we may effect infinite (dis)value. Second, we should be clear that the scenario where we can impact an infinite amount of value, and where we aggregate over the realm we can influence, is fundamentally different from the scenario in which we aggregate over an infinite universe that contains an infinite amount of value that we cannot impact. To the extent there are threats of “infinitarian paralysis” in these two scenarios, they are not identical.

For example, Bostrom’s claim that “the expectation value of the causal changes that we can make is undefined” need not be true even on standard cardinal arithmetic, at least in the abstract (i.e. if we ignore Cromwell’s rule), in the scenario where we focus only on our own future light cone. For it could be that the scenarios in which we can “exert a causal effect on an infinite number of value-bearing locations” were all scenarios that nonetheless contained only finite (dis)value, or, on a dipolar axiology, only a finite amount of disvalue and an infinite amount of value. A concrete example of the latter could be a scenario where the abolitionist project outlined by David Pearce is completed in an eternal civilization after a finite amount of time.

Hence, it is not necessarily the case that “paralysis will strike even when the domain of aggregation is restricted to our causal sphere of influence”, apart from in the sense treated earlier, when we factor in Cromwell’s rule: how should we act given that all actions may effect infinite (dis)value? But again, this is a very different kind of “paralysis” than the one that appears to be Bostrom’s primary concern, cf. this excerpt from the abstract of his paper Infinite Ethics:

Modern cosmology teaches that the world might well contain an infinite number of happy and sad people and other candidate value-bearing locations. Aggregative ethics implies that such a world contains an infinite amount of positive value and an infinite amount of negative value. You can affect only a finite amount of good or bad. In standard cardinal arithmetic, an infinite quantity is unchanged by the addition or subtraction of any finite quantity.

Indeed, one can argue that the “Cromwell paralysis” in a sense negates this latter paralysis, as it implies that it may not be true that we can affect only a finite amount of good or bad, and, more generally, that we should assign a non-zero probability to the claim that we can optimize the value of the universe everywhere throughout, including in those corners that seem theoretically inaccessible.

Adding Always Makes a Difference

As for the infinitarian paralysis supposed to threaten the causal approach in the absence of the “Cromwell paralysis” — how to compare the outcomes we can impact that contain infinite amounts of value? — it seems that we can readily identify reasonable consequentialist principles to act by that should at least allow us to compare some actions and outcomes against each other, including, perhaps, the most relevant ones.

One such principle is the one alluded to in the previous section: that adding something of (dis)value always makes a difference, even if the notional set we are adding it to contains infinitely many similar elements already. In terms of an axiology that holds the amount of suffering in the world to be the chief measure of value, this principle would hold that adding/failing to prevent an instance of suffering always makes for a less valuable outcome, provided that other things are equal, which they of course never quite are in the real world, yet they often are in expectation.

The following abstract example makes, I believe, a strong case for favoring such a measure of (dis)value over the cardinal sum of the units of (dis)value. As I formulate this thought experiment, this unit will, in accordance with my own view, be instances of intense suffering in the universe, yet the point applies generally:

Imagine that we have a universe with a countably infinite amount of instances of intense suffering. We may visualize this universe as a unit ball. Now imagine that we perform an act in this universe that leaves the original universe unchanged, yet creates a new universe identical to the first one. The result is a new universe full of suffering. Imagine next that we perform this same act in a world where nothing exists. The result is exactly the same: the creation of a new universe full of suffering, in the exact same amount. In both cases, we have added exactly the same ball of infinite suffering. Yet on standard cardinal arithmetic, the difference the act makes in terms of the sum of instances of suffering is not the same in the two cases. In the first case, the total sum is the same, namely countably infinite, while there is an infinite difference in the second case: from zero to infinity. If we only count the difference added, however — the “delta universe”, so to speak — the acts are equally disvaluable in the two cases. The latter method of evaluating the (dis)value of the act seems far more plausible than does evaluation based on the cardinal sum of the units of (dis)value in the universe. It is, after all, the exact same act.
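In cardinal terms, the thought experiment turns on a standard fact about countable infinities:

```latex
\aleph_0 + \aleph_0 = \aleph_0 ,
\qquad
0 + \aleph_0 = \aleph_0 .
```

The cardinal sum thus registers no change in the first case and an infinite change in the second, even though the “delta universe” added, and hence the act itself, is identical in both. Measuring the delta alone assigns the same disvalue, \(\aleph_0\), either way.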

This is not an idle thought experiment. As noted above, impacting the creation of new universes is one of the ways in which we may plausibly be able to influence an infinite amount of (dis)value, arguably even the most plausible one. Admittedly, it does rest on certain debatable assumptions about physics, yet these assumptions are arguably more likely to hold than is the possibility of the existence of an eternal civilization. For even disregarding specific civilization-hostile facts about the universe (e.g. the end of stars and a rapid expansion of space that is thought to eventually rip ordinary matter apart), we should, for each year in the future, assign a probability strictly greater than 0 that civilization will go extinct that year, which means that, provided these annual probabilities do not diminish toward zero too quickly, the probability of extinction will get arbitrarily close to 1 within a finite amount of time.
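The compounding of annual risk can be sketched numerically; the per-year risk below is a hypothetical placeholder, and the argument assumes, as noted, that the annual risks do not shrink toward zero too quickly:

```python
# Cumulative survival probability under a constant annual extinction risk p.
# With any fixed p > 0, survival decays geometrically, so the probability of
# extinction gets arbitrarily close to 1 within a finite number of years.

def survival_probability(p_extinction_per_year: float, years: int) -> float:
    return (1.0 - p_extinction_per_year) ** years

# Even a one-in-a-million annual risk compounds toward near-certain extinction
# over cosmological timescales:
p = 1e-6
assert survival_probability(p, 10**8) < 1e-40  # roughly e^(-100)
```

The qualitative conclusion does not depend on the particular value of `p`; any constant positive annual risk yields the same limit.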

In other words, an eternal civilization seems immensely unlikely, even if the universe were to stay perfectly life-friendly forever. The same does not seem true of the prospect of influencing the generation of new universes. As far as I can tell, the latter is in a ballpark of its own when it comes to plausible ways in which we may be able to effect infinite (dis)value, which is not to say that universe creation is more likely than not to become possible, but merely that it seems significantly more likely than other ways we know of in which we could effect infinite (dis)value (though, again, our knowledge of “such ways” is admittedly limited at this point, and something we should probably do more research on). Not only that, it is also something that could be relevant in the relatively near future, and more disvalue could depend on a single such near-future act of universe creation than what is found, intrinsically at least, in the entire future of our civilization. Infinitely more, in fact. Thus, one could argue that it is not our impact on the quality of life of future generations in our civilization that matters most in expectation, but our impact on the generation of universes by our civilization.

Universe Anti-Natalism: The Most Important Cause?

It is therefore not unthinkable that this should be the main question of concern for consequentialists: how does a given action impact the creation of new universes? Or, similarly, that trying to impact future universe generation should be the main cause for aspiring effective altruists. And I would argue that the form this cause should take is universe anti-natalism: avoiding, or minimizing, the creation of new universes.

There are countless ways to argue for this. As Brian Tomasik notes, creating a new universe that in turn gives rise to infinitely many universes “would cause infinitely many additional instances of the Holocaust, infinitely many acts of torture, and worse. Creating lab universes would be very bad according to several ethical views.”

Such universe creation would obviously be wrong from the stance of negative utilitarianism, as well as from similar suffering-focused views. It would also be wrong according to what is known as The Asymmetry in population ethics: that creating beings with bad lives is wrong, and something we have an obligation to not do, while failing to create happy lives is not wrong, and we have no obligation to bring such lives into being. A much weaker, and even less controversial, stance on procreative ethics could also be used: do not create lives with infinite amounts of torture.

Indeed, how, we must ask ourselves, could a benevolent being justify bringing so much suffering into being? What could possibly justify the Holocaust, let alone infinitely many of them? What would be our answer to the screams of “why” to the heavens from the torture victims?

Universe anti-natalism should also be taken seriously by classical utilitarians, as a case can be made that the universe is likely to end up being net negative in terms of algo-hedonic tone. For instance, it may well be that most sentient life that will ever exist will find itself in a state of natural carnage: civilizations may be rare even on planets where sentient life has emerged, and even where civilizations have emerged, they may be unlikely to be sustainable, perhaps overwhelmingly so. Most sentient life might then be expected to exist at the stage at which it has existed for the entire history of sentient life on Earth: a stage where sentient beings are born in great numbers only for the vast majority of them to die shortly thereafter, for instance due to starvation or by being eaten alive, which is most likely a net negative condition, even by wishful classical utilitarian standards. Simon Knutsson’s essay How Could an Empty World Be Better than a Populated One? is worth reading in this context, and of course applies to “no world” as well.

And if one takes a so-called meta-normative approach, where one decides by averaging over various ethical theories, one could argue that the case against universe creation becomes significantly stronger: for instance, by combining an unclear or negative-leaning verdict from a classical utilitarian stance with The Asymmetry and Kantian ethics.

As for those who hold anti-natalism at the core of their values, one could argue that they should make universe anti-natalism their main focus over human anti-natalism (which may not even reduce suffering in expectation), or at the very least expand their focus to also encompass this apparently esoteric position. Not only because the scale is potentially unsurpassable in terms of the number of births prevented, but also because it may be easier: wishful thinking along the lines of “those horrors will not befall my creation” could be more difficult to maintain in the face of horrors that we know have occurred in the past, and we do not seem as attached and adapted, biologically and culturally, to creating new universes as we are to creating new children. And just as anti-natalists argue with respect to human life, being against the creation of new universes need not be incompatible with a responsible sustainment of life in the one that does exist. This might also be a compromise solution that many people would be able to agree on.

Are Other Things Equal?

The discussion above assumes that the generation of a new universe would leave all else equal, or at least leave all else merely “finitely altered”. But how can we be sure that the generation of a new universe would not in fact prevent the emergence of another? Or perhaps even prevent many infinite universes from emerging? We can’t. Yet we do not appear to have any reason for believing that this is the case. As noted above, all else will often be equal in expectation, and that also seems true in this case. We can make counter-Pascalian hypotheses in both directions, and in the absence of evidence for any of them, we appear to have most reason to believe that the creation of a new universe results, in the aggregate, in a net addition of a new universe. But this could of course be wrong.

For instance, artificial universe creation would be dwarfed by the natural universe generation that happens all the time according to inflationary models, so could it not be that the generation of a new universe might prevent some of these natural ones from occurring? I doubt that there are compelling reasons for believing this, but natural universe generation does raise the interesting question of whether we might be able to reduce the rate of this generation. Brian Tomasik has discussed the idea, yet it remains an open, and virtually unexplored, research question. One that could dominate all other considerations.

It may be objected that considerations of identical, or virtually identical, copies of ourselves throughout the universe have been omitted in this discussion, yet as far as I can tell, including such considerations would not change the discussion in a fundamental way. For if universe generation is the main cause and most consequential action to focus on for us, more important even than the intrinsic importance of the entire future of our civilization, then this presumably applies to each copy of ourselves as well. Yet I am curious to hear arguments that suggest otherwise.

A final miscellaneous point I should like to add here is that the points made above may apply even if the universe is, and only ever will be, finite, as the generation of a new finite pocket universe in that case still could bring about far more suffering than what is found in the future light cone of our own universe.

In conclusion, the subjects of the potential to effect infinite (dis)value in general, and of impacting universe generation in particular, are extremely neglected at this point, and a case can be made that more research into such possibilities should be a top priority. It seems conceivable that a question related to such a prospect — e.g. should we create more universes? — will one day be the main ethical question facing our civilization, perhaps even one we will be forced to decide upon in a not too distant future. Given the potentially enormous stakes, it seems worth being prepared for such scenarios — including knowing more about their nature, how likely they are, and how to best act in them — even if they are unlikely.

Induction Is All We Got

In this piece I shall defend what may appear an unusual thesis, namely that all reasoning is ultimately based on induction, and hence that induction is the only way in which we ever know anything. By induction, I here mean inferring what seems right in light of the doubtable data/experience we have accumulated so far. In everything from logic and mathematics to philosophy and psychology, this is invariably how we evaluate what is true. Or so I shall argue.

How can we be sure that the patterns we have reliably observed in the world so far will also exist in other times or places? How can we justify the assumed uniformity of the world that induction seems to rest upon? How can we trust induction when it cannot be deductively justified? This is the problem of induction in a nutshell.

What is interesting, however, and seemingly universally missed, is that exactly the same problem is staring us in the face when it comes to deduction. Logical deductions are also part of the world, and to assume that they will be valid in all times and in all realms is therefore also to assume that the world is uniform in certain ways. It is the exact same assumption, so why is it considered problematic in the case of induction but not in the case of deduction? What is the source of this discrimination?

The answer, I think, is that it just seems true that deduction is universal, and that the opposite claim — that logic is not universal — seems to make no sense. I certainly share this impression, but this does not render deduction wholly undoubtable. We may reasonably have confidence in the statement that logical deductions are universal, but we should be clear that the basis of this belief is itself merely that it seems reasonable to suppose this given that our minds apparently cannot make sense of anything else. More than that, we should also be clear that we then in fact do accept the uniformity of the world (or perhaps assign a high probability to this claim being true), and that we do it on the basis that it just seems reasonable.

Another aspect of the problem of induction is that induction merely is assumed to be valid, and that attempts at justifying it always seem circular. Yet again, how does deduction compare? How do we justify deduction? With deductive arguments? That would be circular as well. With brute assumptions? If so, why is it more problematic to assume the validity of induction?

There really is no fundamental distinction. We accept both induction and deduction because they seem right. Deductions seem obviously reasonable and valid while inductive inferences seem fairly reasonable and probably valid. The only difference, it seems, is the degree of obviousness, a difference I shall try to explain below.

Beliefs: All in Memory

One way to realize the conclusion sketched out above is by recalling the fact that all our beliefs reside in memory. And we know that 1) our memory consists of information we have gathered over time, and 2) our memories can be unreliable. There is nothing logically problematic about this; indeed, this is common knowledge. Yet it implies something rather significant, namely that all our beliefs, including those about logic, are doubtable, and that all our beliefs are a matter of what seems right in light of the doubtable data/experience we have accumulated so far.

This applies to all knowledge, whether inductively or deductively inferred (as we shall see, the latter is a subset of the former). Mathematical proofs, for instance, are often claimed to be certain knowledge, yet our knowledge of mathematical proofs is also contained in memory. And since all mathematical proofs we know of are stored in memory, and since memory is fallible, it follows that our belief in any mathematical proof we hold to be valid is, in fact, fallible.

The idea that mathematical knowledge is certain and rests only on deduction is indeed ridiculous. Take for instance the proof of Fermat’s Last Theorem: only a small fraction of professional mathematicians fully understand this proof, yet in my experience, virtually all mathematicians will say that we know that Fermat’s Last Theorem is true. And this is probably a highly reasonable belief, but let us be clear about how we know it: by trusting the expertise of other mathematicians. And such trust is transparently based on induction; it is not based on deduction. More than that, we know, inductively, that this inductively based trust is fallible.

A famous example would be Alfred Kempe’s proof of the four-color theorem, presented in 1879, which was widely accepted until it was shown to be incorrect in 1890. Another example is Gauss’ proof of the fundamental theorem of algebra, a proof Gauss himself obviously held to be valid, as did many other mathematicians, yet it was not completed until more than a hundred years after Gauss first published it.

So our mathematical knowledge clearly relies strongly on induction, in that we trust others. Indeed, I would argue that, in practice, the majority of any mathematician’s mathematical knowledge is based on such trust in others rather than on their own deductions. Yet to think that we rely on induction merely when it comes to trusting others in the pursuit of what we call deductive knowledge is to miss the point. For the point is that this applies to all mathematical knowledge, including when we have made all the deductions ourselves. There is no fundamental distinction between when others have made the deductions and when we have made them ourselves. In both cases, we trust conclusions made by fallible minds, stored in a fallible memory.

This of course isn’t to say that such trust is unreasonable, yet the nature of this trust should not be missed: it rests on induction. There is no deductive argument that proves our memory to be reliable. Rather, we merely assume the reliability of memory, and 1) this is an assumption that we cannot not make, 2) it is an assumption that all deduction, indeed all knowledge in general, rests upon, and 3), to repeat the point made above, this assumption rests on induction.

Let me explain and justify all these claims in turn. To start with 3), to assume that our memories in this present moment are valid rests on the assumption that the information we have stored in memory earlier still applies. This projected extension of the limited information we know is the core of induction. As for 2), it is trivial that all knowledge, including that derived from deduction, rests on the reliability of memory, since that is where all our knowledge is stored. So to say that we know anything about anything is to assume the validity of our memory — or at least the validity of some aspects of it; more on this below. Lastly, 1), the assumption that we can trust our memory is an assumption we cannot not make because our memory is the position from which we see the world. To even doubt this assumption requires trusting it, since one must then at least trust that one doubts.

“Yet we know our memory to be profoundly unreliable, don’t we?”

Yes, but it is not entirely so, and that is the point. For in order to even discover that our memory is not (entirely) reliable, we must assume that at least some aspects of our memory are — at the very least those aspects of it that hint that our memory is not entirely reliable. In other words, the discovery of the imperfect reliability of memory rests on its partial reliability.

So believing that we cannot trust any aspect of our own memory is nothing less than logically impossible, since such a belief — indeed any belief — itself resides in memory, and thus rests on its (at least partial) reliability. And given this status of logical impossibility, the belief that we cannot trust any aspect of our memory must be considered false with at least the same certainty that we place in other logical conclusions. Indeed, if possible, it should be granted even higher status, since all other beliefs, including purely logical ones, rest upon its negation: that we can trust (at least some aspects of) our memory. That’s right: all deductive knowledge rests on the reliability of memory, and this reliability rests on the validity of induction [again, this was 3) above]. Conclusion: deductive knowledge rests on the validity of induction.

Indeed, the reason we trust deduction is ultimately inductive. For deductions are also, I would argue, experiments that we run in our heads, albeit experiments that reliably produce the same result. We therefore inductively conclude that they will keep on doing the same. What we usually consider matters of induction — for instance, we have observed a thousand white swans; should we expect the next swan to be white given all that we know about the world, including the fact that there are other birds who are not white? — is just different in that we are in a realm where our information seems a lot more incomplete. It is ultimately of the same form.

This also explains the difference in the status of certainty we ascribe to deduction and induction mentioned above: deduction seems obviously reasonable and valid because the experiment goes right every time, as far as we can tell, while (what we usually call) induction seems fairly reasonable and probably valid because it works well most of the time.

So the reason, I believe, that Hume found deduction more valid than induction, and found induction so much more problematic, was, ironically, because induction recommends the former more strongly. Hume’s objection to induction is really an adventure in self-contradiction — in many ways. For instance, the great man claimed, based on his own brain’s reasoning, that a universal rule cannot be derived from particular instances, yet what is this if not itself a universal rule derived from particular instances (of reasoning in his brain)? What is this if not a glaring self-contradiction?

Try as you might, in the realm of belief, there simply is no denying the validity of induction. Again, in order to even express doubts about the validity of induction, one must inescapably rest on what one is trying to doubt, as one then inductively assumes that doubt is a meaningful concept in this moment (it has been so far), that the others whom one expresses one’s doubts to will understand a word of what one says (they have so far), that there still is a problem of induction (it seems there has been so far), etc. Indeed, all beliefs rest on induction, as they rest on the assumption that the justification we have acquired for them in the past still applies in the present, including belief in notions of past, present, and future in the first place, not to mention (tacit) belief in there being such a thing as logic, truth, and falsehood — the ideas that constitute the entire framework in which discussions about induction occur.

“So what justifies induction, then?”

Nothing. In order to even enter the realm of trying to justify something, we have already accepted induction. In asking for a justification for induction, we ask from a position of unacknowledged acceptance of it. Indeed, what justifies the belief that there is a need to justify induction — a belief that itself rests on induction? Nothing. If we believe anything at all, we are already way past the point of accepting induction, knowingly or not. So to the extent we admit of having any beliefs at all, we admit of the validity of induction. We are fundamentally confused about where in our hierarchy of beliefs induction enters the picture. The answer is: underneath it all.

Knowing Good from Bad Induction

To say that reliance on induction is inevitable is obviously not to say that all inductive inferences are valid. So how do we know valid inductive inferences from invalid ones? Via induction, of course.

In a nutshell, we (ideally) assess the truth of a statement in light of all the information we have in our memory — the totality of what we know. This is all we got, and hence all we ever can evaluate truth claims based on. The more the doubtable data points we have accumulated point our beliefs in a certain direction, the stronger those beliefs are, or at least should be.

For example, the claim that the sun will rise tomorrow is a claim that we believe because it fits with, indeed is predicted by, everything we know, from the totality of humanity’s knowledge of physics and astronomy to our everyday experience.

In the same way, we can deem inductive inferences false. For instance, the claim that the sun will always keep rising because it has done that so far is obviously not true, and the way we know this is again via induction: we know of underlying physical principles that “govern” the physical macro patterns that are the dynamics of stars and planets, and these principles, along with astronomical observations of stars elsewhere, imply that the lifetime of our solar system will indeed be finite. That is what all the data points to.

The commonly cited examples of “hard problems” for our (inevitably) inductive reasoning are all problems that arise from paying attention to too narrow a channel of information. For instance, when we say that every swan we have ever seen is white, and therefore all swans must be white, this is simply a bad inference that fails to keep other relevant facts in view, such as the size of our sample, the size of the Earth, and the fact that there are other birds who have a different color, a fact that is relevant when we keep in mind the additional fact that there is a high degree of similarity in patterns across species.

“But what if we did not know about these additional facts? Then the inference seems reasonable.”

First, it should be noted that if we were in that position, we would be ignorant to a degree that is hard to imagine for creatures like us who know a lot. Second, if we were in such a position of knowing virtually nothing, we should indeed be very careful about drawing general conclusions about the world with confidence. If you have seen a thousand swans, and they have all been white, it seems reasonable to expect that the next one you see will be white as well, but it by no means implies that all swans are white.

“But couldn’t our inductive reasoning be wrong, even when we know a lot and we consider the totality of what we know?”

This is possible, yet, as we know inductively, e.g. from statistics, the more we know, the less likely such mistakes are. It is also worth noting how we know of the possibility of the fallibility of inductive inferences in the first place, namely via induction. We know that apparently solid patterns can break because we have witnessed it before. Nations that seemed strong suddenly fell, people who were right about many things were suddenly wrong, proofs that seemed valid were shown not to be, etc. We have observed this meta pattern of patterns sometimes breaking when we don’t expect it, which has taught us, inductively, to be more open-minded about the possibility of the breaking of even apparently solid patterns. It is always induction that teaches us epistemic modesty.

So it is due to inductive reasoning, not in spite of it, that we seem to have some reason to be agnostic concerning the generality of patterns we consider general, such as whether the cosmos looks the same everywhere across time and space — a question that is currently debated among physicists and cosmologists. What we can say here seems much like what we could say as the ignorant swan observers we imagined ourselves to be above: it seems reasonable that the time and space in the proximity of that which we have observed to unfold in certain law-like ways will also unfold in such ways, but we cannot confidently claim that this applies to all time and space.

The Source of the Problem: A Narrow and Confused View of Knowledge

As mentioned above, a narrow focus on certain data and beliefs about the world, as opposed to a focus on the totality of what we know, is the source of many problems in epistemology, including Goodman’s new riddle of induction and the traditional problem of induction itself. In the case of Goodman’s new riddle of induction, the problem is, in a nutshell, that we have no reason to believe that properties such as grue and bleen exist in light of all that we know about physics, as their existence would essentially require a change in the laws of physics that we have no reason to believe possible. So it is not the case that these two hypothetical properties constitute a deep problem for induction; the suggestion that things could be grue or bleen merely constitutes an extremely unlikely hypothesis about the world.

As for the problem of induction itself, a narrow focus is also to blame. Hume made the following claim: “That the sun will not rise tomorrow is no less intelligible a proposition, and implies no more contradiction, than the affirmation, that it will rise.” Yet that this proposition “implies no more contradiction” is simply wrong, since it contradicts pretty much everything we know in fields such as astronomy and physics. And if you can contradict all this, why not also contradict history and claim that there never was a guy named David Hume, and that nobody has ever raised any so-called problem of induction? After all, this is certainly “no less intelligible” or plausible than the claim that the sun will not rise tomorrow. Or to take a more traditional inductive problem: why believe that there is any problem of induction in this moment or the next one just because it seems that there has been in the past? Indeed, why not contradict logical conclusions themselves?

This is surely what Hume means: the claim that the sun will not rise tomorrow seems to imply no logical contradiction, yet this dichotomy between logical and physical knowledge is, I would argue, ultimately misguided. First, in ontological terms, there is no evidence for the existence of some separate logico-mathematical world apart from the physical one — mathematical truths are found in and by the human mind, and given that the human mind is physical, it follows that mathematical truths are found in and by the physical. Second, as mentioned above, in epistemological terms, both what we consider mathematical and physical forms of knowledge ultimately share the same inductive basis — they are stored in our memory based on what we have experienced — which is yet another reason not to strongly privilege one over another, as Hume does. In sum, there is no justification for Hume’s narrow focus on, and privileging of, deductive reasoning and knowledge — his belief that only (what we categorize as) logical truths are valid. Again, deductively based beliefs, like all other beliefs, also rest on induction in the first place.

How We Know Things: It Just Seems That Way

How do we know that we are conscious, or that two plus two equals four? The answer, I would argue, is simply that it appears clear from our experience that this is the case. We ultimately have no deeper justification than this.

And this answer actually does not change when we ask more complicated questions, such as how we know that the Earth is round, or what the name of the current president of the United States is. We know because of experiences that have shaped, and in significant ways are now part of, our present experience from which it just seems obvious what the answer is. We may be able to express a long chain of reasons that compel us to hold the belief we hold, yet at the bottom of this elaborate chain, all we ultimately have is a set of conscious impressions of belief. Or doubt, for that matter, if we don’t happen to know the answer, but the basic mechanics are the same: we weigh our experience and read off from it what our state of belief — or doubt — is; itself a fact about the world.

Every chain of explanations must end somewhere, and, when it comes to our knowledge, the rock bottom of this chain is found in our direct conscious sensations. Ultimately, we do not have a deeper justification for what we know than this: it seems that way from our conscious impressions. This form of foundationalism is, I submit, the solution to the so-called Münchhausen trilemma concerning how we justify what we know.

This is not to say that we cannot question and correct our impressions. We clearly can, as the correction of illusions and biases exemplify, yet our knowledge of such corrections is itself a matter of conscious impressions, for instance impressions that inform us about statistics, which help us correct wrong ones. The ultimate justification for our beliefs is still our experience. And this is indeed how we improve our knowledge of the world: new impressions help update and correct old ones, which in turn makes us form better ones, i.e. impressions that represent the world more accurately.

That our knowledge at bottom rests on experience is also not to say that our knowledge rests on a basis of mere assumptions. A good analogy, I believe, is our knowledge of fundamental physical constants, which are also in some sense primitive, in that they are measured rather than derived from something else. We have no deeper justification for believing what the values of these constants are than our measurements, yet this is clearly distinct from merely assuming these values. Similarly, I would argue that we observe — “measure”, if you will — the fact that we are conscious and that two plus two is four; we do not merely assume this (there is clearly a difference: to arbitrarily assume your friend is in the same room as you is quite distinct from seeing that your friend is in the same room as you). And as in the case of the measurement of fundamental physical constants, direct measurements in consciousness can of course be erroneous, yet when we consistently measure the same result time and time again by running the same experiment, we do seem reasonably justified — inductively, as always — in believing the validity of the measurement.

That our conscious impressions are what our beliefs ultimately rest upon may seem somewhat weak and unsatisfying, yet only if we fail to keep in mind that conscious impressions are in fact all we ever deal in when it comes to our knowledge. This includes the sense that conscious impressions constitute a poor foundation for knowledge: this sense is itself just another appearance in consciousness, resting on the exact foundation it purportedly doubts. And if a statement like “I believe this because it seems that way in light of what I experience” sounds like a weak foundation for knowledge, this, I believe, is mainly because we usually only use this kind of language when it comes to matters we are uncertain about, such as immediate unexamined impressions. In truth, however, this “it is what seems true in light of my experience” is in fact what we always do, regardless of our degree of certainty. One’s knowledge of textbook information is also “just” another conscious impression.

Phenomenological Positivism: Knowledge Built from a Phenomenological Palette

What we do when we model the world is to represent its features with the different colors of the palette of consciousness. Indeed, this is all we ever can do: consciousness is all we ever know, and hence its colors are all we ever can model and represent the world with at the level of our knowledge.

One can fairly consider this account of knowledge a positivist one, although one that is of a distinctly phenomenological and commonsensical sort. For given that consciousness is all we ever know, it is obvious that all facts we know are known via a composition of the various states of consciousness available to us, including the set of facts about the “external world” that can be detected and represented with our conscious minds (and things that fall outside of what we can detect with our conscious minds are obviously the things we cannot know).

So although science is often considered beyond unification, and although universal features shared by all sciences seem to have been deemed non-existent by common consensus, it remains trivially true, to me at least, that all forms of knowledge, whether we deem them “scientific” or not, are known in consciousness, and hence that all our knowledge is at least united by this common feature. In a nutshell, our knowledge of the world is a matter of phenomenological models that appear consistent with phenomenologically observed data. And, again, this “appearing consistent with”, or “seeming right” in light of, all the data is, as a matter of justification, ultimately all we have. This, I submit, applies not only to science in its usual narrow conception, but to reason in general. For instance, this is also how we (ideally) assess the plausibility of different views in, say, ethics and epistemology: by weighing the data, including arguments and counter-arguments, and assessing what seems reasonable in light of it all (and here it is worth being mindful of the fact that genes seem to play a significant role in what “seems reasonable”, also in the realm of ethics and politics, and hence to be intensely skeptical of the “immediate seemings” of one’s crude intuitions, and to probe them deeper).

In this way, this account of knowledge and reason actually breaks down the usual empiricism-rationalism dichotomy: all processes of thought and reasoning are also phenomenally observed sensations, and hence not something different from “observations.” They are indeed themselves impressions — more doubtable data — that influence our view and assessment of the world. Rationalism, as in logical reasoning, is just another mode of empiricism and experiment, one that has strengths and weaknesses like all other “experimental devices”.

It is worth noting that this account of our knowledge, and reason more generally, does not amount to mere Bayesianism in any usual sense. For while Bayesian updating surely shares this general feature of being a matter of updating and estimating degrees of certainty based on all available evidence, and while much of our own updating is overtly Bayesian — for instance, many of us have made updates in our views based on formal Bayesian calculations — there is much more to our knowledge and our updating of our beliefs than mere formal calculations with numerical probabilities. Not all available evidence is represented, or even representable, as numerical probabilities; for a person who does not know what it is like to experience, say, sounds and sights, no amount of formal Bayesian calculations is going to shed light on the matter. One must experience these things to know what they are like. Bayesian updating is merely the formal special case of the more general inductive method of estimating what seems right in light of the doubtable data/experience we have accumulated so far.
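To make the contrast concrete, the formal special case mentioned above can be written out as a single application of Bayes’ theorem. The sketch below is purely illustrative: the function name and all the numbers are hypothetical, chosen only to show what a formal update looks like (and, by the same token, what kind of evidence it cannot capture, such as knowing what an experience is like).

```python
def bayes_update(prior, likelihood, likelihood_given_not):
    """Return P(H | E) via Bayes' theorem.

    prior: P(H), the initial degree of belief in hypothesis H
    likelihood: P(E | H), probability of the evidence if H is true
    likelihood_given_not: P(E | not-H), probability of the evidence if H is false
    """
    # Total probability of observing the evidence at all:
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    # Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
    return likelihood * prior / evidence

# Start undecided (prior = 0.5); observe evidence three times as likely
# under H as under not-H:
posterior = bayes_update(prior=0.5, likelihood=0.6, likelihood_given_not=0.2)
print(posterior)  # 0.75
```

The point in the text is that this kind of calculation, however useful, presupposes that the relevant evidence has already been rendered as numerical probabilities; the more general inductive method operates on experience that need not admit of such representation.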

Do We Have Faith in Induction/Science?

A notion one often hears from religious scholars is that faith in religious claims is no less reasonable than belief in the facts we know from the sciences, since the latter ultimately rest on faith as well: they rest on faith in reason. Yet is this true? In a nutshell, no.

Science is the process of learning about the world by observing it. Therefore, one could argue that science rests on the assumption that we can learn about the world by observing it, which is in fact functionally equivalent to the assumption that induction is valid, since learning about the world by observing it requires that patterns that existed in the past still exist today and in the future — the core of induction.

Yet one need not even make this assumption explicitly, since the assumption that we can learn about the world by observing it is one that we cannot not make. In order to even express the belief that one cannot learn from experience of the world, one has already learned from such experience, namely the experience of one’s own belief. (This inevitability makes it just like the assumption that at least some aspects of memory can be trusted, which is in fact also an equivalent proposition: that we can learn about the world by observing it requires that at least some aspects of our memory are reliable, and for our memory to contain reliable information about the world, it must be possible to learn about the world by observing it.)

Thus, we all implicitly “assume” that we can learn about the world by observing it, whether we are religious or not, and hence making this inescapable “assumption” cannot meaningfully be called a leap of faith. Rather, it is an inescapable fact (one that all other facts rest upon), as there is no intelligible alternative (indeed, the very possibility of intelligibility of any kind rests on learning from observation, as claims cannot be deemed (un)intelligible if they cannot even be learned in the first place). This makes it wholly unlike actual leaps of faith, i.e. believing in things, such as supernatural events, without supporting evidence. The latter is by no means inescapable.

Indeed, claims about some things being a matter of faith only make sense in a context where we have already made “the leap of faith” of accepting that we can learn about the world by observing it, since whether a claim rests on faith is a matter of whether there exists evidence to support it. And all evaluations of evidence must take place in a realm where we have already assumed the relevance of evidence for propositions about the world — i.e. already made the inevitable “assumption” whose status was in question. In other words, in order to assess whether or not something is a matter of faith, we must “assume” the relevance of evidence in the first place; we must accept that we should go with what seems right in light of the doubtable data/experience we have accumulated so far.

One may object that science rests on much more specific assumptions than merely the possibility of learning about the world by observing it, yet, ideally, this should not be the case. For while it is true that specific methodologies have emerged in the sciences over time, the process of science most generally — that is, learning about the world by observing it — is not committed to any specific methodology in principle, which makes all specific methodologies open to revision. If certain methods are shown to be seriously flawed, as has happened before, these methods should be discarded or updated. And this is indeed how the methods we see employed in the sciences today have developed. Placebo-controlled studies and double-blind experiments were not assumed by faith to be the way to “do science” from the outset. Rather, these and other sensible methods of discovery were themselves discovered over years of trial and error.

Thus, what works best, both when it comes to theories and methods, is itself to be settled with observation and examination, not faith. Based on the fundamental principle of learning from observation, science continually refines its own method. In this way, the process of observing and learning about the world is a self-correcting and self-optimizing one.

Doubting the Apparently Undoubtable

As noted earlier, inductive reasoning has shown us that we have good reason to maintain humility about our beliefs. We know that our memory is fallible. As mentioned above, even mathematical proofs held to be valid by many have turned out to be wrong, and this risk of fallibility not only pertains to the logical deductions made by others, but also to those made by ourselves — the appearance that a logical deduction is valid can turn out to be wrong upon closer examination. It has happened before.

So it seems that we should maintain at least some degree of doubt even when it comes to logical deductions that we seem to have reason to be completely certain of, which is not to say that it is reasonable to have more than a negligible degree of such doubt in most cases.

Yet the above-mentioned doubts merely amount to epistemological doubts, doubts about whether our faculties of reasoning accurately track the deeper patterns of the world. We could also have doubts of a deeper ontological nature, namely about the stability of those patterns themselves. For instance, will the laws of physics as we know them apply tomorrow? What about logico-mathematical truths?

Do such questions even make sense? After all, don’t questions concerning what happens tomorrow, questions that rest on the concept of time, already presuppose some basic laws of physics, or at least some elements from the physical framework as we know it? And doesn’t the meaningfulness of doubts concerning whether our logical framework will apply at all tomorrow also itself rest on the validity of that very framework, e.g. that things are either the case or not the case? After all, all talk of whether something applies or not — is true or not — already takes place in the realm of, and therefore presupposes the sensibility of, logical thought. So what does it even mean to say that this framework might no longer apply when the very coherence of “applying” rests on this framework? It seems self-refuting.

It does. Yet even so, we do seem to have reason to maintain at least some degree of humility about these propositions, one reason being the aforementioned “epistemological doubt” — we know our memory is not entirely reliable, and hence we should admit of the possibility that deductions of the sort made above have a small risk of being wrong. Indeed, this argument for the sensibility of (at least a small amount of) doubt seems to pertain to all arguments, including itself (and also the most undoubtable of ethical positions we may hold).

Second, certain drastic changes, such as changes in certain otherwise lawful physical patterns, do not seem inconceivable; indeed, some cosmological theories predict such changes. Therefore, the claim that at least some apparently solid facts about the world may suddenly change cannot be ruled out deductively, it seems. Might the very fabric of existence suddenly change in radically unexpected ways, thereby perhaps altering physical and mathematical truths as we know them? (Again, on a physicalist view of the world, physics and mathematics cannot be separated, which means that what we may call the uniformity of mathematics depends on at least some degree of uniformity of [what we consider] physics). It seems extremely unlikely, but we cannot exclude it with total certainty.

Lastly, it also seems conceivable that we could have new experiences — on a sufficiently exotic drug, for instance — that would suddenly make the so far inconceivable seem conceivable, and thereby make apparently valid deductions and brute facts appear invalid and untrue. Again, the only justification we have for believing what we believe is, ultimately, that “it just seems true.” And while it may be inconceivable to imagine, say, that mathematical truths could suddenly change, it does not, strangely enough, seem inconceivable that such an apparently inconceivable claim could seem conceivable in a radically different state of mind. And if it can seem right in another state of mind, how can we maintain absolute certainty that that state of mind is more wrong than our own present one is? It seems we can’t.

In sum, it seems that even when it comes to the most outrageous of claims, claims we cannot even make any sense of, some small degree of uncertainty about their status still seems in place, although the appropriate degree may be very small indeed. Everything can reasonably be doubted to some degree. Or so it seems.

[A small side note: In terms of practical implications, this small window of doubt might help one soften up painful certainties, such as certainty in fatalism. For while it might be tempting to some to think about the world as being an unalterable multi-dimensional structure that we cannot change in any strong sense, one must admit that this view could in fact be wrong, and hence that trying to change the world for the better indeed might have some chance of making a difference even in a very strong sense. Either way, it seems like one does not lose anything by trying one’s best.]

Inconsistent Skepticism

Our conscious experience seems to represent a world “out there” that is independent of our own minds. But how do we know this representation is at all accurate? How do we know the truth is not rather some well-known skeptical conjecture — for instance, that our experience is all a dream or a computer simulation?

I think there is a lot to be said against skepticism of this sort, the most important one being that it is inconsistent. Knowledge of dreams and simulations is itself found in our experience, and hence to consistently doubt the validity of our experience requires us to doubt the validity — i.e. the meaningfulness and sensibility — of these notions themselves. Yet in our entertainment of skepticism of this sort, these notions themselves are somehow exempt from skepticism. They stand beyond scrutiny, while virtually all other appearances we know of, and all other beliefs we hold, do not.

What can justify such inconsistent skepticism? Nothing, as far as I can see, especially given that claims of the sort that all we experience could be a dream or a computer simulation seem extremely dubious to say the least. Take the claim that our entire experience is a dream. Does anything we know of actually suggest this in the slightest? Not to my knowledge. The state of our consciousness in our dreams is radically different from our waking state. Indeed, within a dream it is even possible to realize that one is dreaming, and to explore one’s consciousness in that state, as many of us have experienced; something similar never happens in our waking state. The only thing that remotely hints that our experience could be a dream is an argument from analogy: Given that our experiences in dreams can seem to convincingly represent the world, yet still turn out to be mere dreams, could our waking state that seems to convincingly represent the world not be a mere dream too?

If dreams were anything like our waking state, this would indeed seem reasonable. Yet the truth is that they are not.

This “the appearance is different” fact may seem to say precious little, yet only if we miss the significance of differences in appearances. By analogy, imagine that you are on holiday in Istanbul. You remember planning the journey, traveling there, being there for the past five days, and presently you are looking at the Sultan Ahmed Mosque while feeling the unbearable summer heat. Now, how do you know that you are not, in fact, in Oslo? Well, just about every single appearance in your consciousness suggests that you are not, and hence you are not in much doubt. And reasonably so.

Yet is this really analogous to the difference in appearance between our dreaming and waking state? Not quite, as I would argue that this analogy fails to do justice to the actual difference between our waking and dreaming state, a difference that is far greater than the difference between a waking experience of Istanbul and Oslo respectively. Hence, I would argue that we have no more reason to suspect that our present experience is a dream than we have reason to suspect that we, say, live in a completely different city than we thought. Yes, the world, including the basis of our experience, may well turn out to be very different from what we expect in many ways. Yet the specific claim that our experience of the world is a dream — something that takes place in the brain of a sleeping person — is, I would argue, extraordinarily implausible in light of all that we know, especially the enormous difference between the character of our waking and dreaming state.

Even stronger skepticism seems justified in the case of the claim that all we experience is a computer simulation, one reason being that we simply have no evidence that computer simulations can mediate conscious minds like our own in the first place — at least no more evidence than we have for believing that, say, tomatoes can (indeed, tomatoes are in many ways far more similar to human brains in physical terms than computers are). Another good reason to be intensely skeptical is that so-called ancestor simulations are in fact impossible.

A similar degree of skepticism seems apt in the case of the claim that all we experience is the result of a brain in a vat. According to what we know from fields such as physics, chemistry and biology, there is, as Daniel Dennett shows in Consciousness Explained, no way to produce an experience like ours by stimulating a brain in a vat. And if we dismiss such knowledge, we might as well dismiss our belief in the existence of brains in the first place — itself a belief about physics and biology that we do not seem justified in granting a more privileged status than we do other solid facts found in the canons of physics and biology.

And since we are dealing with various skeptical hypotheses, it seems worth pointing out that skepticism about the existence of other minds is on no firmer ground, as it indeed has the exact same epistemic status as doubting the existence of brains does. The existence of brains is only known through our own conscious experience, an experience that, according to what is known in that experience itself, is mediated by a physical brain. Based on this, we draw an inferential arrow that connects our experience to physical brains. We go from experience to physical brain. Therefore, drawing an arrow from brain to experience — whether one’s own or that of others — which is really just to draw the exact same arrow in the opposite direction, is no more problematic. Conclusively, doubting the existence of other minds is really no more reasonable than doubting the existence of one’s own brain.

One may argue that there is a difference when we are talking about brains different from our own, yet one could say the same about one’s future or past brain, which is also different from one’s present one. If one believes that one’s own future brain will be conscious — a brain that is similar to, yet still different from, one’s present one — then how can one maintain that the brains of other beings that are also similar to, yet different from, one’s present brain are not conscious as well? Similarly, if one believes that one’s ever-changing brain has mediated conscious states in the past, why should the different brain states of others not mediate consciousness as well? To believe they do not is simply inconsistent.

The problem with skeptical conjectures such as the dream and the simulation hypothesis is, again, that they hold that virtually all the appearances we know from our experience are false, yet the appearance of the possibility that the basis of our experience is something radically different from what we thought — yet still something that we know of from our experience, such as the notion of a dream or a simulation — is not subjected to such doubt at all (in spite of an absence of good reasons for believing in such possibilities in the first place). In other words, these conjectures rest on arbitrarily constrained skepticism.

More than that, these skeptical hypotheses also seem to undermine themselves. For if we accept the premise that our experience indeed is a simulation or a dream, what reason do we have for believing that the worldview we are able to draw from it, including any conclusion about dreams and simulations, has any validity beyond our own simulation or dream? If we are living in a dream or a simulation, it seems that what we think we can say with any certainty about the world, including about dreams and simulations, is likely to be wrong to an unimaginable degree, since it is all based on pure dream or simulation itself. Conclusively, accepting any of these conjectures seems to force us to doubt them strongly, even to make it difficult to make sense of them. And being self-undermining is not a virtue of a conjecture.

Again, what we do when we assess the truth of a proposition is, ideally, to judge its plausibility in light of the totality of what we know. And this is exactly what we fail to do when we deem skeptical conjectures of this sort likely. We go with peculiar arguments, propositions, and concepts, and then doubt everything else, thereby ignoring that the meaning, even the coherence, of these arguments and concepts rest, in subtle and not so subtle ways, on all this other knowledge that they supposedly imply we should doubt, thereby inadvertently destroying their own foundations.

Keeping the totality of our knowledge in view and applying our skepticism consistently leads us, I maintain, to a relatively common sense view of the world, at least when it comes to the basics of the basis of our experience. What we know about the world hints that our experience is mediated by a biological brain just as strongly as our experience hints that the Earth is round; nothing really suggests it is not. In my view, we have no good reason to believe that what we experience is, or even could be, a dream or a simulation, while a very great deal — including consistent thinking based on what we know — strongly suggests it is not.


This post was originally published at my old blog: http://magnusvinding.blogspot.dk/2016/11/induction-is-all-we-got.html
