In Defense of Nuance

The world is complex. Yet most of our popular stories and ideologies tend not to reflect this complexity. Which is to say that our stories and ideologies, and by extension we, tend to have insufficiently nuanced perspectives on the world.

Indeed, falling into a simple narrative through which we can easily categorize and make sense of the world — e.g. “it’s all God’s will”; “it’s all class struggle”; “it’s all the Muslims’ fault”; “it’s all a matter of interwoven forms of oppression” — is a natural and extremely powerful human temptation. And something social constructivists get very right is that this narrative, the lens through which we see the world, influences our experience of the world to an extent that is difficult to appreciate.

All the more important, then, that we suspend our urge to embrace simplistic narratives through which to (mis)understand the world. In order to navigate wisely in the world, we need to have views that reflect its true complexity; not views that merely satisfy our need for simplicity (and social signaling; more on this below). For although simplicity can be efficient, and to some extent is necessary, it can also, when too much relevant detail is left out, be terribly costly. And relative to the needs of our time, I think most of us naturally err on the side of being expensively unnuanced, painting a picture of the world with far too few colors.

Thus, the straightforward remedy I shall propose and argue for here is that we need to control for this. We need to make a conscious effort to gain more nuanced perspectives. This is necessary as a general matter, I believe, if we are to be balanced and well-considered individuals who steer clear of self-imposed delusions and instead act wisely toward the betterment of the world. Yet it is also necessary for our time in particular. More specifically, it is essential in addressing the crisis that human conversation seems to be facing in the Western world at this point in time, a crisis that largely seems to be the result of insufficient nuance in our perspectives.

Some Remarks on Human Nature

There are certain facts about the human condition that we need to put on the table and contend with. These are facts about our limits and fallibility which should give us all pause about what we think we know — both about the world in general as well as ourselves in particular.

For one, we have a whole host of well-documented cognitive biases. There are far too many for me to list them all here, yet some of the most important ones are: confirmation bias (the tendency of our minds to search for, interpret, and recall information that confirms our pre-existing beliefs); wishful thinking (our tendency to believe what we wish were true); and overconfidence bias (our tendency to have excessive confidence in our own beliefs; in one study, people who reported being 100 percent certain about their answer to a question were correct less than 85 percent of the time). And while we can probably all recognize these pitfalls in other people, it is much more difficult to realize and admit that they afflict us as well. In fact, our reluctance to realize this is itself a well-documented bias, known as the bias blind spot.

Beyond realizing that we have fallible minds, we also need to realize the underlying context that has given rise to much of this fallibility, and which continues to fuel it, namely: our social context — both the social context of our evolutionary history and that of our present condition. We humans are deeply social creatures, and it shows at every level of our design, including the level of our belief formation. And we need to be acutely aware of this if we are to form reasonable beliefs with minimal amounts of self-deception.

Yet not only are we social creatures, we are also, by nature, deeply tribal creatures. As psychologist Henri Tajfel showed, one need only assign one group of randomly selected humans the letter “A” and another randomly selected group the letter “B” in order for a surprisingly strong in-group favoritism to emerge. This method for studying human behavior is known as the minimal group paradigm, and it shows something about us that history should already have taught us a long time ago: that human tribalism is like gasoline just waiting for a little spark to be ignited.

This social and tribal nature of ours has implications for how we act and what we believe. It is, for instance, largely what explains the phenomenon of groupthink, which occurs when our natural tendency toward (in-)group conformity leads to a lack of dissenting viewpoints among individuals in a given group, which then, in turn, leads to poor decisions by these individuals.

Indeed, our beliefs about the world are far more socially influenced than we realize. Not just in the obvious way that we get our views from others around us — often without much external validation or testing — but also in that we often believe things in order to signal to others that we possess certain desirable traits, or that we are loyal to them. This latter way of thinking about our beliefs is quite at odds with how we prefer to think about ourselves, yet the evidence for this unflattering view is difficult to deny at this point.

As authors Robin Hanson and Kevin Simler argue in their recent book The Elephant in the Brain, we humans are strategically self-deceived about our own motives, including when it comes to what motivates our beliefs. Beliefs, they argue, serve more functions than just the function of keeping track of what is true of the world. For while beliefs surely do have this practical function, they also often serve a very different, very social function, which is to show others what kind of person we are and what kind of groups we identify with. This makes beliefs much like clothes, which have the practical function of keeping us warm while, for most of us, also serving the function of signaling our taste and group affiliations. And one of Hanson and Simler’s essential points is that we are not aware of the fact that we do this, and that there is an evolutionary reason for this: if we realized (clearly) that we believe certain things for social reasons, and if we realized that we display our beliefs with overconfidence, we would be much less convincing to those we are trying to convince and impress.

Practical Implications of Our Nature

This brief survey of the natural pitfalls and fallibilities of our minds is far from exhaustive, of course. But it shall suffice for our purposes. The bottom line is that we are creatures who naturally want our pre-existing beliefs confirmed, and who tend to display excessive confidence in these beliefs. We do this in a social context, and many of the beliefs we hold serve non-epistemic functions within this context, which include the tribal function of showing others how loyal we are to certain groups, as well as how worthy we are as friends and mates. In other words, we have a natural pull to impress our peers, not just with our behavior but also with our beliefs. And, for socially strategic reasons, we are quite blind to the fact that we do this.

So what, then, is the upshot of all of this? It is clear, I submit, that these facts about ourselves do have significant implications for how we should comport ourselves. In short, they imply that we have a lot to control for if we aspire to have reasonable beliefs — and our own lazy mind, with all its blindspots and craving for simple comfort, is not our friend in this endeavor. The fact that we are naturally biased and tendentious implies that we should doubt our own beliefs and motives. And it implies that we need to actively seek out the counter-perspectives and nuance that our confirmation bias, this vile bane of reason, so persistently struggles to keep us from accessing.

Needless to say, these are not the norms that govern our discourse at this point in time. Sadly, what plays out right now is mostly the unedited script of tribal, confirmation-biased human nature, unfazed by the prefrontal interventions that seem just about the only hope for our rewriting this script into something better.

The Virtues of the Good Conversationalist

Let us elaborate a bit on the implications of our fallibility, and the precepts we should follow if we want to control for these unflattering tendencies and pitfalls of human nature. Recall the study cited above: people who reported being 100 percent certain about their answer to a question were correct less than 85 percent of the time. The fact that we can be so wrong — more than 15 percent of the time when we claim perfect certainty(!) — implies, among other things, that when someone tells us we are wrong, we seem to have a prima facie reason to listen and try our best to understand what they are saying, as they may just be right. Of course, the tendency toward overconfidence will all but surely be shared by this other person as well, who could also be wrong. And our task then lies in finding out which it is. This is the importance of conversation. It is nothing less than the best tool we have, collectively, against being misguided. And that is why we have to become good conversationalists.

What does it take to become that? At the very least, it requires an awareness of our biases, and a deliberate effort to counteract them.

Countering Confirmation Bias

To counteract our confirmation bias, we need to loosen our attachment to pre-existing beliefs, and to seek out viewpoints and arguments that may contradict them. The imperative of doing this derives from nothing less than the basic epistemic necessity of taking all relevant data into consideration rather than a small cherry-picked selection. For the truth is that we all cherry-pick data a little bit here and there in favor of our own position, and so by hearing from people with opposing views, and by examining their cherry-picked data and their particular emphasis and interpretation, we will, in the aggregate, tend to get a more balanced picture of the issue at hand.

And, importantly, we should strive to engage with these other views in a charitable way: by assuming good faith on the part of the proponents of any position; by trying to understand their view as well as possible; and by then engaging with the strongest possible version of that position (i.e. the steel man rather than the straw man version of it). Indeed, it is difficult to overstate just how much the state of human conversation would improve if we all just followed this simple precept: be charitable.

Countering Wishful Thinking

Our propensity for wishful thinking should make us skeptical of beliefs that are convenient and which match up with what we want to be true. If we want there to be a God, and we believe there is one, then this should make us at least a little skeptical of this convenient belief. By extension, our attraction toward the wishful also implies that we should pay more attention to information and arguments that suggest conclusions which are inconvenient or otherwise contrary to what we wish were true. Do we believe the adoption of a vegan lifestyle would be highly inconvenient for us personally? Then we should probably expect to be more than a little biased against any argument in its favor, and indeed, if we suspect the argument has merit, be inclined to ignore it altogether rather than giving it a fair hearing.

Countering Overconfidence Bias

When it comes to correcting for our overconfidence bias, the key virtue to embrace is intellectual humility (or at least so it seems to me). That is, to admit and speak as though we have a limited and fallible perspective on things. In this respect, it also helps to be aware of the social factors that might be driving our overconfidence much of the time. As noted above, we often express certainty in order to signal to third parties, as well as to instill strong doubts in those we engage with. And we do this without being aware of it. This social function of confidence should lead us to update away from bravado and toward being more measured. Again: to be intellectually humble.

Countering In-Group Conformity

Another way in which social forces make us less than reasonable is by compelling us to conform to our peers. As hinted above, our beliefs are subject to in-group favoritism, which highlights the importance of being (especially) skeptical of the beliefs we share with the groups we affiliate with most closely, and of practicing playing the devil’s advocate against these beliefs. And, by extension, of trying to be extra charitable toward the beliefs held by the notional out-group, whether it be “the Left” or “the Right”, “the religious” or “the atheists”.

Beyond that, we should also be aware that our minds likely often paint the out-group in an unfairly unfavorable light, viewing them as much less sincere and well-intentioned — one may even say more evil — than they actually are, however misguided (we may think) their particular views are. And it seems a natural temptation for us to try to score points by publicly broadcasting such a negative view of the out-group as a way of showing our in-group just how unlikely we are to change affiliation.

Thinking in Degrees of Certainty

It seems that we have a tendency to express our views in a very binary, 0-or-1 fashion. We tend to be either clearly for something or clearly against it, be it abortion, efforts to prevent climate change, the death penalty, or universal health care. And it seems to me that what we express outwardly is generally much more absolutist, i.e. more purely 0 or 1, than what happens inwardly, under the hood — perhaps even underneath our conscious awareness — where there is probably more conflicting data than what we are aware of and allow ourselves to admit.

I have observed this pattern in conversations: people will argue strongly for a given position which they continue to insist on, until, quite suddenly it seems, they say that they accept the opposite conclusion. In terms of their outward behavior, they went from 0 to 1 quite rapidly, although it seems likely that the process that took place underneath the hood was much more continuous — a more gradual move from 0 to 1, where the signal “express 1 now” was only sent once some threshold was crossed.

An extreme example of similar behavior found in recent events is that of Omarosa Manigault Newman, who was the so-called Director of African-American Outreach for Donald Trump’s presidential campaign in 2016. She went from describing Trump in adulatory terms and calling him “a trailblazer on women’s issues”, to being strongly against him and calling him a racist and a misogynist. It seems unlikely that this shift was based purely on evidence she encountered after she made her adulatory statements. There probably was a lot of information in her brain that contradicted the claim of Trump’s status as such a trailblazer, but which she ignored and suppressed. And the reason why is quite obvious: she had a political aim. She needed to broadcast the message that Trump was a good person to further a campaign and to further her own career tied to this campaign. It was about signaling first, not truth-tracking (which is not to say that she did not sincerely believe what she said, but her sincere belief was probably just conveniently biased).

The important thing to realize, of course, is that this applies to all of us. We are all inclined to be more like a politician than a scientist in many situations. In particular, we are all inclined to believe and express either a pure 0 or a pure 1 for social reasons. And the nature of these social reasons may vary. It may be about signaling opposition to someone who believes the opposite, or about signaling loyalty to a given group (few groups rally around low-credence claims). It may also be about signaling that we have a mind that is of a strong conviction. After all, doubt is generally not sexy. Just consider the words we usually associate with it, such as uncertainty, confusion, and indecision. Certainty, on the other hand, signals strength, and is commonly associated with more positive words such as decisiveness, confidence, resoluteness, and firmness. And so, for this reason as well, it only seems natural that we would generally be inclined to signal certainty rather than doubt, even when we do not possess anything close to justified certainty.

Fortunately, there exists a corrective for our tendency toward 0-or-1 thinking, which is to think in terms of credences along a continuum, ranging from 0 to 1. For one, this would constitute a more honest form of communication, in that it would force us to carefully weigh all the information that our brain keeps hidden from us, as well as to express its underlying credence in detail — as opposed to merely expressing whether this credence has crossed some given threshold. Yet perhaps even more significantly, thinking in terms of such a continuum would also help subvert the tribal aspect of our either-or thinking by placing us all in the same boat: the boat of degrees of certainty, in which the only thing that differs between us is our level of certainty in any given claim. For example, think how strange it would be for a religious believer to present their religious beliefs by saying that their credence in the existence of a God lies around 93 percent. This is a much weaker statement, in terms of its social signaling function, than a statement such as “I am a Christian”.

Such an honest, more detailed description of one’s beliefs is not good for keeping groups divided by different beliefs. Indeed, it is good for the exact opposite: it helps us move toward a more open and sincere conversation about what we in fact believe and why, regardless of our group affiliations.
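To make the contrast with 0-or-1 thinking a bit more concrete, here is a minimal sketch of what it looks like to hold a belief as a credence and revise it gradually as evidence comes in. The sketch, the update rule (Bayes’ rule), and all of the numbers in it are merely my own illustration, not drawn from any particular study or case:

```python
# A purely illustrative sketch: a belief is held as a probability between
# 0 and 1 and revised gradually by Bayes' rule as evidence arrives,
# rather than being flipped between a pure 0 and a pure 1.
# The prior and the likelihoods below are hypothetical numbers.

def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return the posterior credence after observing one piece of evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

credence = 0.7  # initial credence in some claim
for p_true, p_false in [(0.4, 0.8), (0.3, 0.6)]:  # two pieces of (mildly) disconfirming evidence
    credence = update(credence, p_true, p_false)
    print(round(credence, 2))  # drifts gradually: 0.54, then 0.37
```

The particular numbers do not matter; the shape of the process does. The credence moves continuously as the evidence accumulates, rather than jumping from “yes” to “no” at the moment some hidden threshold is crossed.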

Different Perspectives Can Be Equally True

There are two common definitions of the term “perspective” that are quite different, yet closely related at the same time. One is “a mental outlook/point of view”, while the other is “the art of representing three-dimensional objects on a two-dimensional surface”. And they are related in that the latter can be viewed as a metaphor for the former: our particular perspective, the representation of the world we call our point of view, is in a sense a limited two-dimensional representation of a more complex, multi-dimensional reality. A representation that is bound to leave out a lot of information about this reality. The best we can do, then, is to try to paint the two-dimensional canvas that is our mind so as to make it as rich and informative as possible about the complex and many-faceted world we inhabit.

And an important point for us to bear in mind in our quest for more balanced and nuanced views, as well as for the betterment of human conversation, is that seemingly conflicting reports of different perspectives on the same underlying reality can in fact all be true, as hinted by the following illustrations:

 

[Illustrations: the same object viewed from different angles, casting e.g. a square-shaped reflection from one angle and a circle-shaped reflection from another.]

The same object can have very different reflections when viewed from different angles. Similarly, the same events can be viewed very differently by different people who each have their own unique dispositions and prior experiences. And these different views can all be true; John really does see X when he looks at this event, while Jane really does see Y. And, like the square- and circle-shaped reflections above, X and Y need not be incompatible. (A similar sentiment is reflected in the Jain doctrine of Anekantavada.)

And even when someone does get something wrong, they may nonetheless still be reporting the appearance of the world as it is revealed to them as honestly and accurately as they can. For example, to many of us, it really does seem as though the lines in the following picture are not parallel, although they in fact are:

 

[Image: a visual illusion in which parallel lines appear not to be parallel.]

Which is merely to state the obvious point that it is possible, indeed quite common, to be perfectly honest and wrong at the same time, which is worth keeping in mind when we engage with people who we think are obviously wrong; they usually think they are right, and that we are obviously wrong — and perhaps even dishonest.

Another important point the visual illusion above hints at is that we should be careful not to confuse external reality with our representation of it. Our conscious experience of the external world is not, obviously, the external world itself. And yet we tend to speak as though it were; as though our experience of the external world were not a sophisticated representation, but instead the external world as it is in itself.

This is an evolutionarily adaptive illusion no doubt, but it is an illusion nonetheless. All we ever inhabit is, in the words of David Pearce, our own world simulation, a world of conscious experience residing in our head. And given that we all find ourselves stuck in — or indeed as — such separate, albeit mutually communicating bubbles, it is not so strange that we can have so many disagreements about what we think reality is like. All we have to go on is our own private, phenomenal cartoon model of each other and the world at large; a cartoon model that may get many things right, but which is also sure to miss a lot of important things.

Framing Shapes Our Perspective

From the vantage point of our respective world simulations, we each interpret information from the external world with our own unique framing. And this framing in part determines how we will experience it, as demonstrated by the following illustration, where one can change one’s framing so as to either see a duck or a rabbit:

 

[Image: the classic duck–rabbit figure, which can be seen as either a duck or a rabbit.]

The same goes for the following illustration, where one’s framing determines whether one sees a cube from above or below — or indeed just a two-dimensional pattern without depth:

 

[Image: a cube drawing that can be seen either from above or from below, or as a flat two-dimensional pattern.]

Sometimes, as in the examples above, our framing is readily alterable. In other cases, however, it can be more difficult to just switch our framing, as when it comes to how different people with different life experiences will naturally interpret the same scenario in very different ways. For instance, a physicist might enter a room and see a lot of interesting physical phenomena there. Air consisting of molecules which bounce around in accord with the laws of thermodynamics; sound waves that travel adiabatically across the room; long lamps swinging at their natural frequency while emitting photons. An artistic person, in contrast, may enter the same room and instead see a lot of people. And this person may view these people as a sea of flowing creative potential in the process of being unleashed, inspired by deeply emotional music and a warm glowing light that fits perfectly with the atmosphere of the music.

Although these two perspectives on the events of this same room are very different, neither of them is necessarily wrong. Indeed, they seem perfectly compatible, despite their representing what seem to be two very different cognitive styles — two different paradigms of thinking and perceiving, one may say. And what is important to realize is that a similar story applies to all of us. We all experience the world in different ways, due to our differing biological dispositions, life experiences, and vantage points. And while these different experiences are not necessarily incompatible, it can nonetheless be difficult to achieve mutual understanding between such differing perspectives.

Acknowledging Many Perspectives Is Not a Denial of Truth

It should be noted, however, that none of the above makes a case for the relativistic claim that there are no truths. On the contrary, what the above implies is indeed that it is a truth — as hard and strong as could be — that different individuals can have different perspectives and experiences in reaction to the same external reality, and that it is possible for such differing perspectives to all have merit, even if they seem in tension with each other. And to acknowledge this fact by no means amounts to the illogical statement that no given perspective can ever be wrong and make false claims about reality — being wrong is, sadly, all too common. This middle position of rejecting both the claim that there is only one valid perspective and the claim that there are no truths is, I submit, the only reasonable one on offer.

And the fact that there can be merit in a plurality of perspectives implies that, beyond conceiving of our credences along a continuum ranging from 0 to 1, we also need to think in terms of a diversity of continua in a more general sense if we are to gain a fuller, more nuanced understanding that does justice to reality, including the people around us with whom we interact. More than just thinking in terms of shades of grey found in-between the two endpoints of black and white, we need to think in terms of many different shades of many different colors.

At the same time, it is also important to acknowledge the limits of our understanding of other minds and experiences we have not had. This does not amount to some obscure claim about how we each have our own, wholly incommensurable experiences, and hence that mutual understanding between individuals with different backgrounds is impossible. Rather, it is simply to acknowledge that psychological diversity is real, which implies that we should be careful to avoid the so-called typical mind fallacy, as well as to acknowledge that at least some experiences just cannot be conveyed faithfully with words alone to those who have not had them. And this does, at the very least, pose a challenge to the endeavor of communicating with and understanding each other. For example, most of us have never experienced extreme forms of suffering, such as the experience of being burned alive. And beyond describing this class of experiences with thin yet accurate labels such as “horrible” and “bad”, most of us are surely very ignorant — luckily for us.

However, this realization that we do not know what certain experiences are like is in fact itself an important insight that does help expand and advance our outlook. For it at least helps us realize that our own understanding, as well as the range and variety of experiences we are familiar with, are far from exhaustive. With this realization in mind, we can look upon a state of absolute horror and admit that we have virtually no understanding of just how bad it is, which, I submit, is a significantly greater understanding than beholding it with the same absence of comprehension while failing to admit that this comprehension is absent. The realization that we are ignorant itself constitutes knowledge of sorts. The kind of knowledge that makes us rightfully humble.

Grains of Truth in Different Perspectives

Even when two different perspectives indeed are in conflict with each other, this does not imply that they are necessarily both entirely wrong, as there can still be significant grains of truth in both of them. Most of today’s widely endorsed perspectives and narratives make a wide range of claims and arguments, and even if not all of these stand up to scrutiny, many of them often do, at least when modified slightly. And part of being charitable is to seek out such grains of truth in a position one does not agree with. This can also help us realize which truths and plausible claims might motivate people to support (what we consider) misguided views, and thus help further mutual understanding among us. Therefore, this seems a reasonable precept to follow as well: sincerely ask what might be the grains of truth in the views you disagree with. One can almost always find something, and often a good deal more than one would naively have thought.

As mentioned earlier, it is also possible for different perspectives to support what seem to be very different positions on the same subject without necessarily being wrong in any way, provided they apply different lenses and look in different directions. Indeed, different perspectives on the same issue are often merely the result of different emphases which each focus on certain framings and sets of data rather than others. And thus seemingly incompatible perspectives may in fact all be right about the particular aspects of a given subject that they emphasize, which is why it is important to seek out different treatments of the same subject from multiple angles. Oftentimes, it is not that novel perspectives show our current perspective to be wrong, but merely that it is insufficiently nuanced — i.e. that we have failed to take certain things into account, such as alternative framings, particular kinds of data, and critical counter-considerations.

This is, I believe, a common pattern in human conversation, and another sense in which we should be mindful of the possible existence of different grains of truth, namely: when different views on the same subject are all completely true, yet where each of them merely comprises a small grain in the larger mosaic that is the complete truth. And hence we should remind ourselves that just because we are right does not mean that the person who says something else on the same subject is wrong.

Having made a general case for nuance, let us now turn our eyes toward our time in particular, and why it is especially important to actively seek to be nuanced and charitable today.

Our Time Is Different

Every period in history likely sees itself as uniquely unique. Yet in terms of how humanity communicates, it is clear that our time indeed is a highly unique one. For never before in history has human communication been so screen-based as it is today. Or, expressed equivalently: never before has so much of our communication been without face-to-face interaction. And this has significant implications for how and what we communicate.

It is clear that our brains process communication through a screen in a very different way. Writing a message in a Facebook group consisting of a thousand people does not, for most of us, feel remotely the same as delivering the same message in front of a crowd of a thousand people. And a similar discrepancy between the two forms of communication is found when we interact with just a single person, which is no wonder. Communication through a screen consists of a string of black and white symbols. Face-to-face interaction, in contrast, is composed of multiple streams of information. We read off important cues from a person’s face and posture, as well as from the tone and pace of their voice.

All this information provides a much more comprehensive, one might indeed say more nuanced, picture of the state of mind of the person we are interacting with. We get the verbal content of the conversation (as we would through a screen), plus a ton of information about the emotional state of the other. And beyond being informative, this information also serves the purpose of making the other person relatable. It makes the reality of their individuality and emotions almost impossible to deny, which is much less true when we communicate through a screen.

Indeed, it is as though these two forms of communication activate entirely different sets of brain circuits. Not only in that we communicate via a much broader bandwidth and likely see each other as more relatable when we communicate face-to-face, but also in that face-to-face communication naturally motivates us to be civil and agreeable. When we are in the direct physical presence of someone else, we have a strong interest in keeping things civil enough to allow our co-existence in the same physical space. When we interact through a screen, however, this is no longer a necessity. The notional brain circuitry underlying peaceful co-existence with antagonists can more safely be put on stand-by mode.

The reality of these differences between the two forms of communication has, I would argue, some serious implications. First of all, it highlights the importance of being aware that these two forms of communication indeed are very different, and that we are, in various ways, quite handicapped communicators when we communicate through a screen, often entering a state of mind that perhaps only a sociopath would be able to maintain in a face-to-face interaction. A handicap that further implies that we should be even more aware of the tendencies reviewed above when interacting through a screen, as these tendencies then become much easier and more tempting to engage in. It is (even) more difficult to relate to those who disagree with us, and we have (even) less of an incentive to understand them properly and be civil. Which is to say that it is (even) more difficult to be charitable. Written communication through a screen makes it easier than ever before to paint the out-group antagonists we interact with in an unreasonably unfavorable light.

And our modern means of communication arguably also make it easier than ever before to not interact with the out-group at all, as the internet has made it possible for us to diverge into our own respective in-group echo chambers to an extent not possible in the past. It is therefore now easy to end up in communities in which we continuously echo data that supports our own narrative, which ultimately gives us a one-sided and distorted picture of reality. And while it may be easier than ever to find counter-perspectives if we were to look for them, this is of little use if we mostly find ourselves collectively indulging in our own in-group confirmation bias. As we often do. For instance, feminists may find themselves mostly informing each other about how women are being discriminated against, while men’s rights activists may disproportionately share and discuss ways in which men are discriminated against. And so by joining only one of these communities, one is likely to end up with a skewed, insufficiently nuanced view of reality.

This mode of interaction has serious sociological implications. Indeed, the change in our style of interaction brought about by the internet is probably in large part why, in spite of the promise technology seemed to hold to connect us with each other, we now appear increasingly balkanized, divided along various lines in ways that feed into our tribal nature all too well. Democrats and Republicans, for example, increasingly see each other as a “threat to the nation’s well-being” — significantly more so than they did even just ten years ago. This is a real problem that does not seem to be going away on its own. And one of the greatest hopes we have for improving this situation is, I submit, to become aware of and actively try to control for our own pitfalls. Especially when we interact through screens.

With all the information we have reviewed thus far in mind, let us now turn to some concrete examples of heated issues that divide people today, and where more nuanced perspectives and a greater commitment to being charitable are desperately needed. (I should note, however, that given the brevity of the following remarks, what I write here on these issues is, needless to say, itself bound to fail to express a highly nuanced perspective, as that would require a longer treatment. Nonetheless, the following brief remarks will at least gesture at some ways in which we can generally be more nuanced about these topics.)

Sex Discrimination

As hinted above, there are two groups that seem to tell very different stories about the state of sex discrimination in our world today. On the one hand there are the feminists, who seem to argue that women generally face much more discrimination than men, and on the other, there are the so-called men’s rights activists, who seem to argue that men are, at least in some parts of the world, generally the more discriminated-against sex. And these two claims surely cannot both be right, can they?

If one were to define sex discrimination in terms of some single general measure, a “General Discrimination Factor”, then no, they could not both be right. Yet if one instead talks about concrete forms of discrimination, then it is entirely possible, and indeed clearly the case, that women are discriminated against more than men in some respects, while men face more discrimination in other respects. And it is arguably also much more fruitful to talk about such concrete cases than it is to talk about discrimination “in general”. (In response to those who insist that it is obvious that women face more discrimination everywhere, almost regardless of how one constructs such a general measure, I would recommend watching the documentary The Red Pill, and, for a more academic treatment, reading David Benatar’s The Second Sexism.)

For example, it is a well-known fact that women have, historically, been granted the right to vote much later than men have, which undeniably constitutes a severe form of discrimination against women. Similarly, women have also historically been denied the right to pursue a formal education, and they still are in many parts of the world. In general, women have been denied many of the opportunities that men have had, including access to professions in which they were clearly more than competent to contribute. These are all undeniable facts about undeniably severe forms of discrimination.

However, tempting as it may be to infer, none of this implies that men have not also faced severe discrimination in the past, nor that they escape such discrimination today. For example, it is generally only men who have been subject to conscription — i.e. forced duty to enlist for state service, such as in the military. Historically, as well as today, men have been forced by law to join the military and go to war, often without returning — whether they wanted to or not (sure, some men wanted to join the military, yet the fact that some men wanted to do this does not imply that making it compulsory for virtually all men and only men is not discriminatory; as a side note, it should be noted that many feminists have criticized conscription).

Thus, at a global level, it is true to say that, historically as well as today, women have generally faced more discrimination in terms of their rights to vote and pursue an education, as well as in their professional opportunities in general, while men have faced more discrimination in terms of state-enforced duties.

Different forms of discrimination against men and women are also present at various other levels. For example, in one study, the same job application was sent to different scientists, with half of the applications bearing a female name and the other half a male name; the “female applicants” were generally rated as less competent, and the scientists were willing to offer the “male applicants” more than 14 percent higher pay.

The same general pattern seems to be reported by those who have conducted a controlled experiment in being a man and a woman from “the inside”, namely transgender men (those who have transitioned from being a woman to being a man). Many of these men report being viewed as more competent after their transition, as well as being listened to more and interrupted less. This also fits with the finding that both men and women seem to interrupt women more than they interrupt men.

At the same time, many of these transgender men also generally report that people seem to care less about them now that they are men. As one transgender man wrote about the change in his experience:

What continues to strike me is the significant reduction in friendliness and kindness now extended to me in public spaces. It now feels as though I am on my own: No one, outside of family and close friends, is paying any attention to my well-being.

Such anecdotal reports seem well in line with the finding that both men and women show more aggression toward men than women, as well as with recent research (see page 137) conducted by social psychologist Tania Reynolds, which among other things found that:

[…] female harm or disadvantage evoked more sympathy and outrage and was perceived as more unfair than equivalent male harm or disadvantage. Participants more strongly blamed men for their own disadvantages, were more supportive of policies that favored women, and donated more to a female-only (vs male-only) homeless shelter. Female participants showed a stronger in-group bias, perceiving women’s harm as more problematic and more strongly endorsed policies that favored women.

As these examples show, it seems that men and women are generally discriminated against in different ways. And it is worth noting that these different forms of discrimination are probably in large part the natural products of our evolutionary history rather than some deliberate, premeditated conspiracy (which is obviously not to say that they are ethically justified).

Yet deliberation and premeditation is exactly what is required if we are to step beyond such discrimination. More generally, what seems required is that we get a clearer view of the ways in which women and men face discrimination, and that we then take active steps toward remedying these problems. Something that is only possible if we allow ourselves enough of a nuanced perspective to admit that both women and men are subject to serious discrimination and injustice.

Intersectionality

It seems that many progressives are inspired by the theoretical framework called intersectionality, according to which we should seek to understand many aspects of the modern human condition in terms of interlocking forms of power, oppression, and privilege. One problem with relying on this framework is that it can easily become a case of only seeing nails because all one has is a hammer. If one insists on understanding the world predominantly in terms of oppression and social privilege, one risks seeing it in many places where it is not, as well as overemphasizing its relevance in many cases — and, by extension, underemphasizing the importance of other factors.

As with most popular ideas, there is no doubt a significant grain of truth in some of what intersectional theory talks about, such as the fact that discrimination is a very real phenomenon, that privilege is too, and that both of these phenomena can compound. Yet the narrow focus on only social explanations and versions of these phenomena means that intersectional theory misses a lot about the nature of discrimination and privilege. For example, some people are privileged to be born with genes that predispose them to be very happy, while others have genes that dispose them to have chronic depression. Two such people may be of the same race, gender, and sexuality, and they may be equally able-bodied. Yet they will most likely have very different opportunities and quality of life. A similar thing can be said about genetic differences that predispose individuals to have a higher or lower IQ, as well as about genetic differences that make people more or less physically attractive.

Intersectional theory seems to have very little to say about such cases, even as these genetic factors seem able to impact opportunities and quality of life to a similar degree as discrimination and social exclusion. Indeed, it seems that intersectional theory actively ignores, or at the very least underplays, the relevance of such factors — what may be called biological privileges in general — perhaps because they go against the tacit assumption that inequity and other bad things must be attributable to an oppressive agent or social system in some way, as opposed to just being the default outcome one should expect to find in an apathetic universe.

In general, it seems that intersectional theory significantly underestimates the importance of biology, which is, of course, by no means a mistake that is unique to intersectionality in particular. And it is indeed understandable how such an underestimation can emerge. For the truth is that many of the most relevant human traits, including those of personality and intelligence, are strongly influenced by both genetic and environmental factors. Indeed, around 40-60 percent of the variance of such traits tends to be explained by genetics, and, consequently, the amount of variance explained by the environment lies roughly in this range as well. This means that, with respect to these traits, it is both true to say that cultural factors are extremely significant, and to say that biological factors are extremely significant. And the mistake that many seem to make, including many proponents of intersectionality, is to believe that one of these truths rules out the other.
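As a rough, simplified illustration of why these two truths can coexist, consider the textbook decomposition of trait variance into a genetic and an environmental component (ignoring, for simplicity, gene–environment interactions, correlations, and measurement error, and letting a single 0.5 figure stand in for the 40-60 percent range cited above):

\[
\mathrm{Var}(P) = \mathrm{Var}(G) + \mathrm{Var}(E),
\qquad
h^2 = \frac{\mathrm{Var}(G)}{\mathrm{Var}(P)} \approx 0.5
\quad\Longrightarrow\quad
\frac{\mathrm{Var}(E)}{\mathrm{Var}(P)} = 1 - h^2 \approx 0.5
\]

On this simplified accounting, saying that genes explain about half of the variation and saying that the environment explains about half of it are two sides of the same ledger, not competing claims.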

Another critique one can direct toward intersectional theory is that it often makes asymmetrical claims about how one group, “the privileged”, are unable to understand the experiences of another group of individuals, “the unprivileged”, whatever form the privilege and lack thereof may take. Yet it is rarely conceded that this argument can also, with roughly as much plausibility, be made the other way around: that the (allegedly) unprivileged might not fully understand the experience of the (allegedly) privileged, and that they may, in effect, overstate the differences in their experience, and overstate how easy the (allegedly) privileged in fact have it. A commitment to intellectual openness and honesty would at least require us to not dismiss this possibility out of hand.

A similar critique that intersectional theorists ought to contend with is that some of the people who intersectional theory maintains are discriminated against and oppressed themselves argue that they are not, and some indeed further argue that many of the solutions and practical steps supported by intersectional theorists are often harmful rather than beneficial. Such voices must, at least, be counted as weak anomalies relative to the theory, and considered worthy of serious engagement.

More generally, a case can be made that intersectional theory greatly overemphasizes group membership and identities in its analyses of and attempts to address societal problems. As Brian Tomasik notes:

[…] I suspect it’s tempting for our tribalistic primate brains to overemphasize identity membership and us-vs.-them thinking when examining social ills, rather than just focusing on helping people in general with whatever problems they have. For example, I suspect that one of the best ways to help racial minorities in the USA is to reduce poverty (such as through, say, universal health insurance), rather than exploring ever more intricate nuances of social-justice theory.

A regrettable complication that likely bolsters the focus of intersectionalists is that many people seem to flatly deny that there are any grains of truth to any of the claims intersectional theory makes. Some claim, for instance, that there is no such thing as being transgendered, and that there barely is such a thing as racial or sex discrimination in the Western world today. Rather than serving as a meaningful critique of the overreaches of intersectionality, such unnuanced and ill-informed statements seem likely to only help convince intersectionalists that they are uniquely right while others are dangerously wrong, as well as to suggest to them that more radical tactics may be needed, since current tactics clearly do not work to make other people see basic reality for what it is.

This speaks to the more general point that if we make measured views a rarity, and convince ourselves that all one can do is join either team A or team B — e.g. “camp discrimination exists” or “camp discrimination does not exist” — then we only push people toward division. We risk finding ourselves in a runaway spiral where people eagerly try to signal that they do not belong to the other team, which may in turn push us toward ever more extreme views. The alternative option to this tribal game is to simply aspire toward, and express, measured and nuanced views. That might just be the best remedy against such polarization and toward reasonable consensus. Whether our tribal brains indeed want such a consensus is, of course, a separate question.

A final critique I would direct at mainstream intersectional theory is that, despite its strong focus on unjustified discrimination, it nonetheless generally fails to acknowledge and examine what is, I have argued, the greatest, most pervasive, and most harmful form of discrimination that exists today, namely: speciesism, the unjustified discrimination against individuals based on their species membership. The so-called argument from species overlap is rarely examined, nor are the implications that follow, including when it comes to what equality in fact entails. This renders mainstream versions of intersectionality, as a theory of discrimination against vulnerable individuals, a complete failure.

Political Correctness

Another controversial issue closely related to intersectionality is that of political correctness. What do we mean by political correctness? The answer is actually not straightforward, since the term has a rather complex history throughout which it has had many different meanings. Yet one sense of the term that was at least prominent at one point refers simply to conduct and speech that embodies fairness and common decency toward others, especially in a way that avoids offending particular groups of people. In this sense of the term, political correctness is about not referring to people with ethnic slurs, such as “nigger” and “paki”, or homophobic slurs, such as “faggot” and “dyke”. A more recent sense of the term, in contrast, refers to instances where such a commitment to not offend people has been taken too far (in the eyes of those who use the term), which is arguably the sense in which it is most commonly used today.

This then leads us to what seems the quintessential point of contention when it comes to political correctness, namely: what is too far? What does the optimum level of decency entail? And the only reasonable answer, I believe, will have to be a nuanced one found between the two extremes of “nothing is too offensive” and “everything is too offensive”.

Some seem to approach this subject with the rather unnuanced attitude that feelings of being offended do not matter in any way whatsoever. Yet this view seems difficult to maintain, at least if one is oneself called a pejorative name in an unjoking manner. For most people, such name-calling is likely to hurt — indeed, it can easily hurt quite a lot. And significant amounts of hurt and unpleasantness do, I submit, matter. A universe with fewer, less intense feelings of offense is, other things being equal, better than a universe with more, more intense feelings of offense.

Yet the words “other things being equal” should not be missed here. For the truth is that there can be, indeed there clearly is, a tension between 1) the risk of offending people and 2) talking freely and honestly about the realities of life. And it is not clear what the optimal balance is.

Yet what is quite clear, I would argue, is that if we cannot talk in an unrestricted way about what matters most in life, then we have gone too far. In particular, if we cannot draw distinctions between different kinds of discrimination and forms of suffering, and if we are not allowed to weigh these ills against each other to assess which are most urgent, then we have gone too far. For if we deny ourselves a clear sense of proportion with respect to the problems of the world, we end up undermining our ability to sensibly prioritize our limited resources in a world that urgently demands reasonable prioritization. And this is, I submit, much too high a price to pay to avoid the risk of offending people.

Relationship Styles and Promiscuity

Another subject that a lot of people seem to express quite strong and unnuanced positions on is that of sexual promiscuity and relationship styles. For example, some claim that strict monogamy is the only healthy and viable choice for everybody, while others seem to make more or less the same claim about polyamory: that most people would be happier if they were in loving, sexual relationships with more than one person, and that only our modern culture prevents us from realizing this. Similar opinions can be found on the subject of casual sex. Some say it is not a big deal, while others say it is — for everyone.

An essential thing to acknowledge on this subject, it seems, is the reality of individual differences. Most of these strong opinions seem to arise from the fallacious assumption that other people are significantly like ourselves — i.e. the typical mind fallacy. The truth is that some may well thrive best in monogamous relationships, while others may thrive best in polyamorous relationships; some may well thrive having casual sex, some may not. And in the absence of systematic studies, it is difficult to say how people are distributed in these respects — in terms of what circumstances people thrive best in — as well as how much this distribution can be influenced by culture.

None of this is to say that there is no such thing as human nature when it comes to sexuality, but merely that it should be considered an open question just what this nature is exactly, and how much plasticity and individual variation it entails. And we should all admit this much.

Politics and Making the World a Better Place

The subjects of politics and “how to make the world a better place” more generally are both subjects on which people tend to have strong convictions, limited nuance, and powerful incentives to signal group loyalty. Indeed, they are about as good examples as any of subjects where it is important to be charitable and actively seek out nuance, as well as to acknowledge one’s own biased nature.

A significant step we can take toward thinking more clearly about these matters is to adopt the aforementioned virtue of thinking in terms of continuous credences. Just as expressing a “merely” high credence in the existence of the Christian God is more conducive to open-minded conversation, so expressing a “merely” high credence in any given political ideology, principle, or policy is likely more conducive to honest and constructive conversations and greater mutual updating.

If nothing else, the fact that the world is so complex implies that we will at least have considerable uncertainty about what the consequences of our actions will be. In many cases, we simply cannot know with great certainty which policy or candidate is going to be best (relative to any set of plausible values) all things considered. This suggests that our strong convictions about how a given political candidate or policy is all bad, and about how immeasurably greater the alternatives would be, are likely often overstated. More generally, it implies that our estimates of which actions are best to take, in the realm of politics in particular as well as with respect to improving the world in general, should probably be more measured and humble than they tend to be.

For example: what is your credence that Donald Trump was a better choice (with respect to your core values) than Hillary Clinton for the US presidency in 2016? I suspect most people’s credence on this question is either much too low or much too high relative to what can be justified. For even if one thinks his influence is clearly positive or clearly negative in the short term, this still leaves open the question of what the long-term effects will be. If the short-term effects are negative, for instance, it does not seem entirely implausible that there will be a counter-reaction in the future whose effects will end up being better in the long term, or vice versa. This consideration alone should dampen one’s credence somewhat — away from the extremes and closer toward the middle. A similar argument could be made about grave atrocities and instances of extreme suffering occurring today and in the near future: although it seems unlikely, we cannot exclude that these may in fact lead to a future with fewer atrocities and less suffering in the long term. (Note, however, that none of this implies that one should not fight hard for what one believes to be the best thing; even if one has only, say, a 60 percent credence in some action being better than another, it can still make perfect sense to push very hard for this seemingly better option.)

Or, to take another concrete example: would granting everyone a universal basic income be better (relative to your values) than not doing so? Again, being absolutely certain in either a positive or a negative answer to this question is hardly defensible. It seems more reasonable to maintain a credence that lies somewhere in between. (And in relation to what one’s underlying values are, I would argue that this is one of the very first things we need to reflect upon if we are to make a reasonable effort toward making the world a better place.)

A similar point can be made about existing laws and institutions. When we are young and radical, we have a tendency to find existing laws and social structures to be obviously stupid compared to the brilliant alternatives we ourselves envision. Yet, in reality, our knowledge of the roles played by these existing systems, as well as the consequences of our proposed alternatives, will tend to be quite limited in most cases. And it seems wise to admit this much, and to adjust our credences and plans of action accordingly.

A related pitfall worth avoiding is that of believing a single political candidate or policy to have purely good or purely bad effects; such an outcome seems extraordinarily unlikely. In the words of economist Thomas Sowell, there are no perfect solutions in the real world, only trade-offs. Similarly, it is also worth steering clear of the tendency to look to a single intellectual for the answers to all important questions. For the truth is that we all have blindspots and false beliefs, and virtually everyone is going to be ignorant of things that others would consider common knowledge. Indeed, no single person can read and reflect widely and deeply enough to be an expert on everything of importance. Expertise requires specialization, which means that we must look to different experts if we are to find expert views on a wide range of topics. In other words, the quest for a more complete and nuanced outlook requires us to engage with many different thinkers from very different disciplines.

The preceding notes about ways in which we could be more nuanced on various concrete topics are, of course, merely scratching the surface. Yet they hopefully do serve to establish the core point that nuance is essential if we are to gain a balanced understanding of virtually any complicated issue.

Can We Have Too Much Nuance?

In a piece that argues for the virtues of being nuanced, it seems worth asking whether I am being too one-sided. Might I not be overstating the case in its favor, and should I not be a bit more nuanced about the utility of nuance itself? Indeed, might we not be able to have too much nuance in some cases?

I would be the first to admit that we probably can have too much nuance in many cases. I will grant that in situations that call for quick action, and where there is not much time to build a nuanced perspective, it may well often be better to act on one’s limited understanding than to wait for a more nuanced, yet harder-won, picture. There are many situations like this, no doubt. Yet at the level of our public conversations, this is not often the case. We usually do have time to build a more nuanced picture, and we are rarely required to act promptly. Indeed, we are rarely required to act at all. And, unthinkable as it may seem, it could just be that expressions of agnosticism, and perhaps no public expressions at all on a given hot topic, would tend to serve everyone better than expressions of poorly considered views.

One could perhaps attempt to make a case against nuance with reference to examples where near-equal weight is granted to all considerations and perspectives — reasonable and less reasonable ones alike. This, one may argue, is a bad thing, and surely demonstrates that there is such a thing as too much nuance. Yet while I would agree that weighing arguments so blindly and undiscerningly is unreasonable, I would not consider this an example of too much nuance as such. For being nuanced does not mean giving equal weight to all arguments a posteriori, after all the relevant arguments have been presented. Instead, what it requires is that we at least consider these relevant arguments, and that we strive to be minimally prejudiced toward them a priori. In other words, the quest for appropriately nuanced perspectives demands that we grant equality of opportunity to all arguments, not equality of outcome.

Another objection one may be tempted to raise against being nuanced and charitable is that it implies that we should be submissive and over-accommodating. This does not follow, however. For to say that we should be charitable is not to say that we cannot be firm in our convictions when such firmness is justified, much less that we should ever tolerate disrespect or unfair treatment; we should not. We have no obligation to tolerate bullies and intimidators, and if someone repeatedly fails to act in a respectful, good-faith manner, we have every right, and arguably even good reason, to remove ourselves from them. After all, the maxim “assume the other person is acting in good faith” does not entail that we should not update this assumption as soon as we encounter evidence that contradicts it. And to assert one’s boundaries and self-respect in light of such updating is perfectly consistent with a commitment to being charitable.

A more plausible critique of being nuanced is that it might in some cases be strategically unwise, and that one should instead advocate for one’s views in an unnuanced, polemic manner in order to better achieve one’s objectives. I think this is a decent point. Yet I think there are also good reasons to think that this will rarely be the optimal strategy when engaging in public conversations. First of all, we should acknowledge that, even if we were to grant that this style of communication is superior in a given situation, it still seems advantageous to possess a nuanced understanding of the counter-arguments. For, if nothing else, such an understanding would seem to make one better able to rebut these arguments, regardless of whether one then does so in a nuanced way or not.

And beyond this reason to acquire a nuanced understanding, there are also very good reasons to express such an understanding, as well as to treat the counter-arguments in as fair and measured a way as one can. One reason is the possibility that we might ourselves be wrong, which means that, if we want an honest conversation through which we can make our beliefs converge toward what is most reasonable, then we ourselves also have an interest in seeing the best and most unbiased arguments for and against different views. And hence we ourselves have an interest in moderating our own bravado and confirmation bias, which actively keep us from evaluating our pre-existing beliefs as impartially as we should, as well as an interest in trying to express our own views in a measured and nuanced fashion.

Beyond that, there are also reasons to believe that people will be more receptive to one’s arguments if one communicates them in a way that demonstrates a sophisticated understanding of relevant counter-perspectives, and which lays out opposing views as strongly as possible. This will likely lead people to conclude that one’s perspective is at least built in the context of a sophisticated understanding, and it might thus plausibly be read as an honest signal that this perspective may be worth listening to.

Finally, one may object that some subjects just do not call for any nuance whatsoever. For example, should we be nuanced about the Holocaust? This is a reasonable point. Yet even here, I would argue that nuance is still important, in various ways. For one, if we do not have a sufficiently nuanced understanding of the Holocaust, we risk failing to learn from it. For example, to simply believe that the Germans were evil would itself appear to be the dangerous thing, as opposed to realizing that what happened was the result of primitive tendencies that we all share, as well as the result of a set of ideas which had a strong appeal to the German people for various reasons — reasons that we should seek to understand.

This is all descriptive, however, and so none of it implies taking a particularly nuanced stance on the ethical status of the Holocaust. Yet even in this respect, a fearless search for nuance and perspective can still be of great importance. In terms of the moral status of historical events, for instance, we should have enough perspective to realize that the Holocaust, although it was the greatest mass killing of humans in history, was by no means the only one; and hence that its ethical status is arguably not qualitatively unique compared to other similar events of the past. Beyond that, we should also admit that the Holocaust is not, sadly, the greatest atrocity imaginable, neither in terms of the number of victims it had, nor in terms of the horrors imposed on its victims. Greater atrocities than the Holocaust are imaginable. And we ought to both seriously contemplate whether such atrocities might indeed be actual, as well as to realize that there is a risk that atrocities that are much greater still may emerge in the future.

Conclusion

Almost everywhere one finds people discussing contentious issues, nuance and self-scrutiny seem to be in short supply. And yet the most essential point of this essay is not really one about looking outward and pointing fingers at others. Rather, the point is, first and foremost, that we all need to look into the mirror and ask ourselves some uncomfortable questions. Self-scrutiny can, after all, only be performed by ourselves.

“How might I be obstructing my own quest for truth?”

“How might my own impulse to signal group loyalty bias my views?”

“What beliefs of mine might mostly serve social rather than epistemic functions?”

Indeed, we all need to take a hard look in the mirror and let ourselves know that we are sure to be biased and wrong in many ways. And more than just realizing that we are wrong and biased, we also need to realize that we are limited creatures. Creatures who view the world from a limited vantage point from which we cannot fully integrate and comprehend all perspectives and modes of consciousness — least of all those we have never been close to experiencing.

We need to remind ourselves, continually and insistently, that we should be charitable and measured, and that we should seek out the grains of truth that may exist in different views so as to gain a more nuanced understanding that better reflects the true complexity of the world. Not least ought we remind ourselves that our brains evolved to express overconfident and unnuanced views for social reasons — especially in ways that favor our in-group and oppose our out-group. And we need to do a great deal of work to control for this. We should seek to scrutinize our in-group narrative, and be especially charitable to the out-group narrative.

None of us will ever be perfect in these regards, of course. Yet we can at least all strive to do better.

The Endeavor of Reason

“[…] some hope a divine leader with prophetic voice
Will rise amid the gazing silent ranks.
An idle thought! There’s none to lead but reason,
To point the morning and the evening ways.”

— Abu al-ʿAlaʾ al-Maʿarri

 

What is reason?

One could perhaps say that answering this question itself falls within the purview of reason. But I would simply define reason as the capacity of our minds to decide or assess what makes the most sense, or seems most reasonable, all things considered.

This seems well in line with other definitions of reason. For instance, Google defines reason as “the power of the mind to think, understand, and form judgements logically”, and Merriam-Webster gives the following definitions:

(1) the power of comprehending, inferring, or thinking[,] especially in orderly rational ways […] (2) proper exercise of the mind […]

These definitions all seem to raise the further question of what terms like “logically”, “orderly rational ways”, and “proper” then mean in this context.

Indeed, one may accuse all these definitions of being circular, as they merely seem to deflect the burden of defining reason by referring to some other notion that ultimately just appears synonymous with, and hence does not reductively define, reason. This would also seem to apply to the definition I gave above: “the ability to decide or assess what seems most reasonable all things considered”. For what does it mean for something to “seem most reasonable”?

Yet the open-endedness of this definition does not, I submit, render it useless or empty by any means, any more than defining science in open-ended terms such as “the attempt to discover what is true about the world” renders this definition useless or empty.

Reason: The Core Value of Universities and the Enlightenment

At the level of ideals, working out what seems most reasonable all things considered is arguably the core goal of both the Enlightenment and of universities. For instance, ideally, universities are not committed to a particular ethical view (say, utilitarianism or deontology), nor to a particular view of what is true about the world (say, string theory or loop quantum gravity, or indeed physicalism in general).

Rather, universities seem to have a more fundamental and less preconceived commitment, at least in the ideal, which is to find out which particular views, if any, seem the most plausible in the first place. This means that all views can be questioned, and that one has to provide reasons if one wants one’s view to be considered plausible.

And it is important to note in this context that “plausible” is a broader term than “probable”, in that the latter pertains only to matters of truth, whereas the former covers this and more. That is, plausibility can also be assigned to views, for instance ethical views, that we do not view as strictly true, yet which we find plausible nonetheless (as in: they seem agreeable or reasonable to us).

For this very reason, it would also be problematic to view the fundamental role of universities as (only) being to uncover what is true, as such a commitment may assume too much in many important and disputed academic discussions, such as those about ethics and epistemology, where the question of whether there indeed are truths in the first place, and in what sense, is among the central questions that are to be examined by reason. Yet in this case too, the core commitment remains: a commitment to being reasonable. To try to assess and follow what seems most reasonable all things considered.

This is arguably also the core value of the Enlightenment. At least that seems to be what Immanuel Kant argued for in his essay “What Is Enlightenment?”, in which he further argued that free inquiry — i.e. the freedom to publicly exercise our capacity for reason — is the only prerequisite for enlightenment:

This enlightenment requires nothing but freedom—and the most innocent of all that may be called “freedom”: freedom to make public use of one’s reason in all matters.

And the view that reason should be our core commitment and guide of course dates much further back historically than the Enlightenment. Among the earliest and most prominent advocates of this view was Aristotle, who viewed a life lived in accordance with reason as the highest good.

Yet who is to say that what we find most plausible or reasonable is something we will necessarily be able to converge upon? This question itself can be considered an open one for reasoned inquiry to examine and settle. Kant, for instance, believed that we would all be able to agree if we reasoned correctly, and hence that reason is universal and accessible to all of us.

And interestingly, if one wants to make a universally compelling case against this view of Kant’s, it seems that one has to assume at least some degree of the universality that Kant claimed to exist. And hence it seems difficult, not to say impossible, to make such a case, and to deny that at least some aspects of reason are universal.

Being Reasonable: The Only Reasonable Starting Point?

One can even argue that it is impossible to make a case against reason in general. For as Steven Pinker notes:

As soon as we are having this conversation, as long as we are trying to persuade one another of why you should do something or should believe something, you are already committed to reason. We are not engaged in a fist fight, we are not bribing each other to believe something. We are trying to provide reasons. We are trying to persuade, to convince. As long as you are doing that in the first place — you are not hitting someone with a chair, or putting a gun to their head, or bribing them to believe something — you have lost any argument you have against reason; you have already signed on to reason, whether you like it or not. So the fact that we are having this conversation shows that we are committed to reason. That is the starting point.

Indeed, it seems that any effort to make a reasonable case against reason would have to rest on the very thing it attempts to question, namely our capacity to decide or assess what seems most reasonable all things considered. Thus, almost by definition, it seems impossible to identify a reasonable alternative to the endeavor of reason.

Some might argue that reason itself is unjustified, and that we have to have faith in reason, which then supposedly implies that a dedication to reason is ultimately no more reasonable or solid than is faith in anything whatsoever. Yet this is not the case.

For to say that reason needs justification is not to question reason, but rather to presuppose it, since the arena in which we are expected to provide reasons for what we believe is the arena of reason itself. Thus, if we accept that justification for any given belief is required, then we have already signed on to reason, whereby we have also rejected faith — the idea that justification for some given belief is not required. Again, in trying to provide a justification for reason, or, for that matter, in trying to provide a justification for not accepting reason, one is already committed to the endeavor of reason: the endeavor of deciding or assessing what seems most reasonable, i.e. most justified, all things considered.

And what reasonable alternative could there possibly be to this endeavor? Which other endeavor could a reasoning agent reasonably choose to pursue? None, it seems to me. Universally, all reasoning agents seem bound to conclude that they have this imperative of reason: that they ought to do what seems most reasonable all things considered. That reason, in this sense, is the highest calling of such agents. Anything else would be contrary to what their own reasoning tells them, and hence unreasonable — by their own accounts.

It Seems Reasonable: The Bedrock Foundation of Reasonable Beliefs

The idea that reason demands justification for any given belief may seem problematic, as it gives rise to the so-called Münchhausen trilemma: what can ultimately justify our beliefs — a circular chain of justifications, an infinite chain, or a finite chain (or web) with brute facts at bottom? Supposedly, none of these options are appealing. Yet I disagree.

For I see nothing problematic about having a brute observation, or reason, at the bottom of our chain of justification, which I would indeed argue is exactly what constitutes, and all that ever could constitute, the rock-bottom justification for any reasonable belief. Specifically, that it just seems reasonable.

Many discussions go wrong here by conflating 1) ungrounded assumptions and 2) brute observations, which are by no means the same. For there is clearly a difference between believing that a car just drove by you based on the brute observation (i.e. a conscious sensation) that a car just drove by you, and merely assuming, without grounding in any reason or observation, that a car just drove by you.

Or consider another example: the fundamental constants in our physical equations. We ultimately have no deeper justification for the values of these constants than brute observation. Yet this clearly does not render our knowledge of these values merely assumed, much less arbitrarily or unjustifiably chosen. This is not to say that our observations of these values are infallible; future measurements may well yield slightly different, more precise values. Yet they are not arbitrary or unjustified.

The idea that brute observation cannot constitute a reasonable justification for a belief is, along with the idea that brute assumptions and brute observations are the same, a deeply misguided one, in my view. And this is not only true, I contend, of factual matters, but of all matters of reason, including ethics and epistemology, whether we deem these fields strictly factual or not. For instance, my own ethical view (which I have argued is a universal one), according to which suffering is disvaluable and ought to be reduced, does not, on my account, rest on a mere assumption. Rather, it rests on a brute observation of the undeniable intrinsic disvalue of the conscious states we call suffering. I have no deeper justification than this, nor is a deeper one required or even possible.

As I have argued elsewhere, such a foundationalist account is, I submit, the solution to the Münchhausen trilemma.

Deniers of Reason

If reason is the only reasonable starting point, why, then, do so many seem to challenge and reject it? There are a few things to say in response to this. First, those who criticize and argue against reason are not really, as I have argued above, criticizing reason, at least not in the general sense I have defined it here (since to criticize reason is to engage in it). Rather, they are, at most, criticizing a particular conception of reason, and that can of course be perfectly reasonable (I myself would criticize prevalent conceptions of reason as being much too narrow).

Second, there are indeed those who do not criticize reason, yet who do reject it, at least in some respects. These are people who refuse to join the conversation Steven Pinker referred to above; people who refuse to provide reasons, and who instead engage in forceful methods, such as silencing or extorting others, violently or otherwise. Examples include people who believe in some political ideology or religion, and who choose to suppress, or indeed kill, those who express views that challenge their own. Yet such actions do not pose a reasonable or compelling challenge to reason, nor can they be considered a reasonable alternative to the endeavor of reason.

As for why people choose to engage in such actions and refuse to engage in reason, one can also say a few things. First of all, the ability to engage in reason seems to require a great deal of learning and discipline, and not all of us are fortunate enough to have received the schooling and discipline required. And even then, even when we do have these things, engaging in reason is still an active choice that we can fail to make.

That is, doing what we find most reasonable is not an automatic, reflexive process, but rather a deliberate volitional one. It is clearly possible, for example, to act against one’s own better judgment. To go with seductive impulse and temptation — e.g. for sex, a cigarette, or social status — rather than what seems most reasonable, even to ourselves in the moment of weakness.

Reason Broadly and Developmentally Construed

The conception of reason I have outlined here is, it should be noted, not a narrow one. It is not committed to any particular ontological position, nor is it purely cerebral, as in restricted to merely weighing verbal or mathematical arguments. Instead, it is open to questioning everything, and takes input from all sources.

Nor would I be tempted to argue that we humans have some single, immutable faculty of reason that is infallible. Quite the contrary. Our assessments of what seems most reasonable in various domains rest on a wide variety of faculties and experiences, virtually none of which are purely innate. Indeed, these faculties, as well as our range of experience, can be continually expanded and developed as we learn more, both individually and collectively.

In this way, reason, as I conceive of it, is not only extremely broad but also extremely open-ended. It is not static, but rather self-regulating and self-updating, as when we realize that our thinking is tendentious and biased in many ways, and that our motives might not be what we (would like to) think they are. In this way, our capacity for reasoning has taught itself that it should be self-skeptical.

Yet this by no means gives way to pure skepticism. After all, our discovery of these tendencies is itself a testament to the power of our capacity to reason. Rather than completely undermine our trust in this capacity, discoveries of this kind simultaneously show both the enormous weakness and strength of our minds: how wrong we can be when we are not careful to try to be reasonable, and how much better informed we can become if we are. Such facts do not comprise a case against employing our capacity to reason, but rather a case for even more, and even more careful, employments of this capacity of ours.

Conclusion: A Call for Reason

As noted above, the endeavor of reason is not one that we pursue automatically. It takes a deliberate choice. In order to be able to assess and decide what seems most reasonable all things considered, one must first make an active effort to learn as much as one can about the nature of the world, and then consider the implications carefully.

What I have argued here is that there is no reasonable alternative to doing this; not that there is no possible alternative. For one can surely suspend reason and embrace blind faith, as many religious people do, or embrace unreasoned, incoherent, and self-refuting claims about reality, as many postmodernists do. Or one can go with whatever seems most pleasurable in the moment rather than what seems most reasonable all things considered, as we all do all too often. Yet one cannot reasonably choose such a suspension of reason. Indeed, merely not actively denying reason is not enough. The only reasonable choice, it seems, is to consciously choose to pursue the endeavor of reason.

In sum, I would join Aristotle in viewing reason, broadly construed, as our highest calling. That following what seems most reasonable all things considered is the best, most sensible choice before us. And hence that this is a choice we should all actively make.

 

 

The (Non-)Problem of Induction

David Hume claimed that it is:

[…] impossible for us to satisfy ourselves by our reason, why we should extend that experience beyond those particular instances, which have fallen under our observation. We suppose, but are never able to prove, that there must be a resemblance betwixt those objects, of which we have had experience, and those which lie beyond the reach of our discovery.

And this then gives rise to the problem of induction: how can we defend assuming the so-called uniformity of nature that we take to exist when we generalize our limited experience to that which lies “beyond the reach of our discovery”? For instance, how can we justify our belief that the world of tomorrow will, at least in many ways, resemble the world of yesterday? Indeed, how can we justify believing that there will be a tomorrow at all?

A thing worth highlighting in response to this problem is that, even if we were to assume that we have no justification for believing in such uniformity of nature, this would not imply, as may perhaps seem natural to suppose, that we thereby have justification for believing the opposite: that there is no uniformity of nature. After all, to say that the patterns we have observed so far do not predict anything about states and events elsewhere would also amount to a claim about that which lies “beyond the reach of our discovery”, and so this claim seems to face the same problem.

The claims 1) “there is a certain uniformity of nature” and 2) “there is no uniformity of nature” are both hypotheses about the world. And if we look at the limited part of the world about which we do have some knowledge, it is clear that 1) is true about it: patterns at one point in (known parts of) time and space do indeed predict a lot about patterns observed elsewhere.

Does this then mean that the same will hold true of the part of the world that lies beyond the reach of our discovery? One can reasonably argue that we do not have complete certainty that it will (indeed, one can reasonably argue that we should not have complete certainty about any claim our fallible mind happens to entertain). Yet if we reason as scientists — probabilistically, endeavoring to build the picture of the world that seems most plausible in light of all the available evidence — then it does indeed seem justifiable to say that hypothesis 1) seems much more likely to be true of that which lies “beyond the reach of our discovery” than does hypothesis 2) [not least because to say that hypothesis 2) holds true would amount to assuming an extraordinary uniqueness of the observed compared to the unobserved, whereas believing hypothesis 1) merely amounts to not assuming such an extraordinary uniqueness].
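One way to make this style of reasoning explicit, though nothing in the argument hinges on this particular formalization, is as a comparison of how well each hypothesis predicts the evidence we already have:

$$
\frac{P(H_1 \mid E)}{P(H_2 \mid E)} \;=\; \frac{P(E \mid H_1)}{P(E \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)},
$$

where $E$ stands for the regularities we have in fact observed, $H_1$ for “there is a certain uniformity of nature”, and $H_2$ for “there is no uniformity of nature”. Since $H_1$ assigns a far higher likelihood to the ordered observations we have actually made than $H_2$ does, the ratio favors $H_1$ unless one starts out with an extreme prior against it.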

And if we think in this way — in terms of competing hypotheses — then Hume’s problem of induction suddenly seems rather vacuous. “You cannot prove that any given hypothesis of this kind is correct.” This seems true (although the fact that we have not found such a proof yet does not imply that one cannot be found), but also quite irrelevant, since a deductive proof is not required in order for us to draw reasonable inferences. To say that we have no purely deductive argument for a given conclusion is not the same as saying that we have no justification for believing it (and if one thinks that it is, then one is also committed to the belief that we have no justification for believing, based on previous experience, that the problem of induction also exists in this very moment; more on this below).

Applying Hume’s Claim to Itself

According to Hume’s quote above, the belief that we can make generalizations based on particular instances can never be “satisfied by our reason”. The problem, however, is that, according to our modern understanding of the world in physical terms, all we can ever generalize from, including when we make deductive inferences, is particular instances — particular spatiotemporally located states and processes found in our brains (equivalently, one could also say that all we can ever generalize from, as knowing subjects, are particular states of our own minds).

Thus, Hume’s statement that we can never prove such generalizations must also apply to itself, as it is itself a general claim based on a particular instance of reasoning taking place in Hume’s head in a particular place and time (indeed, Hume’s claim would appear to pertain to all generalizations).

So what justification could Hume possibly provide for this general claim of his? According to the claim itself, no proof can be given for it. Indeed, if Hume could provide a proof for his claim that it is impossible to find a proof for the validity of generalizations based on particular instances, then he would have falsified his own claim, as such a proof is the very thing that the claim holds not to exist. And such an alleged proof would thereby also undermine itself, as what it supposedly shows is its own non-existence.

This demonstrates that Hume’s claim is unprovable. That is, based on this particular instance of reasoning, we can draw the general conclusion that we will never be able to provide a proof for Hume’s claim. And thereby we have in fact proven Hume’s claim wrong, as we have thus provided a proof for a general claim that also pertains to that which lies beyond the reach of our discovery. Nowhere, neither in the realm of the discovered nor the undiscovered, can a proof for Hume’s claim be found.

So we clearly can prove some general claims about that which lies beyond the reach of our experience based on particular instances (of processes in our brains, say), and hence the claim that we cannot is simply wrong.

 

Yet one may object that this conclusion does not contradict what Hume in fact meant when he claimed that we cannot prove the validity of generalizations based on particular instances, since what he meant was rather that we cannot prove the validity of inductive generalizations such as “we have observed X so far, hence X will also be the case in the next instance/in general” — i.e. generalizations whose generality seems impossible to prove.

The problem, however, is that we can also turn this claim on itself, and indeed turn the problem of induction altogether on itself, as we did in a parenthetical statement above: the mere fact that we have not been able to prove the validity of any inductive claims of this sort so far does not imply that such a proof can never be found. In particular, the claim that we cannot prove the validity of any such inductive claim that seems impossible to prove is itself an inductive claim whose generality seems impossible to prove (i.e. it seems to rest on the argument: “we have not been able to prove the validity of any inductive claim of this nature so far, and hence we cannot[/we will never be able to] prove the validity of such a claim”).

And if we accept that this claim, the very claim that gives rise to the problem of induction, is itself a plausible claim that we have good reason to accept in general (or at least just good reason to believe that it will apply in the next moment), then we indeed do believe that we can have good reason to draw (at least some plausible) non-deductive generalizations based on particular instances, which is the very thing Hume’s argument is often believed to cast doubt upon. In other words, in order to even believe that there is a problem of induction in the first place, one must already assume that which this problem is supposed to question and be a problem for.

Indeed, one can make an argument along these lines that it is in fact impossible to give a coherent argument against (the overwhelming plausibility of at least some degree of) the uniformity of nature. For in order to even state an argument or doubt against it, one is bound to rely thoroughly on the very thing one is trying to question. For instance, that words will still mean the same in the next moment as they did in the previous one; that the argument one thought of in the previous moment still applies in the next one; that the problem one was trying to address in the previous moment still exists in the next; etc.

Thus, it actually seems impossible to reasonably, indeed even coherently, doubt that the world has at least some degree of uniformity, which itself seems to constitute a good argument and reason for believing in such uniformity. After all, that something cannot reasonably be doubted, or indeed doubted at all, usually seems a more than satisfying standard for believing it.

So to reiterate: If one thinks we have good reason to take the problem of induction seriously, or indeed just to believe that this problem still exists in this moment (since it has in previous ones), then one also thinks that we do have good reason to make (at least some plausible) non-deductive generalizations about that which lies “beyond the reach of our discovery” based on particular instances. In other words, if one takes the problem of induction seriously, then one does not take the problem of induction seriously at all.

 

How to then draw the most plausible inferences about that which “lies beyond the reach of our discovery” is, of course, far from trivial. Yet we should be clear that this is a separate matter entirely from whether we can draw such plausible inferences at all. And as I have attempted to argue here, we have absolutely no reason to think that we cannot, and good reason to think that we can.

“The Physical” and Consciousness: One World Conforming to Different Descriptions

My aim in this essay is to briefly explain a crucial aspect of David Pearce’s physicalist idealist worldview. In particular, I seek to explain how a view can be both “idealist” and “physicalist”, yet still be a “property monist” view.

Pearce himself describes his view in the following way:

“Physicalistic idealism” is the non-materialist physicalist claim that reality is fundamentally experiential and that the natural world is exhaustively described by the equations of physics and their solutions […]

So Pearce’s view is a monist, idealist view: reality is fundamentally experiential. And this reality also conforms to description in physical terms. Pearce is careful, however, to distinguish this view from panpsychism, which Pearce, in contrast to his own idealist view, considers a property dualist view:

“Panpsychism” is the doctrine that the world’s fundamental physical stuff also has primitive experiential properties. Unlike the physicalistic idealism explored here, panpsychism doesn’t claim that the world’s fundamental physical stuff is experiential. Panpsychism is best treated as a form of property-dualism.

How, one may wonder, is Pearce’s view different from panpsychism, and from property dualist views more generally? This is something I myself have struggled a lot to understand, and have asked him about repeatedly. And my understanding is the following: according to Pearce, there is only consciousness, and its dynamics conform to physical description. Property dualist views, in contrast, view the world as having two kinds of properties: the stuff of the world has insentient physical properties to which separate, experiential properties are somehow attached.

Pearce’s view makes no such division. Instead, on Pearce’s view, description in physical terms merely constitutes a particular (phenomenal) mode of description that (phenomenal) reality conforms to. So to the extent there is a dualism here, it is epistemological, not ontological.

The Many Properties of Your Right Ear

For an analogy that might help explain this point better, consider your right ear. What properties does it have? Setting aside the question concerning its intrinsic nature, it is clear that you can model it in various ways. One way is to touch it with your fingers, whereby you model it via your faculties of tactile sensation (or in neuroanatomical terms: with neurons in your parietal lobe). You may also represent your ear via auditory sensations, for example by hitting it and noticing what kind of sound it makes (a sensation mediated by the temporal lobe). Another way, perhaps the clearest and most practical way for beings like us, is to model it in terms of visual experience: to look at your right ear in the mirror, or perhaps simply imagine it, and thereby have a visual sensation that represents it (mediated by the occipital lobe).

[For most of us, these different forms of modeling are almost impossible to keep separate, as our touching our ears automatically induces a visual model of them as well, and vice versa: a visual model of an ear will often be accompanied by a sense of what it would be like to touch it. Yet one can in fact come a surprisingly long way toward being able to “unbind” these sensations with a bit of practice. This meditation is a good exercise in detaching one’s tactile sense of one’s hands from one’s visual model of them. This one goes even further, as it climaxes with a near-total dissolution of our automatic binding of different modes of experience into an ordered whole.]

Now, we may ask: which of these modes of modeling constitute the modeling we call “physical”? And the answer is arguably all of them, as they all relate to the manifestly external (“physical”) world. This is unlike, say, things that are manifestly internal, such as emotions and thoughts, which we do not tend to consider “physical” in this same way, although all our sensations are, of course, equally internal to our mind-brain.

“The physical” is in many ways a poorly defined folk term, and physics itself is not exempt from this ambiguity. For instance, what phenomenal mode does the field of physics draw upon? Well, it is certainly more than just the phenomenology of equations (to the extent this can be considered a separate mode of experience). It also, in close connection with how most of us think about equations, draws heavily on visuospatial modes of experience (I once carefully went through a physics textbook that covered virtually all of undergraduate level physics with the explicit purpose of checking whether it all conformed to such description, and I found that it did). And we can, of course, also describe your right ear in “physics” terms, for instance by measuring and representing its temperature, its spatial coordinates, its topology, etc. This would give us even more models of your right ear.

 

The deeper point here is that the same thing can conform to description in different terms, and the existence of such a multitude of valid descriptions does not imply that the thing described itself has a multitude of intrinsic properties. In fact, none of the modes of modeling an ear mentioned above say anything about the intrinsic properties of the ear; they only relate to its reflection, in the broadest sense.

And this is where some people will object: why believe in any intrinsic properties? Indeed, why believe in anything but the physical, “reflective”, (purportedly) non-phenomenal properties described above?

To me, as well as to David Pearce (and Galen Strawson and many others), this latter claim is self-undermining and senseless; like a person reading from a book who claims that the paper of the book they are reading from does not exist, only the text does. All these modes of modeling mentioned above, including all that we deem knowledge of “the physical”, are phenomenal. The science we call “physics” is itself, to the extent it is known by anyone, found in consciousness. It is a particular mode of phenomenal modeling of the world, and thus to deny the existence of the phenomenal is also to deny the existence of our knowledge of “physics”.

Indeed, our knowledge of physics and “the physical” attests to this fact as clearly as it attests to anything: consciousness exists. It is a separate question, then, exactly how the varieties of conscious experience relate to descriptions of the world in physical terms, as well as what the intrinsic nature of the stuff of the world is, to the extent it has any. Yet by all appearances, it seems that minds such as our own conform to physical description in terms of what we recognize as brains, and as with the ear, such a physical description can take many forms: a visual representation of a mind-brain, what it is like to touch a mind-brain, the number of neurons it has, its temperature, etc.

These are different, yet valid ways of describing aspects of our mind-brains. Yet like the descriptions of different aspects of an ear mentioned above, these “physical” descriptions, while all perfectly valid, still do not tell us anything about their intrinsic nature. And according to David Pearce, the intrinsic nature of that which we (validly) describe in physical terms as “your brain” is your conscious mind itself. The apparent multitude of aspects of that which we recognize as “brains” and “ears” are just different modes of conscious modeling of an intrinsically monist, i.e. experiential, reality.

Suffering, Infinity, and Universe Anti-Natalism

Questions that concern infinite amounts of value seem worth spending some time contemplating, even if those questions are of a highly speculative nature. For instance, if we assume a general expected value framework of a kind where we evaluate the expected value of a given outcome based on its probability multiplied by its value, then any more than an infinitesimal probability of an outcome that has infinite value would imply that this outcome has infinite expected value. And hence that the expected value of such an outcome would trump that of any outcome with a “mere” finite amount of value.

Therefore, on this framework, even strongly convinced finitists are not exempt from taking seriously the possibility that infinities, of one ethically relevant kind or another, may be real. For however strong a conviction one may hold, maintaining only an infinitesimal probability that infinite value outcomes of some sort could be real seems difficult to defend.
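In schematic terms, on the simple framework assumed above, the point is just this:

$$
\mathbb{E}[\text{outcome}] \;=\; p \cdot V, \qquad \text{so if } V = \infty \text{ and } p > 0 \text{ (however small)}, \text{ then } \mathbb{E}[\text{outcome}] = \infty,
$$

and such an outcome therefore dominates, in expectation, any outcome whose value is merely finite.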

Bounding the Influence of Expected Value Thinking

It is worth making clear, as a preliminary note, that we may reasonably put a bound on how much weight we give such an expected value framework in our ethical deliberations, so as to avoid crazy conclusions and actions; or simply to preserve our sanity, which may also be a priority for some.

In fact, it is easy to point to good reasons for why we should constrain the influence of such a framework on our decisions. For although it seems implausible to entirely reject such an expected value framework in one’s moral reasoning, it would seem equally implausible to consider such a framework complete and exhaustive in itself. One reason is that thinking in terms of expected value is just one way of theorizing about the world among many others, and it seems difficult to justify granting it a particularly privileged status among these, especially given a tool-like conception of our thinking: if all our thinking about the world is best thought of as a tool that helps us navigate the world rather than a set of Platonic ideals that perfectly track truths in a transcendent way, it seems difficult to elevate a single class of these tools, such as thinking in terms of expected value, to a higher status than all others. Another reason is that we cannot readily put numbers on most things in practice, both due to a lack of time in most real-world situations and because, even when we do have time, the numbers we assign are often bound to be entirely speculative, if meaningful at all.

Just as we need more than theoretical physics to navigate the physical world, it seems likely that we will do well not to rely solely on an expected value framework to navigate the moral landscape, and this holds true even if all we care about is maximizing or minimizing the realization of a certain class of states. Using only a single style of thinking makes us inherently vulnerable to mistakes in our judgments, and hence resting everything on one style of thinking without limits seems risky and unwise.

It therefore seems reasonable to limit the influence of this framework, and indeed any single framework, and one proposed way of doing so is by giving it only a limited number of the seats of one’s notional moral parliament; say, 40 percent of them. In this way, we should be better able to avoid the vulnerabilities of relying on a single framework, while remaining open to be guided by its inputs.
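As a toy illustration of the notional moral parliament just described, one might picture something like the following sketch, in which each framework is given a share of the seats and options are compared by a simple seat-weighted vote. All of the frameworks, weights, and scores here are made up for the example, and the simple weighted sum merely stands in for whatever aggregation rule one actually favors.

```python
# A toy "moral parliament": each framework holds a share of the seats,
# and options are compared by a seat-weighted vote. All numbers are
# hypothetical; the point is only that the expected value framework,
# capped at 40 percent of the seats, cannot dictate the outcome alone.

seats = {
    "expected_value": 0.40,
    "common_sense":   0.35,
    "virtue":         0.25,
}

# Each framework's evaluation of two options on an arbitrary -1..1 scale.
evaluations = {
    "expected_value": {"option_a":  1.0, "option_b": -1.0},
    "common_sense":   {"option_a": -0.8, "option_b":  0.9},
    "virtue":         {"option_a": -0.5, "option_b":  0.7},
}

def parliament_score(option):
    """Seat-weighted sum of each framework's evaluation of an option."""
    return sum(seats[f] * evaluations[f][option] for f in seats)

for option in ("option_a", "option_b"):
    print(option, round(parliament_score(option), 3))

# Here option_b wins overall even though the expected value framework
# favors option_a maximally, illustrating how bounding a framework's
# seat share bounds its influence.
```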

What Can Be the Case?

To get an overview, let us begin by briefly surveying (at least some of) the landscape of the conceivable possibilities concerning the size of the universe. Or, more precisely, the conceivable possibilities concerning the axiological size of the universe. For it is indeed possible, at least abstractly, for the universe to be physically finite, yet axiologically infinite; for instance, if some states of suffering are infinitely disvaluable, then a universe containing one or more of such states would be axiologically infinite, even if physically finite.

In fact, a finite universe containing such states could be worse, indeed infinitely worse, than even a physically infinite universe containing an infinite amount of suffering, if the states of suffering realized in the finite universe are more disvaluable than the infinitely many states of suffering found in the physically infinite universe. (I myself find the underlying axiological claim here more than plausible: that a single instance of certain states of suffering — torture, say — is more disvaluable than infinitely many instances of milder states of suffering, such as pinpricks.)

It is also conceivable that the universe is physically infinite, yet axiologically finite; if, for instance, our axiology is non-additive, if the universe contains only infinitesimal value throughout, or if only a freak bubble of it contains entities of value. This last option may seem impossibly unlikely, yet it is conceivable. Infinity does not imply infinite repetition; the infinite sequence ( 1, 0, 0, 0, … ) does not logically have to contain 1 again, and indeed doesn’t.

In terms of physical size, there are various ways in which infinity can be realized. For instance, the universe may be both temporally and spatially infinite in terms of its extension. Or it may be temporally bounded while spatially infinite in extension, or vice versa: be spatially finite, yet eternal. It should be noted, though, that these two may be considered equivalent, if we view only points in space and time as having value-bearing potential (arguably the only view consistent with physicalism, ultimately), and view space and time as a four-dimensional structure. Then one of these two universes will have infinite “length” and finite “breadth”, while the opposite is true of the other one, and a similar shape can thus be obtained via “90 degree” rotation.

Similarly, it is also conceivable (and apparently plausible) that the universe has a finite past and an infinite future, in which case it will always have a finite age, or it could have an infinite past and a finite future. Or, equivalently in spatial terms, be bounded in one spatial direction, yet have infinite extension in another.

Yet infinite extension is not the only conceivable way in which physical infinity may conceivably be realized. Indeed, a bounded space can, at least in one sense, contain more elements than an unbounded one, as exemplified by the cardinality of the real numbers in the interval (0, 1) compared to all the natural numbers. So not only might the universe be infinite in terms of extension, but also in terms of its divisibility — i.e. in terms of notional sub-worlds we may encounter as we “zoom down” at smaller scales — which could have far greater significance than infinite extension, at least if we believe we can use cardinality as a meaningful measure of size in concrete reality.
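The cardinality comparison being invoked here is the standard one:

$$
|\mathbb{N}| = \aleph_0 \;<\; 2^{\aleph_0} = |(0,1)| = |\mathbb{R}|,
$$

so the bounded interval $(0,1)$ contains, in the sense of cardinality, strictly more elements than the unbounded set of natural numbers; and since Cantor's theorem gives $|X| < |2^{X}|$ for every set $X$, there is indeed no largest cardinality.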

Taking this possibility into consideration as well, we get even more possible combinations — infinitely many, in fact. For example, we can conceive of a universe that is bounded both spatially and temporally, yet which is infinitely divisible. And it can then be infinitely divisible in infinitely many different ways. For instance, it may be divisible in such a way that it has the same cardinality as the natural numbers, i.e. its set of “sub-worlds” is countably infinite, or it could be divisible with the same cardinality as the real numbers, meaning that it consists of uncountably many “sub-worlds”. And given that there is no largest cardinality, we could continue like this ad infinitum.

One way we could try to imagine the notional place of such small worlds in our physical world is by conceiving of them as in some sense existing “below” the Planck scale, each with their own Planck scale below which even more worlds exist, ad infinitum. Many more interesting examples of different kinds of combinations of the possibilities reviewed so far could be mentioned.

Another conceivable, yet supremely speculative, possibility worth contemplating is that the size of the universe is not set in stone, and that it may be up to us/the universe itself to determine whether it will be infinite, and what “kind” of infinity.

Lastly, it is also conceivable that the size of the universe, both in physical and axiological terms, cannot faithfully be conceived of with any concept available to us. So although the conceivable possibilities are infinite, it remains conceivable that none of them are “right” in any meaningful sense.

What Is the Case? — Infinite Uncertainty?

Unfortunately, we do not know whether the universe is infinite or not; or, more generally, which of the possibilities mentioned above are true of our condition. And there are reasons to think that we will never know with great confidence. For even if we were to somehow encounter a boundary encapsulating our universe, or otherwise find strong reasons for believing in one, how could we possibly exclude that there might be something beyond that boundary? (Not to mention that the universe might still be infinitely divisible even if bounded.) Or, alternatively, even if we thought we had good reasons to believe that our universe is infinite, how can we be sure that the limited data we base that conclusion on can be generalized to locations arbitrarily far away from us? (This is essentially the problem of induction.)

Yet even if we thought we did know whether the universe is infinite with great confidence, the situation would arguably not be much different. For if we accept the proposition that we should have more than infinitesimal credence in any empirical claim about the world, what is known as Cromwell’s rule (I have argued that this applies to all claims, not just [stereotypically] “empirical” claims), then, on our general expected value framework, it would seem that any claim about the reality of infinite value outcomes should always be taken seriously, regardless of our specific credences in specific physical and axiological models of the universe.

In fact, not only should the conceivable realizations of infinity reviewed above be taken seriously (at least to the extent that they imply outcomes with infinite (dis)value), but so should a seemingly even more outrageous notion, namely that infinite (dis)value may rest on any particular action we do. However small a non-zero real-valued probability we assign such a claim — e.g. that the way you prepare your coffee tomorrow morning is going to impact an infinite amount of value — the expected value of getting the, indeed any, given action right remains infinite.

How should we act in light of this outrageous possibility?

Pascalian and Counter-Pascalian Claims

The problem, or perhaps our good fortune, is that, in most cases arguably, we do not seem to have reason to believe that one course of action is more likely to have an infinitely better outcome than another. For example, in the case of the morning coffee, we appear to have no more reason to believe that, say, making a strong cup of coffee will lead to infinitely more disvalue than making a mild one will, rather than it being the other way around. For such hypotheses, we seem able to construct an equal and oppositely directed counter-hypothesis.
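Put schematically, and assuming, as suggested above, that we have no asymmetric evidence favoring either direction, the situation is one of mirror-image credences:

$$
p_1 = P(\text{strong coffee leads to infinitely worse outcomes}), \qquad p_2 = P(\text{mild coffee leads to infinitely worse outcomes}),
$$

and if $p_1 = p_2$, the infinite terms provide no guidance for choosing between the two options; only if the credences differ, however slightly, does the problem of fanaticism discussed below get any grip.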

Yet even if we concede that this is the case most of the time, what about situations where this is not the case? What about choices where we do have slightly better reasons to believe that one outcome will be infinitely better than another one?

This is difficult to address in the absence of any concrete hypotheses or scenarios, so I shall here consider the two specific cases, or classes of scenarios, where a plausible reason may be given in favor of thinking that one course of action will influence infinitely more value than another. One is the case of an eternal civilization: our actions may impact infinite (dis)value by impacting whether, and in what form, an eternal civilization will exist in our universe.

In relation to the (extremely unlikely) prospect of the existence of such a civilization, it seems that we could well find reasons to believe that we can impact an infinite amount of value. But the crucial question is: how? From the perspective of negative utilitarianism, it is far from clear what outcomes are most likely to be infinitely better than others. This is especially true in light of the other class of ways in which we may plausibly impact infinite value that I shall consider here, namely by impacting the creation of, or the unfolding of events in, parallel universes, which may eventually be infinitely numerous.

For not only could an eternal civilization that is the descendant of ours be better in “our universe” than another eternal civilization that may emerge in our place if we go extinct; it could also be better with respect to its effects on the creation of parallel universes, in which case it may be normative for negative utilitarians to work to preserve our civilization, contrary to what is commonly considered the ultimate corollary of negative utilitarianism (and this could also hold true if the temporal extension of our civilization is bound to be finite). Indeed, this could be the case even if no other civilization were to emerge instead of ours: if the impact our civilization will have on other universes results in less suffering than what would otherwise be created naturally. It is, of course, also likely that the opposite is the case: that the continuation of our civilization would be worse than another civilization or no civilization. And I must admit that I have no idea what is more likely to be the case.

So in these cases where reasons pointing more in one way than another plausibly could be found, it is not clear which direction that would be. Except perhaps in the direction that we should do more research on this question: which actions are more likely to reduce infinitely more suffering than others? Indeed, from the point of view of a suffering-focused expected value framework, it would seem that this should be our highest priority.

Ignoring Small Credences?

One may be skeptical of my claim above: can it really be true that the considerations, or at least my considerations, in the case of the continuation of civilization cancel out exactly? Is there not even the smallest difference? Not even a hunch?

In his paper on infinite ethics, Nick Bostrom argues that such an exact cancellation seems extraordinarily unlikely, and that small tips in the balance seem to have counter-intuitive, if not catastrophic, consequences:

This cancellation of probabilities would have to be perfectly accurate, down to the nineteenth decimal place and beyond. […]

It would seem almost miraculous if these motley factors, which could be subjectively correlated with infinite outcomes, always managed to conspire to cancel each other out without remainder. Yet if there is a remainder—if the balance of epistemic probability happens to tip ever so slightly in one direction—then the problem of fanaticism remains with undiminished force. Worse, its force might even be increased in this situation, for if what tilts the balance in favor of a seemingly fanatical course of action is the merest hunch rather than any solid conviction, then it is so much more counterintuitive to claim that we ought to pursue it in spite of any finite sacrifice doing so may entail. The “exact-cancellation” argument threatens to backfire catastrophically.

I do not happen to share Bostrom’s view, however. Apart from the aforementioned bounding of the influence of expected value thinking, there are also ways, from within the expected value framework itself, to avoid the apparent craziness of letting our actions rest on the slightest hunch: namely, by disregarding sufficiently low credences.

Bostrom is skeptical of this approach:

As a piece of pragmatic advice, the notion that we should ignore small probabilities is often sensible. Being creatures of limited cognitive capacities, we do well by focusing our attention on the most likely outcomes. Yet even common sense recognizes that whether a possible outcome can be ignored for the sake of simplifying our deliberations depends not only on its probability but also on the magnitude of the values at stake. The ignorable contingencies are those for which the product of likelihood and value is small. If the value in question is infinite, even improbable contingencies become significant according to common sense criteria. The postulation of an exception from these criteria for very low-likelihood events is, at the very least, theoretically ugly.

Yet Bostrom here seems to ignore that “the value in question” is infinite for every action, cf. the point that we should maintain some small credence in every claim, including the claim that any given action may effect an infinite amount of (dis)value.

So in this way, no action we can point toward is fundamentally different from any other. The only difference lies in what our credence is that a particular action may make “an infinite difference”, or that it makes “the greatest infinite difference”, compared to any other action. And when it comes to such credences, I would argue that it is eminently reasonable to ignore sufficiently small ones. In my view, not doing so would be the ugly thing, for the following reasons:

First, one could argue that, just as most models of physics break down beyond a certain range, it is reasonable to expect our ability to discriminate between different credence levels to break down when we reach a sufficiently fine scale. This is also well in line with the fact that it is generally difficult to put precise numbers on our credence levels with respect to specific claims. Thus, one could argue that we are far beyond the precision of our intuitive credences when we reach the nineteenth decimal place.

This conclusion can also be reached via a rather different consideration: one can argue that our entire ontological and epistemological framework itself cannot be assumed credible with absolute certainty. Therefore, it would seem that our entire worldview, including this framework of assigning numerical values, or indeed any order at all, to our credences, should itself be assigned some credence of being wrong. And one can then argue, quite reasonably, that once we reach a level of credence in any claim that is lower than our level of credence in, say, the meaningfulness of ascribing credences in this way in the first place, this specific credence should be ignored, as it lies beyond what we consider the range of reliability of this framework in the first place.

In sum, I think it is fair to say that, when we only have a tiny credence that some action may be infinitely better than another, we should do more research and look for better reasons to act on, rather than acting on such hunches. We can reasonably ignore exceptionally small credences in practice, as we indeed already do every time we make a decision based on calculations of finite expected values; in doing so, we ignore the tiny credence we should have that the value of the outcomes in question is infinite.

Infinitarian Paralyses?

Another thing Bostrom treats in his paper, actually the main subject of it, is whether the existence of infinite value implies, on aggregative consequentialist views, that it makes no difference what we do. As he puts it:

Aggregative consequentialist theories are threatened by infinitarian paralysis: they seem to imply that if the world is canonically infinite then it is always ethically indifferent what we do. In particular, they would imply that it is ethically indifferent whether we cause another holocaust or prevent one from occurring. If any non-contradictory normative implication is a reductio ad absurdum, this one is.

To elaborate a bit: the reason it is supposed to be indifferent whether we cause another holocaust is that the net sum of value in the universe supposedly is the same either way: infinite.

It should be noted, though, that whether this really is a problem depends on how we define and calculate the “sum of value”. And the question is then whether we can define this in a meaningful way that avoids absurdities and provides us with a useful ethical framework we can act on.

In my view, the solution to this conundrum is to give up our attachment to cardinal arithmetic. In a way, this is obvious: if you have an infinite set and add finitely many elements to it, you still have “the same as before”, in terms of the cardinality of the set. Yet, in another sense, we of course do not get “the same as before”, in that the new infinite set is not identical to the one we had before. Therefore, if we insist that adding another holocaust to a universe that already contains infinitely many holocausts should make a difference, we are simply forced to abandon standard cardinal arithmetic. In its stead, we should arguably just take our requirement as an axiom: that adding any amount of value to an infinity of value does make a difference — that it does change the “sum of value”.

This may seem simplistic, and one may reasonably ask how this “sum of value” could be defined. A simple answer is that we could add up whatever (presumably) finite difference we make within the larger (hypothetically) infinite world, and then consider that the relevant sum of value determining our actions, which is what has been referred to as “the causal approach” to this problem.
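To give a rough sense of how such a sum might be written down (my own hedged notation, not a definition taken from the literature): let C denote our causal sphere of influence, and let v(l | A) be the finite value realized at value-bearing location l if we perform action A. The quantity guiding a choice between actions A and B would then be the difference restricted to C,

\[
\Delta V(A, B) \;=\; \sum_{l \in C} \big( v(l \mid A) - v(l \mid B) \big),
\]

which can remain finite and well-defined even if the value of the world as a whole is infinite.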

This approach has been met with various criticisms, one of them being that it leaves “the total sum of value” unchanged. As Bostrom puts it:

One consequence of the causal approach is that there are cases in which you ought to do something, and ought to not do something else, even though you are certain that neither action would have any effect at all on the total value of the world.

I fail to see the appeal of this criticism, however, not least because it is deceptively phrased. For how is the “total value of the world” defined here? It is not the case that “the total value of the world” is left unchanged on every possible definition of these terms; it just is on one particular definition, indeed one we have good reason to consider implausible and irrelevant. And the reason is that it implies that adding another holocaust makes no difference to the “total value of the world”. It then seems a strange move to say that it counts against a theory that it holds the prevention of finitely many holocausts to be normative because this has no “effect at all on the total value of the world” — by this very implausible definition. If forced to choose between these two mutually exclusive starting points — adding a holocaust makes a difference to the total value of the world or it does not — I think it is an easy choice. If we can help alleviate the extreme suffering of just a single being, while keeping all else equal, this being will hardly agree that “the total value of the world” was left unchanged by our actions. Not in any sensible sense.

More than that, I also think that for an ethical theory to say that we should ignore whatever lies outside our sphere of influence should not be considered a weakness, but rather a strength. Imagine by analogy a hypothetical Earth identical to ours, with the two exceptions that 1) it has been inhabited by humans for an eternal and unalterable past, over which infinitely many holocausts have taken place, and 2) it has a finite future; the universe it inhabits will end peacefully in a hundred years. Now, if people on this Earth held an ethical theory that does not take this unalterable infinite past into account, and instead focuses on the finite future, including preventing holocausts from happening in that future, would this count against that theory in any way? I fail to see how it could, and yet this is essentially the same as taking the causal approach within an infinite universe, only phrased more “unilaterally”, i.e. more purely in temporal rather than spatio-temporal terms.

Another criticism that has been leveled against the causal approach is that we cannot rule out that our causal impact may in some sense be infinite, and that it is therefore problematic to say that we should measure the world’s value by, and take action based on, whatever finite difference we make. Here is Bostrom again:

When a finite positive probability is assigned to scenarios in which it is possible for us to exert a causal effect on an infinite number of value-bearing locations […] then the expectation value of the causal changes that we can make is undefined. Paralysis will thus strike even when the domain of aggregation is restricted to our causal sphere of influence.

Yet these claims actually do not follow. First, it should again be noted that the situation Bostrom refers to here is in fact the situation we are always in: we should always assign a positive probability to the possibility that we may effect infinite (dis)value. Second, we should be clear that the scenario where we can impact an infinite amount of value, and where we aggregate over the realm we can influence, is fundamentally different from the scenario in which we aggregate over an infinite universe that contains an infinite amount of value that we cannot impact. To the extent there are threats of “infinitarian paralysis” in these two scenarios, they are not identical.

For example, Bostrom’s claim that “the expectation value of the causal changes that we can make is undefined” need not be true even on standard cardinal arithmetic, at least in the abstract (i.e. if we ignore Cromwell’s rule), in the scenario where we focus only on our own future light cone. For it could be that the scenarios in which we can “exert a causal effect on an infinite number of value-bearing locations” were all scenarios that nonetheless contained only finite (dis)value, or, on a dipolar axiology, only a finite amount of disvalue and an infinite amount of value. A concrete example of the latter could be a scenario where the abolitionist project outlined by David Pearce is completed in an eternal civilization after a finite amount of time.

Hence, it is not necessarily the case that “paralysis will strike even when the domain of aggregation is restricted to our causal sphere of influence”, apart from in the sense treated earlier, when we factor in Cromwell’s rule: how should we act given that all actions may effect infinite (dis)value? But again, this is a very different kind of “paralysis” than the one that appears to be Bostrom’s primary concern, cf. this excerpt from the abstract of his paper Infinite Ethics:

Modern cosmology teaches that the world might well contain an infinite number of happy and sad people and other candidate value-bearing locations. Aggregative ethics implies that such a world contains an infinite amount of positive value and an infinite amount of negative value. You can affect only a finite amount of good or bad. In standard cardinal arithmetic, an infinite quantity is unchanged by the addition or subtraction of any finite quantity.

Indeed, one can argue that the “Cromwell paralysis” in a sense negates this latter paralysis, as it implies that it may not be true that we can affect only a finite amount of good or bad, and, more generally, that we should assign a non-zero probability to the claim that we can optimize the value of the universe everywhere throughout, including in those corners that seem theoretically inaccessible.

Adding Always Makes a Difference

As for the infinitarian paralysis supposed to threaten the causal approach in the absence of the “Cromwell paralysis” — how to compare the outcomes we can impact that contain infinite amounts of value? — it seems that we can readily identify reasonable consequentialist principles to act by that should at least allow us to compare some actions and outcomes against each other, including, perhaps, the most relevant ones.

One such principle is the one alluded to in the previous section: that adding something of (dis)value always makes a difference, even if the notional set we are adding it to contains infinitely many similar elements already. In terms of an axiology that holds the amount of suffering in the world to be the chief measure of value, this principle would hold that adding/failing to prevent an instance of suffering always makes for a less valuable outcome, provided that other things are equal, which they of course never quite are in the real world, yet they often are in expectation.
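Stated a little more formally (a rough axiom of my own formulation rather than a standard result), the principle says that for any world W, however much (dis)value it already contains, and for any additional instance of suffering s not already contained in W, the outcome with s added is strictly worse, other things being equal:

\[
V\big(W \cup \{s\}\big) \;\prec\; V(W),
\]

where the relation ≺ is read as an ordinal “worse than” comparison rather than a comparison of cardinal sums.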

The following abstract example makes, I believe, a strong case for favoring such a measure of (dis)value over the cardinal sum of the units of (dis)value. As I formulate this thought experiment, this unit will, in accordance with my own view, be instances of intense suffering in the universe, yet the point applies generally:

Imagine that we have a universe with a countably infinite number of instances of intense suffering. We may visualize this universe as a unit ball. Now imagine that we perform an act in this universe that leaves the original universe unchanged, yet creates a new universe identical to the first one. The result is a new universe full of suffering. Imagine next that we perform this same act in a world where nothing exists. The result is exactly the same: the creation of a new universe full of suffering, in the exact same amount. In both cases, we have added exactly the same ball of infinite suffering. Yet on standard cardinal arithmetic, the difference the act makes in terms of the sum of instances of suffering is not the same in the two cases. In the first case, the total sum is the same, namely countably infinite, while there is an infinite difference in the second case: from zero to infinity. If we only count the difference added, however — the “delta universe”, so to speak — the acts are equally disvaluable in the two cases. The latter method of evaluating the (dis)value of the act seems far more plausible than does evaluation based on the cardinal sum of the units of (dis)value in the universe. It is, after all, the exact same act.
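In terms of cardinal arithmetic, and assuming countably infinite collections throughout, the two cases compare as follows:

\[
\aleph_0 + \aleph_0 = \aleph_0 \quad \text{(first case: the total is unchanged)}, \qquad 0 + \aleph_0 = \aleph_0 \quad \text{(second case: the total goes from } 0 \text{ to } \aleph_0\text{)}.
\]

Yet the added “delta universe” contains the same number of instances of intense suffering in both cases, which is why evaluating the act by the difference it adds, rather than by the resulting cardinal total, treats the two acts alike.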

This is not an idle thought experiment. As noted above, impacting the creation of new universes is one of the ways in which we may plausibly be able to influence an infinite amount of (dis)value. Arguably even the most plausible one. Admittedly, it does rest on certain debatable assumptions about physics, yet these assumptions seem significantly more likely than does the possibility of the existence of an eternal civilization. For even disregarding specific civilization-hostile facts about the universe (e.g. the end of stars and a rapid expansion of space that is thought to eventually rip ordinary matter apart), we should, for each year in the future, assign a probability strictly lower than 1 that civilization will survive that year, which means that, provided these yearly survival probabilities are bounded away from 1, the probability of extinction becomes arbitrarily close to 1 within a finite amount of time.
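A short calculation makes this explicit. Write s_i for the probability that civilization survives year i, given that it has survived up to that point. If each s_i is at most 1 − ε for some fixed ε > 0, then

\[
P(\text{civilization survives the first } n \text{ years}) \;=\; \prod_{i=1}^{n} s_i \;\le\; (1-\varepsilon)^n \;\to\; 0 \quad \text{as } n \to \infty,
\]

so the probability of extinction within some finite horizon can be made arbitrarily close to 1. (If the yearly survival probabilities were instead allowed to approach 1 quickly enough, the product could converge to a positive number, which is why the qualification above is needed.)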

In other words, an eternal civilization seems immensely unlikely, even if the universe were to stay perfectly life-friendly forever. The same does not seem true of the prospect of influencing the generation of new universes. As far as I can tell, the latter is in a ballpark of its own when it comes to plausible ways in which we may be able to effect infinite (dis)value, which is not to say that universe creation is more likely than not to become possible, but merely that it seems significantly more likely than other ways we know of in which we could effect infinite (dis)value (though, again, our knowledge of “such ways” is admittedly limited at this point, and something we should probably do more research on). Not only that, it is also something that could be relevant in the relatively near future, and more disvalue could depend on a single such near-future act of universe creation than what is found, intrinsically at least, in the entire future of our civilization. Infinitely more, in fact. Thus, one could argue that it is not our impact on the quality of life of future generations in our civilization that matters most in expectation, but our impact on the generation of universes by our civilization.

Universe Anti-Natalism: The Most Important Cause?

It is therefore not unthinkable that this should be the main question of concern for consequentialists: how does a given action impact the creation of new universes? Or, similarly, that trying to impact future universe generation should be the main cause for aspiring effective altruists. And I would argue that the form this cause should take is universe anti-natalism: avoiding, or minimizing, the creation of new universes.

There are countless ways to argue for this. As Brian Tomasik notes, creating a new universe that in turn gives rise to infinitely many universes “would cause infinitely many additional instances of the Holocaust, infinitely many acts of torture, and worse. Creating lab universes would be very bad according to several ethical views.”

Such universe creation would obviously be wrong from the stance of negative utilitarianism, as well as from similar suffering-focused views. It would also be wrong according to what is known as The Asymmetry in population ethics: that creating beings with bad lives is wrong, and something we have an obligation to not do, while failing to create happy lives is not wrong, and we have no obligation to bring such lives into being. A much weaker, and even less controversial, stance on procreative ethics could also be used: do not create lives with infinite amounts of torture.

Indeed, how, we must ask ourselves, could a benevolent being justify bringing so much suffering into being? What could possibly justify the Holocaust, let alone infinitely many of them? What would be our answer to the screams of “why” to the heavens from the torture victims?

Universe anti-natalism should also be taken seriously by classical utilitarians, as a case can be made that the universe is likely to end up being net negative in terms of algo-hedonic tone. For instance, it may well be that most sentient life that will ever exist will find itself in a state of natural carnage: civilizations may be rare even on planets where sentient life has emerged, and even where civilizations have emerged, they may be unlikely to be sustainable, perhaps overwhelmingly so. This implies that most sentient life might be expected to exist at the stage it has existed at for the entire history of sentient life on Earth, a stage where sentient beings are born in great numbers only for the vast majority of them to die shortly thereafter, for instance due to starvation or by being eaten alive, which is most likely a net negative condition, even by wishful classical utilitarian standards. Simon Knutsson’s essay How Could an Empty World Be Better than a Populated One? is worth reading in this context, and of course applies to “no world” as well.

And if one takes a so-called meta-normative approach, where one decides by averaging over various ethical theories, one could argue that the case against universe creation becomes significantly stronger, for instance if one combines an unclear or negative-leaning verdict from a classical utilitarian stance with The Asymmetry and Kantian ethics.

As for those who hold anti-natalism at the core of their values, one could argue that they should make universe anti-natalism their main focus over human anti-natalism (which may not even reduce suffering in expectation), or at the very least expand their focus to also encompass this apparently esoteric position. Not only is the scale potentially unsurpassable in terms of how many births are prevented, but the cause may also be easier to advance, both because wishful thinking along the lines of “those horrors will not befall my creation” could be more difficult to maintain in the face of horrors that we know have occurred in the past, and because we do not seem as attached and adapted, biologically and culturally, to creating new universes as we are to creating new children. And just as anti-natalists argue with respect to human life, being against the creation of new universes need not be incompatible with a responsible sustainment of life in the one that does exist. This might also be a compromise solution that many people would be able to agree on.

Are Other Things Equal?

The discussion above assumes that the generation of a new universe would leave all else equal, or at least leave all else merely “finitely altered”. But how can we be sure that the generation of a new universe would not in fact prevent the emergence of another? Or perhaps even prevent many infinite universes from emerging? We can’t. Yet we do not appear to have any reason for believing that this is the case. As noted above, all else will often be equal in expectation, and that also seems true in this case. We can make counter-Pascalian hypotheses in both directions, and in the absence of evidence for any of them, we appear to have most reason to believe that the creation of a new universe results, in the aggregate, in a net addition of a new universe. But this could of course be wrong.

For instance, artificial universe creation would be dwarfed by the natural universe generation that happens all the time according to inflationary models, so could it not be that the generation of a new universe might prevent some of these natural ones from occurring? I doubt that there are compelling reasons for believing this, but natural universe generation does raise the interesting question of whether we might be able to reduce the rate of this generation. Brian Tomasik has discussed the idea, yet it remains an open, and virtually unexplored, research question. One that could dominate all other considerations.

It may be objected that considerations of identical, or virtually identical, copies of ourselves throughout the universe have been omitted in this discussion, yet as far as I can tell, including such considerations would not change the discussion in a fundamental way. For if universe generation is the main cause and most consequential action to focus on for us, more important even than the intrinsic importance of the entire future of our civilization, then this presumably applies to each copy of ourselves as well. Yet I am curious to hear arguments that suggest otherwise.

A final miscellaneous point I should like to add here is that the points made above may apply even if the universe is, and only ever will be, finite, as the generation of a new finite pocket universe in that case still could bring about far more suffering than what is found in the future light cone of our own universe.

Implications for Artificial Intelligence in Brief

The prospect of universe generation, and the fact that it may dominate everything else, also seems to have significant implications for our focus on the future of artificial intelligence. One such implication, as hinted above, is that altruists should perhaps not focus on artificial intelligence as their main cause (and that we should be careful about claiming that it clearly should be their main cause, as we may thereby risk overlooking crucial considerations). This would be the case, for instance, if artificial intelligence is sufficiently unlikely to ever “take over” in the way that is often feared, or if focusing directly on researching or arguing against universe generation has higher expected value.

Moreover, it suggests that, to the extent altruists indeed should focus primarily on artificial intelligence, this would be because, and to the degree that, artificial intelligence will determine the rate of universe generation. This might be the main thing to focus on when implementing “Fail-Safe” measures in artificial intelligence, or in any kind of future civilization, to the extent that implementing such measures is feasible.

 

In conclusion, the subjects of the potential to effect infinite (dis)value in general, and of impacting universe generation in particular, are extremely neglected at this point, and a case can be made that more research into such possibilities should be our top priority. It seems conceivable that a question related to such a prospect — e.g. should we create more universes? — will one day be the main ethical question facing our civilization, perhaps even one we will be forced to decide upon in a not too distant future. Given the potentially enormous stakes, it seems worth being prepared for such scenarios — including knowing more about their nature, how likely they are, and how to best act in them — even if they are unlikely.

Induction Is All We Got

In this piece I shall defend what may appear an unusual thesis, namely that all reasoning is ultimately based on induction, and hence that induction is the only way in which we ever know anything. By induction, I here mean inferring what seems right in light of the doubtable data/experience we have accumulated so far. In everything from logic and mathematics to philosophy and psychology, this is invariably how we evaluate what is true. Or so I shall argue.

How can we be sure that the patterns we have reliably observed in the world so far will also exist in other times or places? How can we justify the assumed uniformity of the world that induction seems to rest upon? How can we trust induction when it cannot be deductively justified? This is the problem of induction in a nutshell.

What is interesting, however, and seemingly universally missed, is that exactly the same problem is staring us in the face when it comes to deduction. Logical deductions are also part of the world, and hence to assume that they will be valid at all times and in all realms is also to assume that the world is uniform in certain ways. It is the exact same assumption, so why is it considered problematic in the case of induction but not in the case of deduction? What is the source of this discrimination?

The answer, I think, is that it just seems true that deduction is universal, and that the opposite claim — that logic is not universal — seems to make no sense. I certainly share this impression, but this does not render deduction wholly undoubtable. We may reasonably have confidence in the statement that logical deductions are universal, but we should be clear that the basis of this belief is itself merely that it seems reasonable to suppose this given that our minds apparently cannot make sense of anything else. More than that, we should also be clear that we then in fact do accept the uniformity of the world (or perhaps assign a high probability to this claim being true), and that we do it on the basis that it just seems reasonable.

Another aspect of the problem of induction is that induction merely is assumed to be valid, and that attempts at justifying it always seem circular. Yet again, how does deduction compare? How do we justify deduction? With deductive arguments? That would be circular as well. With brute assumptions? If so, why is it more problematic to assume the validity of induction?

There really is no fundamental distinction. We accept both induction and deduction because they seem right. Deductions seem obviously reasonable and valid while inductive inferences seem fairly reasonable and probably valid. The only difference, it seems, is the degree of obviousness, a difference I shall try to explain below.

Beliefs: All in Memory

One way to realize the conclusion sketched out above is by recalling the fact that all our beliefs reside in memory. And we know that 1) our memory consists of information we have gathered over time, and 2) our memories can be unreliable. There is nothing logically problematic about this; indeed, this is common knowledge. Yet it implies something rather significant, namely that all our beliefs, including those about logic, are doubtable, and that all our beliefs are a matter of what seems right in light of the doubtable data/experience we have accumulated so far.

This applies to all knowledge, whether inductively or deductively inferred (as we shall see, the latter is a subset of the former). Mathematical proofs, for instance, are often claimed to be certain knowledge, yet our knowledge of mathematical proofs is also contained in memory. And since all mathematical proofs we know of are stored in memory, and since memory is fallible, it follows that our belief in any mathematical proof we hold to be valid is, in fact, fallible.

The idea that mathematical knowledge is certain and rests only on deduction is indeed ridiculous. Take for instance the proof of Fermat’s Last Theorem: only a small fraction of professional mathematicians actually fully understand this proof, yet in my experience, virtually all mathematicians will say that we know that Fermat’s Last Theorem is true. And this is probably a highly reasonable belief, but let us be clear about how we know it: by trusting the expertise of other mathematicians. And such trust is transparently based on induction; it is not based on deduction. More than that, we know, inductively, that this inductively based trust is fallible.

A famous example would be Alfred Kempe’s proof of the four-color theorem, presented in 1879, which was widely accepted until it was shown to be incorrect in 1890. Another example is Gauss’ proof of the fundamental theorem of algebra, a proof Gauss himself obviously held to be valid, as did many other mathematicians, yet it was not completed until more than a hundred years after Gauss first published it.

So our mathematical knowledge clearly relies strongly on induction, in that we trust others. Indeed, I would argue that, in practice, the majority of the mathematical knowledge any given mathematician possesses rests on such trust in others rather than on their own deductions. Yet to think that we rely on induction merely when it comes to trusting others in the pursuit of what we call deductive knowledge is to miss the point. For the point is that this applies to all mathematical knowledge, including when we have made all the deductions ourselves. There is no fundamental distinction between when others have made the deductions and when we have made them ourselves. In both cases, we trust conclusions made by fallible minds, stored in a fallible memory.

This of course isn’t to say that such trust is unreasonable, yet the nature of this trust should not be missed: it rests on induction. There is no deductive argument that proves our memory to be reliable. Rather, we merely assume the reliability of memory, and 1) this is an assumption that we cannot not make, 2) it is an assumption that all deduction, indeed all knowledge in general, rests upon, and 3), to repeat the point made above, this assumption rests on induction.

Let me explain and justify all these claims in turn. To start with 3), assuming that our memories in this present moment are valid rests on the assumption that the information we stored in memory earlier still applies. This projected extension of the limited information we have is the core of induction. As for 2), it is trivial that all knowledge, including that derived from deduction, rests on the reliability of memory, since that is where all our knowledge is stored. So to say that we know anything about anything is to assume the validity of our memory — or at least the validity of some aspects of it; more on this below. Lastly, 1), the assumption that we can trust our memory is an assumption we cannot not make, because our memory is the position from which we see the world. To even doubt this assumption requires trusting it, since one must then at least trust that one doubts.

“Yet we know our memory to be profoundly unreliable, don’t we?”

Yes, but it is not entirely so, and that is the point. For in order to even discover that our memory is not (entirely) reliable, we must assume that at least some aspects of our memory are — at the very least those aspects of it that hint that our memory is not entirely reliable. In other words, the discovery of the imperfect reliability of memory rests on its partial reliability.

So believing that we cannot trust any aspect of our own memory is nothing less than logically impossible, since such a belief — indeed any belief — itself resides in memory, and thus rests on its (at least partial) reliability. And given this status of logical impossibility, the belief that we cannot trust any aspect of our memory must be considered false with at least the same certainty that we place in other logical conclusions. Indeed, if possible, it should be granted even higher status, since all other beliefs, including purely logical ones, rest upon its falsity, namely that we can trust (at least some aspects of) our memory. That’s right: all deductive knowledge rests on the reliability of memory, and this reliability rests on the validity of induction [again, this was 3) above]. Conclusion: Deductive knowledge rests on the validity of induction.

Indeed, the reason we trust deduction is ultimately inductive. For deductions are also, I would argue, experiments that we run in our heads, albeit experiments that reliably produce the same result. We therefore inductively conclude that they will keep on doing the same. What we usually consider matters of induction — for instance, we have observed a thousand white swans; should we expect the next swan to be white given all that we know about the world, including the fact that there are other birds who are not white? — is just different in that we are in a realm where our information seems a lot more incomplete. It is ultimately of the same form.

This also explains the difference in the status of certainty we ascribe to deduction and induction mentioned above: deduction seems obviously reasonable and valid because the experiment goes right every time, as far as we can tell, while (what we usually call) induction seems fairly reasonable and probably valid because it works well most of the time.

So the reason, I believe, that Hume found deduction more valid than induction, and found induction so much more problematic, was, ironically, that induction recommends the former more strongly. Hume’s objection to induction is really an adventure in self-contradiction — in many ways. For instance, the great man claimed, based on his own brain’s reasoning, that a universal rule cannot be derived from particular instances, yet what is this if not itself a universal rule derived from particular instances (of reasoning in his brain)? What is this if not a glaring self-contradiction?

Try as you might, in the realm of belief, there simply is no denying the validity of induction. Again, in order to even express doubts about the validity of induction, one must inescapably rest on what one is trying to doubt, as one then inductively assumes that doubt is a meaningful concept in this moment (it has been so far), that the others whom one expresses one’s doubts to will understand a word of what one says (they have so far), that there still is a problem of induction (it seems there has been so far), etc. Indeed, all beliefs rest on induction, as they rest on the assumption that the justification we have acquired for them in the past still applies in the present, including belief in notions of past, present, and future in the first place, not to mention (tacit) belief in there being such a thing as logic, truth, and falsehood — the ideas that constitute the entire framework in which discussions about induction occur.

“So what justifies induction, then?”

Nothing. In order to even enter the realm of trying to justify something, we have already accepted induction. In asking for a justification for induction, we ask from a position of unacknowledged acceptance of it. Indeed, what justifies the belief that there is a need to justify induction — a belief that itself rests on induction? Nothing. If we believe anything at all, we are already well past the point of accepting induction, knowingly or not. So to the extent we admit to having any beliefs at all, we admit the validity of induction. We are fundamentally confused about where in our hierarchy of beliefs induction enters the picture. The answer is: underneath it all.

Knowing Good from Bad Induction

To say that reliance on induction is inevitable is obviously not to say that all inductive inferences are valid. So how do we know valid inductive inferences from invalid ones? Via induction, of course.

In a nutshell, we (ideally) assess the truth of a statement in light of all the information we have in our memory — the totality of what we know. This is all we got, and hence all we ever can evaluate truth claims based on. The more the doubtable data points we have accumulated point our beliefs in a certain direction, the stronger those beliefs are, or at least should be.

For example, the claim that the sun will rise tomorrow is a claim that we believe because it fits with, indeed is predicted by, everything we know, from the totality of humanity’s knowledge of physics and astronomy to our everyday experience.

In the same way, we can deem inductive inferences false. For instance, the claim that the sun will always keep rising because it has done so thus far is obviously not true, and the way we know this is again via induction: we know of underlying physical principles that “govern” the physical macro patterns that are the dynamics of stars and planets, and these principles, along with astronomical observations of stars elsewhere, imply that the lifetime of our sun, and hence of sunrises, will indeed be finite. That is what all the data points to.

The commonly cited examples of “hard problems” for our (inevitably) inductive reasoning are all problems that arise from paying attention to too narrow a channel of information. For instance, when we say that every swan we have ever seen is white, and therefore all swans must be white, this is simply a bad inference that fails to keep other relevant facts in view, such as the size of our sample, the size of the Earth, and the fact that there are other birds with different colors, which is relevant given the further observation that there is a high degree of similarity in patterns across species.

“But what if we did not know about these additional facts? Then the inference seems reasonable.”

First, it should be noted that if we were in that position, we would be ignorant to a degree that is hard for us to imagine as creatures who know a lot. Second, if we were in such a position of knowing virtually nothing, we should indeed be very careful about drawing confident general conclusions about the world. If you have seen a thousand swans, and they have all been white, it seems reasonable to expect that the next one you see will be white as well, but it by no means implies that all swans are.

“But couldn’t our inductive reasoning be wrong, even when we know a lot and we consider the totality of what we know?”

This is possible, yet, as we know inductively, e.g. from statistics, the more we know, the less likely such mistakes are. It is also worth noting how we know of the possibility of the fallibility of inductive inferences in the first place, namely via induction. We know that apparently solid patterns can break because we have witnessed it before. Nations that seemed strong suddenly fell, people who were right about many things were suddenly wrong, proofs that seemed valid were shown not to be, etc. We have observed this meta pattern of patterns sometimes breaking when we don’t expect it, which has taught us, inductively, to be more open-minded about the possibility of the breaking of even apparently solid patterns. It is always induction that teaches us epistemic modesty.

So it is due to inductive reasoning, not in spite of it, that we seem to have some reason to be agnostic concerning the generality of the patterns we take to be general, such as whether the cosmos looks the same everywhere across time and space — a question that is currently debated among physicists and cosmologists. What we can say here seems much like what we could say as the ignorant swan observers we imagined ourselves to be above: it seems reasonable to expect that the regions of time and space near those we have observed to unfold in certain law-like ways will also unfold in such ways, but we cannot confidently claim that this applies to all of time and space.

The Source of the Problem: A Narrow and Confused View of Knowledge

As mentioned above, a narrow focus on certain data and beliefs about the world, as opposed to a focus on the totality of what we know, is the source of many problems in epistemology, including Goodman’s new riddle of induction and the traditional problem of induction itself. In the case of Goodman’s new riddle of induction, the problem is, in a nutshell, that we have no reason to believe that properties such as grue and bleen (where an object is “grue”, roughly, if it is green when observed before some future time t and blue thereafter, and “bleen” the reverse) exist in light of all that we know about physics, as their existence would essentially require a change in the laws of physics that we have no reason to believe possible. So it is not the case that these two hypothetical properties constitute a deep problem for induction; the suggestion that things could be grue or bleen merely constitutes an extremely unlikely hypothesis about the world.

As for the problem of induction itself, a narrow focus is also to blame. Hume made the following claim: “That the sun will not rise tomorrow is no less intelligible a proposition, and implies no more contradiction, than the affirmation, that it will rise.” Yet that this proposition “implies no more contradiction” is simply wrong, since it contradicts pretty much everything we know in fields such as astronomy and physics. And if you can contradict all this, why not also contradict history and claim that there never was a guy named David Hume, and that nobody has ever raised any so-called problem of induction? After all, this is certainly “no less intelligible” or plausible than the claim that the sun will not rise tomorrow. Or to take a more traditional inductive problem: why believe that there is any problem of induction in this moment or the next one just because it seems that there has been in the past? Indeed, why not contradict logical conclusions themselves?

This is surely what Hume means: the claim that the sun will not rise tomorrow seems to imply no logical contradiction, yet this dichotomy between logical and physical knowledge is, I would argue, ultimately misguided. First, in ontological terms, there is no evidence for the existence of some separate logico-mathematical world apart from the physical one — mathematical truths are found in and by the human mind, and given that the human mind is physical, it follows that mathematical truths are found in and by the physical. Second, as mentioned above, in epistemological terms, both what we consider mathematical and physical forms of knowledge ultimately share the same inductive basis — they are stored in our memory based on what we have experienced — which is yet another reason not to strongly privilege one over another, as Hume does. In sum, there is no justification for Hume’s narrow focus on, and privileging of, deductive reasoning and knowledge — his belief that only (what we categorize as) logical truths are valid. Again, deductively based beliefs, like all other beliefs, also rest on induction in the first place.

How We Know Things: It Just Seems That Way

How do we know that we are conscious, or that two plus two equals four? The answer, I would argue, is simply that it appears clear from our experience that this is the case. We ultimately have no deeper justification than this.

And this answer actually does not change when we ask more complicated questions, such as how we know that the Earth is round, or what the name of the current president of the United States is. We know because of past experiences that have shaped, and in significant ways are now part of, our present experience, from which it simply seems obvious what the answer is. We may be able to express a long chain of reasons that compel us to hold the belief we hold, yet at the bottom of this elaborate chain, all we ultimately have is a set of conscious impressions of belief. Or doubt, for that matter, if we don’t happen to know the answer, but the basic mechanics are the same: we weigh our experience and read off from it what our state of belief — or doubt — is, which is itself a fact about the world.

Every chain of explanations must end somewhere, and, when it comes to our knowledge, the rock bottom of this chain is found in our direct conscious sensations. Ultimately, we do not have a deeper justification for what we know than this: it seems that way from our conscious impressions. This form of foundationalism is, I submit, the solution to the so-called Münchhausen trilemma concerning how we justify what we know.

This is not to say that we cannot question and correct our impressions. We clearly can, as the correction of illusions and biases exemplifies, yet our knowledge of such corrections is itself a matter of conscious impressions, for instance impressions that inform us about statistics, which help us correct wrong ones. The ultimate justification for our beliefs is still our experience. And this is indeed how we improve our knowledge of the world: new impressions help update and correct old ones, which in turn leads us to form better ones, i.e. impressions that represent the world more accurately.

That our knowledge at bottom rests on experience is also not to say that our knowledge rests on a basis of mere assumptions. A good analogy, I believe, is our knowledge of fundamental physical constants, which are also in some sense primitive, in that they are measured rather than derived from something else. We have no deeper justification for believing what the values of these constants are than our measurements, yet this is clearly distinct from merely assuming these values. Similarly, I would argue that we observe — “measure”, if you will — the fact that we are conscious and that two plus two is four; we do not merely assume this (there is clearly a difference: to arbitrarily assume your friend is in the same room as you is quite distinct from seeing that your friend is in the same room as you). And as in the case of the measurement of fundamental physical constants, direct measurements in consciousness can of course be erroneous, yet when we consistently measure the same result time and time again by running the same experiment, we do seem reasonably justified — inductively, as always — in believing the validity of the measurement.

That our conscious impressions are what our beliefs ultimately rest upon may seem somewhat weak and unsatisfying, yet only if we fail to keep in mind that conscious impressions are in fact all we ever deal in when it comes to our knowledge. This includes the sense that conscious impressions constitute a poor foundation for knowledge: this sense is itself just another appearance in consciousness, resting on the exact foundation it purportedly doubts. And if a statement like “I believe this because it seems that way in light of what I experience” sounds like a weak foundation for knowledge, this, I believe, is mainly because we usually only use this kind of language when it comes to matters we are uncertain about, such as immediate unexamined impressions. In truth, however, this “it is what seems true in light of my experience” is in fact what we always do, regardless of our degree of certainty. One’s knowledge of textbook information is also “just” another conscious impression.

Phenomenological Positivism: Knowledge Built from a Phenomenological Palette

What we do when we model the world is to represent its features with the different colors of the palette of consciousness. Indeed, this is all we ever can do: consciousness is all we ever know, and hence its colors are all we ever can model and represent the world with at the level of our knowledge.

One can fairly consider this account of knowledge a positivist one, although one that is of a distinctly phenomenological and commonsensical sort. For given that consciousness is all we ever know, it is obvious that all facts we know are known via a composition of the various states of consciousness available to us, including the set of facts about the “external world” that can be detected and represented with our conscious minds (and things that fall outside of what we can detect with our conscious minds are obviously the things we cannot know).

So although science is often considered beyond unification, and although universal features shared by all sciences seem to have been deemed non-existent by common consensus, it remains trivially true, to me at least, that all forms of knowledge, whether we deem them “scientific” or not, are known in consciousness, and hence that all our knowledge is at least united by this common feature. In a nutshell, our knowledge of the world is a matter of phenomenological models that appear consistent with phenomenologically observed data. And, again, this “appearing consistent with” — or “seeming right” in light of — all the data is, as a matter of justification, ultimately all we have. This, I submit, applies not only to science in its usual narrow conception, but to reason in general. For instance, this is also how we (ideally) assess the plausibility of different views in, say, ethics and epistemology: by weighing the data, including arguments and counter-arguments, and assessing what seems reasonable in light of it all (and here it is worth being mindful that genes seem to play a significant role in what “seems reasonable”, also in the realm of ethics and politics, and hence worth being intensely skeptical of the “immediate seemings” of one’s crude intuitions and probing them more deeply).

In this way, this account of knowledge and reason actually breaks down the usual empiricism-rationalism dichotomy: all processes of thought and reasoning are also phenomenally observed sensations, and hence not something different from “observations.” They are indeed themselves impressions — more doubtable data — that influence our view and assessment of the world. Rationalism, as in logical reasoning, is just another mode of empiricism and experiment, one that has strengths and weaknesses like all other “experimental devices”.

It is worth noting that this account of our knowledge, and reason more generally, does not amount to mere Bayesianism in any usual sense. For while Bayesian updating surely shares this general feature of being a matter of updating and estimating degrees of certainty based on all available evidence, and while much of our own updating is overtly Bayesian — for instance, many of us have made updates in our views based on formal Bayesian calculations — there is much more to our knowledge and our updating of our beliefs than mere formal calculations with numerical probabilities. Not all available evidence is represented, or even representable, as numerical probabilities; for a person who does not know what it is like to experience, say, sounds and sights, no amount of formal Bayesian calculations is going to shed light on the matter. One must experience these things to know what they are like. Bayesian updating is merely the formal special case of the more general inductive method of estimating what seems right in light of the doubtable data/experience we have accumulated so far.
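For reference, the formal rule referred to here is Bayes’ theorem, by which the credence in a hypothesis H is updated on a piece of evidence E:

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)} .
\]

The broader point is that applying this formula presupposes that the relevant evidence can be captured as numerical probabilities in the first place, which, as argued above, is not true of all the experience our beliefs ultimately rest on.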

Do We Have Faith in Induction/Science?

A notion one often hears from religious scholars is that faith in religious claims is no less reasonable than belief in the facts we know from the sciences, since the latter ultimately rest on faith as well: they rest on faith in reason. Yet is this true? In a nutshell, no.

Science is the process of learning about the world by observing it. Therefore, one could argue that science rests on the assumption that we can learn about the world by observing it, which is in fact functionally equivalent to the assumption that induction is valid, since learning about the world by observing it requires that patterns that existed in the past still exist today and in the future — the core of induction.

Yet one need not even make this assumption explicitly, since the assumption that we can learn about the world by observing it is one that we cannot not make. In order to even express the belief that one cannot learn from experience of the world, one has already learned from such experience, the experience of one’s own belief. (This inevitability makes it just like the assumption that at least some aspects of memory can be trusted, which is in fact also an equivalent proposition: that we can learn about the world by observing it requires that at least some aspects of our memory are reliable, and for our memory to contain reliable information about the world, it must be possible to learn about the world by observing it.)

Thus, we all implicitly “assume” that we can learn about the world by observing it, whether we are religious or not, and hence making this inescapable “assumption” cannot meaningfully be called a leap of faith. Rather, it is an inescapable fact (one that all other facts rest upon), as there is no intelligible alternative (indeed, the very possibility of intelligibility of any kind rests on learning from observation in the first place, as claims cannot be deemed (un)intelligible if they cannot be learned at all). This makes it wholly unlike actual leaps of faith, i.e. believing in things, such as supernatural events, without supporting evidence. The latter is by no means inescapable.

Indeed, claims about some things being a matter of faith only make sense in a context where we have already made “the leap of faith” of accepting that we can learn about the world by observing it, since whether a claim rests on faith is a matter of whether there exists evidence to support it. And all evaluations of evidence must take place in a realm where we have already assumed the relevance of evidence for propositions about the world — i.e. already made the inevitable “assumption” whose status was in question. In other words, in order to assess whether or not something is a matter of faith, we must “assume” the relevance of evidence in the first place; we must accept that we should go with what seems right in light of the doubtable data/experience we have accumulated so far.

One may object that science rests on much more specific assumptions than merely the possibility of learning about the world by observing it, yet, ideally, this should not be the case. For while it is true that specific methodologies have emerged in the sciences over time, the process of science most generally — that is, learning about the world by observing it — is not committed to any specific methodology in principle, which makes all specific methodologies open for revision. If certain methods are shown to be seriously flawed, as has happened before, these methods should be discarded or updated. And this is indeed how the methods we see employed in the sciences today have developed. Placebo-controlled studies and double-blind experiments were not assumed by faith to be the way to “do science” from the outset. Rather, these and other sensible methods of discovery were themselves discovered over years of trial-and-error.

Thus, what works best, both when it comes to theories and methods, is itself to be settled with observation and examination, not faith. Based on the fundamental principle of learning from observation, science continually refines its own method. In this way, the process of observing and learning about the world is a self-correcting and self-optimizing one.

Doubting the Apparently Undoubtable

As noted earlier, inductive reasoning has shown us that we have good reason to maintain humility about our beliefs. We know that our memory is fallible. As mentioned above, even mathematical proofs held to be valid by many have turned out to be wrong, and this risk of fallibility not only pertains to the logical deductions made by others, but also to those made by ourselves — the appearance that a logical deduction is valid can turn out to be wrong upon closer examination. It has happened before.

So it seems that we should maintain at least some degree of doubt even when it comes to logical deductions that we seem to have reason to be completely certain of. This is not to say, of course, that it is reasonable to have more than a negligible degree of such doubt in most cases.

Yet the above-mentioned doubts merely amount to epistemological doubts, doubts about whether our faculties of reasoning accurately track the deeper patterns of the world. We could also have doubts of a deeper ontological nature, namely about the stability of those patterns themselves. For instance, will the laws of physics as we know them apply tomorrow? What about logico-mathematical truths?

Do such questions even make sense? After all, don’t questions concerning what happens tomorrow, which hence rest on the concept of time, already presuppose some basic laws of physics, or at least some elements of the physical framework as we know it? And doesn’t the meaningfulness of doubts concerning whether our logical framework will apply at all tomorrow also itself rest on the validity of that very framework, e.g. that things are either the case or not the case? After all, all talk of whether something applies or not — is true or not — already takes place in the realm of, and therefore presupposes the sensibility of, logical thought. So what does it even mean to say that this framework might no longer apply when the very coherence of “applying” rests on this framework? It seems self-refuting.

It does. Yet even so, we do seem to have reason to maintain at least some degree of humility about these propositions, one reason being the aforementioned “epistemological doubt” — we know our memory is not entirely reliable, and hence we should admit the possibility that deductions of the sort made above have a small risk of being wrong. Indeed, this argument for the sensibility of (at least a small amount of) doubt seems to pertain to all arguments, including itself (and also to the most undoubtable of ethical positions we may hold).

Second, certain drastic changes, such as changes in certain otherwise lawful physical patterns, do not seem inconceivable; indeed, some cosmological theories predict such changes. Therefore, the claim that at least some apparently solid facts about the world may suddenly change cannot be ruled out deductively, it seems. Might the very fabric of existence suddenly change in radically unexpected ways, thereby perhaps altering physical and mathematical truths as we know them? (Again, on a physicalist view of the world, physics and mathematics cannot be separated, which means that what we may call the uniformity of mathematics depends on at least some degree of uniformity of [what we consider] physics). It seems extremely unlikely, but we cannot exclude it with total certainty.

Lastly, it also seems conceivable that we could have new experiences — on a sufficiently exotic drug, for instance — that would suddenly make the so far inconceivable seem conceivable, and thereby make apparently valid deductions and brute facts appear invalid and untrue. Again, the only justification we have for believing what we believe is, ultimately, that “it just seems true.” And while it may seem inconceivable, say, that mathematical truths could suddenly change, it does not, strangely enough, seem inconceivable that such an apparently inconceivable claim could seem conceivable in a radically different state of mind. And if it can seem right in another state of mind, how can we maintain absolute certainty that that state of mind is more wrong than our own present one is? It seems we can’t.

In sum, it seems that even when it comes to the most outrageous of claims, claims we cannot even make any sense of, some small degree of uncertainty about their status still seems warranted, although the appropriate degree may be very small indeed. Everything can reasonably be doubted to some degree. Or so it seems.

[A small side note: In terms of practical implications, this small window of doubt might help one soften up painful certainties, such as certainty in fatalism. For while it might be tempting to some to think about the world as being an unalterable multi-dimensional structure that we cannot change in any strong sense, one must admit that this view could in fact be wrong, and hence that trying to change the world for the better indeed might have some chance of making a difference even in a very strong sense. Either way, it seems like one does not lose anything by trying one’s best.]

Inconsistent Skepticism

Our conscious experience seems to represent a world “out there” that is independent of our own minds. But how do we know this representation is at all accurate? How do we know the truth is not rather some well-known skeptical conjecture — for instance, that our experience is all a dream or a computer simulation?

I think there is a lot to be said against skepticism of this sort, the most important point being that it is inconsistent. Knowledge of dreams and simulations is itself found in our experience, and hence to consistently doubt the validity of our experience requires us to doubt the validity — i.e. the meaningfulness and sensibility — of these notions themselves. Yet when we entertain such skepticism, these notions themselves are somehow exempt from it. They stand beyond scrutiny, while virtually all other appearances we know of, and all other beliefs we hold, do not.

What can justify such inconsistent skepticism? Nothing, as far as I can see, especially given that claims of the sort that all we experience could be a dream or a computer simulation seem extremely dubious to say the least. Take the claim that our entire experience is a dream. Does anything we know of actually suggest this in the slightest? Not to my knowledge. The state of our consciousness in our dreams is radically different from our waking state. Indeed, within a dream it is even possible to realize that one is dreaming, and to explore one’s consciousness in that state, as many of us have experienced; something similar never happens in our waking state. The only thing that remotely hints that our experience could be a dream is an argument from analogy: Given that our experiences in dreams can seem to convincingly represent the world, yet still turn out to be mere dreams, could our waking state that seems to convincingly represent the world not be a mere dream too?

If dreams were anything like our waking state, this would indeed seem reasonable. Yet the truth is that they are not.

This fact that “the appearance is different” may seem to say precious little, yet only if we miss the significance of differences in appearances. By analogy, imagine that you are on holiday in Istanbul. You remember planning the journey, traveling there, and being there for the past five days, and presently you are looking at the Sultan Ahmed Mosque while feeling the unbearable summer heat. Now, how do you know that you are not, in fact, in Oslo? Well, just about every single appearance in your consciousness suggests that you are not, and hence you are not in much doubt. And reasonably so.

Yet is this really analogous to the difference in appearance between our dreaming and waking state? Not quite, as I would argue that this analogy fails to do justice to the actual difference between our waking and dreaming state, a difference that is far greater than the difference between a waking experience of Istanbul and one of Oslo. Hence, I would argue that we have no more reason to suspect that our present experience is a dream than we have to suspect that we, say, live in a completely different city than we thought. Yes, the world, including the basis of our experience, may well turn out to be very different from what we expect in many ways. Yet the specific claim that our experience of the world is a dream — something that takes place in the brain of a sleeping person — is, I would argue, extraordinarily implausible in light of all that we know, especially the enormous difference between the character of our waking and dreaming states.

Even stronger skepticism seems justified in the case of the claim that all we experience is a computer simulation, one reason being that we simply have no evidence that computer simulations can mediate conscious minds like our own in the first place — at least no more evidence than we have for believing that, say, tomatoes can (indeed, tomatoes are in many ways far more similar to human brains in physical terms than computers are). Another good reason to be intensely skeptical is that so-called ancestor simulations are in fact impossible.

A similar degree of skepticism seems apt in the case of the claim that all we experience is the result of a brain in a vat. According to what we know from fields such as physics, chemistry, and biology, there is, as Daniel Dennett argues in Consciousness Explained, no way to produce an experience like ours by stimulating a brain in a vat. And if we dismiss such knowledge, we might as well dismiss our belief in the existence of brains in the first place — itself a belief about physics and biology that we do not seem justified in granting a more privileged status than we grant other solid facts found in the canons of physics and biology.

And since we are dealing with various skeptical hypotheses, it seems worth pointing out that skepticism about the existence of other minds is on no firmer ground, as it has exactly the same epistemic status as doubting the existence of brains does. The existence of brains is only known through our own conscious experience, an experience that, according to what is known in that experience itself, is mediated by a physical brain. Based on this, we draw an inferential arrow that connects our experience to physical brains. We go from experience to physical brain. Therefore, drawing an arrow from brain to experience — whether one’s own or that of others — which is really just to draw the exact same arrow in the opposite direction, is no more problematic. Consequently, doubting the existence of other minds is really no more reasonable than doubting the existence of one’s own brain.

One may argue that there is a difference when we are talking about brains different from our own, yet one could say the same about one’s future or past brain, which is also different from one’s present one. If one believes that one’s own future brain will be conscious — a brain that is similar to, yet still different from, one’s present one — then how can one maintain that the brains of other beings, which are also similar to, yet different from, one’s present brain, are not conscious as well? Similarly, if one believes that one’s ever-changing brain has mediated conscious states in the past, why should the different brain states of others not mediate consciousness as well? To believe they do not is simply inconsistent.

The problem with skeptical conjectures such as the dream and simulation hypotheses is, again, that they hold that virtually all the appearances we know from our experience are false, yet the appearance of the possibility that the basis of our experience is something radically different from what we thought — yet still something that we know of from our experience, such as the notion of a dream or a simulation — is not subjected to such doubt at all (in spite of an absence of good reasons for believing in such possibilities in the first place). In other words, these conjectures rest on arbitrarily constrained skepticism.

More than that, these skeptical hypotheses also seem to undermine themselves. For if we accept the premise that our experience indeed is a simulation or a dream, what reason do we have for believing that the worldview we are able to draw from it, including any conclusion about dreams and simulations, has any validity beyond our own simulation or dream? If we are living in a dream or a simulation, it seems that what we think we can say with any certainty about the world, including about dreams and simulations, is likely to be wrong to an unimaginable degree, since it is all based on nothing but a dream or a simulation itself. Consequently, accepting any of these conjectures seems to force us to doubt them strongly, even to the point where it becomes difficult to make sense of them at all. And being self-undermining is not a virtue of a conjecture.

Again, what we do when we assess the truth of a proposition is, ideally, to judge its plausibility in light of the totality of what we know. And this is exactly what we fail to do when we deem skeptical conjectures of this sort likely. We go with peculiar arguments, propositions, and concepts, and then doubt everything else, ignoring that the meaning, even the coherence, of these arguments and concepts rests, in subtle and not so subtle ways, on all the other knowledge they supposedly imply we should doubt. In this way, such conjectures inadvertently destroy their own foundations.

Keeping the totality of our knowledge in view and applying our skepticism consistently leads us, I maintain, to a relatively common-sense view of the world, at least when it comes to the basic nature of the basis of our experience. What we know about the world hints that our experience is mediated by a biological brain just as strongly as our experience hints that the Earth is round; nothing really suggests it is not. In my view, we have no good reason to believe that what we experience is, or even could be, a dream or a simulation, while a very great deal — including consistent thinking based on what we know — strongly suggests it is not.

 

This post was originally published at my old blog: http://magnusvinding.blogspot.dk/2016/11/induction-is-all-we-got.html
