In Defense of Nuance

The world is complex. Yet most of our popular stories and ideologies tend not to reflect this complexity. Which is to say that our stories and ideologies, and by extension we, tend to have insufficiently nuanced perspectives on the world.

Indeed, falling into a simple narrative through which we can easily categorize and make sense of the world — e.g. “it’s all God’s will”; “it’s all class struggle”; “it’s all the Muslims’ fault”; “it’s all a matter of interwoven forms of oppression” — is a natural and extremely powerful human temptation. And something social constructivists get very right is that this narrative, the lens through which we see the world, influences our experience of the world to an extent that is difficult to appreciate.

All the more important, then, that we suspend our urge to embrace simplistic narratives through which to (mis)understand the world. In order to navigate wisely in the world, we need views that reflect its true complexity, not views that merely satisfy our need for simplicity (and social signaling; more on this below). For although simplicity can be efficient, and to some extent is necessary, it can also, when too much relevant detail is left out, be terribly costly. And relative to the needs of our time, I think most of us naturally err on the side of being expensively unnuanced, painting a picture of the world with far too few colors.

Thus, the straightforward remedy I shall propose and argue for here is that we need to control for this. We need to make a conscious effort to gain more nuanced perspectives. This is necessary as a general matter, I believe, if we are to be balanced and well-considered individuals who steer clear of self-imposed delusions and instead act wisely toward the betterment of the world. Yet it is also necessary for our time in particular. More specifically, it is essential for addressing the crisis that human conversation seems to be facing in the Western world at this point in time; a crisis that seems largely the result of insufficient nuance in our perspectives.

Some Remarks on Human Nature

There are certain facts about the human condition that we need to put on the table and contend with. These are facts about our limits and fallibility which should give us all pause about what we think we know — both about the world in general and about ourselves in particular.

For one, we have a whole host of well-documented cognitive biases. There are far too many for me to list them all here, yet some of the most important ones are: confirmation bias (the tendency of our minds to search for, interpret, and recall information that confirms our pre-existing beliefs); wishful thinking (our tendency to believe what we wish were true); and overconfidence bias (our tendency to have excessive confidence in our own beliefs; in one study, people who reported being 100 percent certain about their answer to a question were correct less than 85 percent of the time). And while we can probably all recognize these pitfalls in other people, it is much more difficult to realize and admit that they afflict ourselves as well. In fact, our reluctance to realize this is itself a well-documented bias, known as the bias blind spot.

Beyond realizing that we have fallible minds, we also need to realize the underlying context that has given rise to much of this fallibility, and which continues to fuel it, namely our social context — both that of our evolutionary history and that of our present condition. We humans are deeply social creatures, and it shows at every level of our design, including the level of our belief formation. And we need to be acutely aware of this if we are to form reasonable beliefs with minimal amounts of self-deception.

Yet not only are we social creatures, we are also, by nature, deeply tribal creatures. As psychologist Henri Tajfel showed, one need only assign one group of randomly selected humans the letter “A” and another randomly selected group the letter “B” in order for a surprisingly strong in-group favoritism to emerge. This method for studying human behavior is known as the minimal group paradigm, and it shows something about us that history should already have taught us a long time ago: that human tribalism is like gasoline just waiting for a little spark to be ignited.

This social and tribal nature of ours has implications for how we act and what we believe. It is, for instance, largely what explains the phenomenon of groupthink, which is when our natural tendency toward (in-)group conformity leads to a lack of dissenting viewpoints among individuals in a given group, which then, in turn, leads to poor decisions by these individuals.

Indeed, our beliefs about the world are far more socially influenced than we realize. Not just in the obvious way that we get our views from others around us — often without much external validation or testing — but also in that we often believe things in order to signal to others that we possess certain desirable traits, or that we are loyal to them. This latter way of thinking about our beliefs is quite at odds with how we prefer to think about ourselves, yet the evidence for this unflattering view is difficult to deny at this point.

As authors Robin Hanson and Kevin Simler argue in their recent book The Elephant in the Brain, we humans are strategically self-deceived about our own motives, including when it comes to what motivates our beliefs. Beliefs, they argue, serve more functions than just the function of keeping track of what is true of the world. For while beliefs surely do have this practical function, they also often serve a very different, very social function, which is to show others what kind of person we are and what kind of groups we identify with. This makes beliefs much like clothes, which have the practical function of keeping us warm while, for most of us, also serving the function of signaling our taste and group affiliations. And one of Hanson and Simler’s essential points is that we are not aware of the fact that we do this, and that there is an evolutionary reason for this: if we realized (clearly) that we believe certain things for social reasons, and if we realized that we display our beliefs with overconfidence, we would be much less convincing to those we are trying to convince and impress.

Practical Implications of Our Nature

This brief survey of the natural pitfalls and fallibilities of our minds is far from exhaustive, of course. But it shall suffice for our purposes. The bottom line is that we are creatures who naturally want our pre-existing beliefs confirmed, and who tend to display excessive confidence in these beliefs. We do this in a social context, and many of the beliefs we hold serve non-epistemic functions within this context, which include the tribal function of showing others how loyal we are to certain groups, as well as how worthy we are as friends and mates. In other words, we have a natural pull to impress our peers, not just with our behavior but also with our beliefs. And, for socially strategic reasons, we are quite blind to the fact that we do this.

So what, then, is the upshot of all of this? It is clear, I submit, that these facts about ourselves do have significant implications for how we should comport ourselves. In short, they imply that we have a lot to control for if we aspire to have reasonable beliefs — and our own lazy mind, with all its blindspots and craving for simple comfort, is not our friend in this endeavor. The fact that we are naturally biased and tendentious implies that we should doubt our own beliefs and motives. And it implies that we need to actively seek out the counter-perspectives and nuance that our confirmation bias, this vile bane of reason, so persistently struggles to keep us from accessing.

Needless to say, these are not the norms that govern our discourse at this point in time. Sadly, what plays out right now is mostly the unedited script of tribal, confirmation-biased human nature, unfazed by the prefrontal interventions that seem just about the only hope for rewriting this script into something better.

The Virtues of the Good Conversationalist

Let us elaborate a bit on the implications of our fallibility, and the precepts we should follow if we want to control for these unflattering tendencies and pitfalls of human nature. Recall the study cited above: people who reported being 100 percent certain about their answer to a question were correct less than 85 percent of the time. The fact that we can be so wrong — more than 15 percent of the time when we claim perfect certainty(!) — implies, among other things, that when someone tells us we are wrong, we seem to have a prima facie reason to listen and try our best to understand what they are saying, as they may just be right. Of course, the tendency toward overconfidence will all but surely be shared by this other person as well, who could also be wrong. And our task then lies in finding out which it is. This is the importance of conversation. It is nothing less than the best tool we have, collectively, against being misguided. And that is why we have to become good conversationalists.

What does it take to become that? At the very least, it requires an awareness of our biases, and a deliberate effort to counteract them.

Countering Confirmation Bias

To counteract our confirmation bias, we need to loosen our attachment to pre-existing beliefs, and to seek out viewpoints and arguments that may contradict them. The imperative of doing this derives from nothing less than the basic epistemic necessity of taking all relevant data into consideration rather than a small cherry-picked selection. For the truth is that we all cherry-pick data a little bit here and there in favor of our own position, and so by hearing from people with opposing views, and by examining their cherry-picked data and their particular emphasis and interpretation, we will, in the aggregate, tend to get a more balanced picture of the issue at hand.

And, importantly, we should strive to engage with these other views in a charitable way: by assuming good faith on the part of the proponents of any position; by trying to understand their view as well as possible; and by then engaging with the strongest possible version of that position (i.e. the steel man rather than the straw man version of it). Indeed, it is difficult to overstate just how much the state of human conversation would improve if we all just followed this simple precept: be charitable.

Countering Wishful Thinking

Our propensity for wishful thinking should make us skeptical of beliefs that are convenient and which match up with what we want to be true. If we want there to be a God, and we believe there is one, then this should make us at least a little skeptical of this convenient belief. By extension, our attraction toward the wishful also implies that we should pay more attention to information and arguments that suggest conclusions which are inconvenient or otherwise contrary to what we wish were true. Do we believe the adoption of a vegan lifestyle would be highly inconvenient for us personally? Then we should probably expect to be more than a little biased against any argument in its favor, and indeed, if we suspect the argument has merit, be inclined to ignore it altogether rather than giving it a fair hearing.

Countering Overconfidence Bias

When it comes to correcting for our overconfidence bias, the key virtue to embrace is intellectual humility (or at least so it seems to me). That is, to admit and speak as though we have a limited and fallible perspective on things. In this respect, it also helps to be aware of the social factors that might be driving our overconfidence much of the time. As noted above, we often express certainty in order to signal to third parties, as well as to instill strong doubts in those we engage with. And we do this without being aware of it. This social function of confidence should lead us to update away from bravado and toward being more measured. Again: to be intellectually humble.

Countering In-Group Conformity

Another way in which social forces make us less than reasonable is by compelling us to conform to our peers. As hinted above, our beliefs are subject to in-group favoritism, which highlights the importance of being (especially) skeptical of the beliefs we share with groups that we affiliate closely with, and to practice playing the devil’s advocate against these beliefs. And, by extension, to try to be extra charitable toward the beliefs held by the notional out-group, whether it be “the Left” or “the Right”, “the religious” or “the atheists”.

Beyond that, we should also be aware that our minds likely often paint the out-group in an unfairly unfavorable light, viewing them as much less sincere and well-intentioned — one may even say more evil — than they actually are, however misguided (we may think) their particular views are. And it seems a natural temptation for us to try to score points by publicly broadcasting such a negative view of the out-group as a way of showing our in-group just how unlikely we are to change affiliation.

Thinking in Degrees of Certainty

It seems that we have a tendency to express our views in a very binary, 0-or-1 fashion. We tend to be either clearly for something or clearly against it, be it abortion, efforts to prevent climate change, the death penalty, or universal health care. And it seems to me that what we express outwardly is generally much more absolutist, i.e. more purely 0 or 1, than what happens inwardly, under the hood — perhaps even underneath our conscious awareness — where there is probably more conflicting data than what we are aware of and allow ourselves to admit.

I have observed this pattern in conversations: people will argue strongly for a given position which they continue to insist on, until, quite suddenly it seems, they say that they accept the opposite conclusion. In terms of their outward behavior, they went from 0 to 1 quite rapidly, although it seems likely that the process that took place underneath the hood was much more continuous — a more gradual move from 0 to 1, where the signal “express 1 now” was then passed at some threshold.

An extreme example of similar behavior found in recent events is that of Omarosa Manigault Newman, who was the so-called Director of African-American Outreach for Donald Trump’s presidential campaign in 2016. She went from describing Trump in adulatory terms and calling him “a trailblazer on women’s issues”, to being strongly against him and calling him a racist and a misogynist. It seems unlikely that this shift was based purely on evidence she encountered after she made her adulatory statements. There probably was a lot of information in her brain that contradicted the claim of Trump’s status as such a trailblazer, but which she ignored and suppressed. And the reason why is quite obvious: she had a political aim. She needed to broadcast the message that Trump was a good person to further a campaign and to further her own career tied to this campaign. It was about signaling first, not truth-tracking (which is not to say that she did not sincerely believe what she said, but her sincere belief was probably just conveniently biased).

The important thing to realize, of course, is that this applies to all of us. We are all inclined to be more like a politician than a scientist in many situations. In particular, we are all inclined to believe and express either a pure 0 or a pure 1 for social reasons. And the nature of these social reasons may vary. It may be about signaling opposition to someone who believes the opposite, or about signaling loyalty to a given group (few groups rally around low-credence claims). It may also be about signaling that we have a mind that is of a strong conviction. After all, doubt is generally not sexy. Just consider the words we usually associate with it, such as uncertainty, confusion, and indecision. Certainty, on the other hand, signals strength, and is commonly associated with more positive words such as decisiveness, confidence, resoluteness, and firmness. And so, for this reason as well, it only seems natural that we would generally be inclined to signal certainty rather than doubt, even when we do not possess anything close to justified certainty.

Fortunately, there exists a corrective for our tendency toward 0-or-1 thinking, which is to think in terms of credences along a continuum, ranging from 0 to 1. For one, this would constitute a more honest form of communication, in that it would force us to carefully weigh all the information that our brain keeps hidden from us, and to express our underlying credence in detail — as opposed to merely expressing whether this credence has crossed some given threshold. Yet perhaps even more significantly, thinking in terms of such a continuum would also help subvert the tribal aspect of our either-or thinking by placing us all in the same boat: the boat of degrees of certainty, in which the only thing that differs between us is our level of certainty in any given claim. For example, think how strange it would be for a religious believer to present their religious beliefs by saying that their credence in the existence of a God lies around 93 percent. This is a much weaker statement, in terms of its social signaling function, than a statement such as “I am a Christian”.

Such an honest, more detailed description of one’s beliefs is not good for keeping groups divided by different beliefs. Indeed, it is good for the exact opposite: it helps us move toward a more open and sincere conversation about what we in fact believe and why, regardless of our group affiliations.
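The continuum view sketched above can be made a bit more concrete with a toy model (my own illustration, not something proposed in this essay): Bayes' rule, the standard formal account of how a credence on the 0-to-1 continuum shifts gradually with each piece of evidence, rather than flipping between 0 and 1. The specific numbers below are arbitrary and purely illustrative.

```python
# Toy illustration: updating a credence with Bayes' rule.
# A credence lives on a continuum from 0 to 1 and moves gradually
# as evidence accumulates, instead of jumping between "0" and "1".

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior credence after observing one piece of evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

credence = 0.5  # start undecided
# Four pieces of evidence, each (arbitrarily) twice as likely if the claim is true:
for _ in range(4):
    credence = bayes_update(credence, 0.8, 0.4)
    print(round(credence, 3))  # prints 0.667, 0.8, 0.889, 0.941
```

Note that even after four confirming observations the credence sits near 0.94, not at 1 — the continuum never quite reaches the absolutist endpoints, which is precisely the honesty the essay recommends.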

Different Perspectives Can Be Equally True

There are two common definitions of the term “perspective” that are quite different, yet closely related at the same time. One is “a mental outlook/point of view”, while the other is “the art of representing three-dimensional objects on a two-dimensional surface”. And they are related in that the latter can be viewed as a metaphor for the former: our particular perspective, the representation of the world we call our point of view, is in a sense a limited two-dimensional representation of a more complex, multi-dimensional reality. A representation that is bound to leave out a lot of information about this reality. The best we can do, then, is to try to paint the two-dimensional canvas that is our mind so as to make it as rich and informative as possible about the complex and many-faceted world we inhabit.

An important point to realize in our quest for more balanced and nuanced views, as well as for the betterment of human conversation, is that seemingly conflicting reports of different perspectives on the same underlying reality can in fact all be true, as hinted by the following illustrations:


[Illustrations: the same object casting different reflections (e.g. a square and a circle) when viewed from different angles]


The same object can have very different reflections when viewed from different angles. Similarly, the same events can be viewed very differently by different people who each have their own unique dispositions and prior experiences. And these different views can all be true; John really does see X when he looks at this event, while Jane really does see Y. And, like the square- and circle-shaped reflections above, X and Y need not be incompatible. (A similar sentiment is reflected in the Jain doctrine of Anekantavada.)

And even when someone does get something wrong, they may nonetheless still be reporting the appearance of the world as it is revealed to them as honestly and accurately as they can. For example, to many of us, it really does seem as though the lines in the following picture are not parallel, although they in fact are:


[Illustration: a visual illusion in which parallel lines appear not to be parallel]


Which is merely to state the obvious point that it is possible, indeed quite common, to be perfectly honest and wrong at the same time, which is worth keeping in mind when we engage with people whom we think are obviously wrong; they usually think they are right, and that we are obviously wrong — and perhaps even dishonest.

Another important point the visual illusion above hints at is that we should be careful not to confuse external reality with our representation of it. Our conscious experience of the external world is not, obviously, the external world itself. And yet we tend to speak as though it were.

This is an evolutionarily adaptive illusion no doubt, but it is an illusion nonetheless. All we ever inhabit is, in the words of David Pearce, our own world simulation, a world of conscious experience residing in our head. And given that we all find ourselves stuck in — or indeed as — such separate, albeit mutually communicating bubbles, it is not so strange that we can have so many disagreements about what we think reality is like. All we have to go on is our own private, phenomenal cartoon model of each other and the world at large; a cartoon model that may get many things right, but which is also sure to miss a lot of important things.

Framing Shapes Our Perspective

From the vantage point of our respective world simulations, we each interpret information from the external world with our own unique framing. And this framing in part determines how we will experience it, as demonstrated by the following illustration, where one can change one’s framing so as to either see a duck or a rabbit:


[Illustration: the ambiguous duck–rabbit figure]


The same goes for the following illustration, where one’s framing determines whether one sees a cube from above or from below — or indeed just a two-dimensional pattern without depth:


[Illustration: an ambiguous cube that can be seen either from above or from below]

Sometimes, as in the examples above, our framing is readily alterable. In other cases, however, it can be more difficult to just switch our framing, as when it comes to how different people with different life experiences will naturally interpret the same scenario in very different ways. For instance, a physicist might enter a room and see a lot of interesting physical phenomena there: air consisting of molecules that bounce around in accord with the laws of thermodynamics; sound waves that travel adiabatically across the room; long lamps swinging at their natural frequency while emitting photons. An artistic person, in contrast, may enter the same room and instead see a lot of people. And this person may view these people as a sea of flowing creative potential in the process of being unleashed, inspired by deeply emotional music and a warm glowing light that fits perfectly with the atmosphere of the music.

Although these two perspectives on the events of this same room are very different, neither of them is necessarily wrong. Indeed, they seem perfectly compatible, despite representing what seem to be two very different cognitive styles — two different paradigms of thinking and perceiving, one may say. And what is important to realize is that a similar story applies to all of us. We all experience the world in different ways, due to our differing biological dispositions, life experiences, and vantage points. And while these different experiences are not necessarily incompatible, it can nonetheless be difficult to achieve mutual understanding between such differing perspectives.

Acknowledging Many Perspectives Is Not a Denial of Truth

It should be noted, however, that none of the above makes a case for the relativistic claim that there are no truths. On the contrary, what the above implies is that it is a truth — as hard and strong as could be — that different individuals can have different perspectives and experiences in reaction to the same external reality, and that it is possible for such differing perspectives to all have merit, even if they seem in tension with each other. And to acknowledge this fact by no means amounts to the illogical claim that no given perspective can ever be wrong and make false claims about reality; being wrong, sadly, is clearly all too common. This middle position of rejecting both the claim that there is only one valid perspective and the claim that there are no truths is, I submit, the only reasonable one on offer.

And the fact that there can be merit in a plurality of perspectives implies that, beyond conceiving of our credences along a continuum ranging from 0 to 1, we also need to think in terms of a diversity of continua in a more general sense if we are to gain a fuller, more nuanced understanding that does justice to reality, including the people around us with whom we interact. More than just thinking in terms of shades of grey found in-between the two endpoints of black and white, we need to think in terms of many different shades of many different colors.

At the same time, it is also important to acknowledge the limits of our understanding of other minds and experiences we have not had. This does not amount to some obscure claim about how we each have our own, wholly incommensurable experiences, and hence that mutual understanding between individuals with different backgrounds is impossible. Rather, it is simply to acknowledge that psychological diversity is real, which implies that we should be careful to avoid the so-called typical mind fallacy, as well as to acknowledge that at least some experiences just cannot be conveyed faithfully with words alone to those who have not had them. And this does, at the very least, pose a challenge to the endeavor of communicating with and understanding each other. For example, most of us have never experienced extreme forms of suffering, such as the experience of being burned alive. And beyond describing this class of experiences with thin yet accurate labels such as “horrible” and “bad”, most of us are surely very ignorant — luckily for us.

However, this realization that we do not know what certain experiences are like is in fact itself an important insight that helps expand and advance our outlook. For it at least helps us realize that our own understanding, as well as the range and variety of experiences we are familiar with, are far from exhaustive. With this realization in mind, we can look upon a state of absolute horror and admit that we have virtually no understanding of just how bad it is, which, I submit, constitutes a significantly greater understanding than beholding it with the same absence of comprehension while failing to admit that this comprehension is absent. The realization that we are ignorant itself constitutes knowledge of sorts: the kind of knowledge that makes us rightfully humble.

Grains of Truth in Different Perspectives

Even when two different perspectives indeed are in conflict with each other, this does not imply that they are both entirely wrong, as there can still be significant grains of truth in each of them. Most of today’s widely endorsed perspectives and narratives make a wide range of claims and arguments, and even if not all of these stand up to scrutiny, many of them often do, at least when modified slightly. And part of being charitable is to seek out such grains of truth in a position one does not agree with. This can also help us realize which truths and plausible claims might motivate people to support (what we consider) misguided views, and thus help further mutual understanding among us. Therefore, this seems a reasonable precept to follow as well: sincerely ask what might be the grains of truth in the views you disagree with. One can almost always find something, and often a good deal more than one would naively have thought.

As mentioned earlier, it is also possible for different perspectives to support what seem to be very different positions on the same subject without necessarily being wrong in any way, provided they employ different lenses or look in different directions. Indeed, different perspectives on the same issue are often merely the result of different emphases, each focusing on certain framings and sets of data rather than others. And thus seemingly incompatible perspectives may in fact all be right about the particular aspects of a given subject that they emphasize, which is why it is important to seek out treatments of the same subject from multiple angles. Oftentimes, it is not that novel perspectives show our current perspective to be wrong, but merely that it is not sufficiently nuanced — i.e. that we have failed to take certain things into account, such as alternative framings, particular kinds of data, and critical counter-considerations.

This is, I believe, a common pattern in human conversation, and another sense in which we should be mindful of the possible existence of different grains of truth, namely: when different views on the same subject are all completely true, yet where each of them merely comprises a small grain in the larger mosaic that is the complete truth. And hence we should remind ourselves, as stated in the illustration above, that just because we are right does not mean that the person who says something else on the same subject is wrong.

Having made a general case for nuance, let us now turn our eyes toward our time in particular, and why it is especially important to actively seek to be nuanced and charitable today.

Our Time Is Different

Every period in history likely sees itself as uniquely unique. Yet in terms of how humanity communicates, it is clear that our time is indeed exceptional. For never before in history has human communication been so screen-based as it is today. Or, expressed equivalently: never before has so much of our communication taken place without face-to-face interaction. And this has significant implications for how and what we communicate.

It is clear that our brains process communication through a screen in a very different way. Writing a message in a Facebook group consisting of a thousand people does not, for most of us, feel remotely the same as delivering the same message in front of a crowd of a thousand people. And a similar discrepancy between the two forms of communication is found when we interact with just a single person, and no wonder. Communication through a screen consists of a string of black-and-white symbols. Face-to-face interaction, in contrast, is composed of multiple streams of information. We read off important cues from a person’s face and posture, as well as from the tone and pace of their voice.

All this information provides a much more comprehensive, one might indeed say more nuanced, picture of the state of mind of the person we are interacting with. We get the verbal content of the conversation (as we would through a screen), plus a ton of information about the emotional state of the other. And beyond being informative, this information also serves the purpose of making the other person relatable. It makes the reality of their individuality and emotions almost impossible to deny, which is much less true when we communicate through a screen.

Indeed, it is as though these two forms of communication activate entirely different sets of brain circuits. Not only in that we communicate via a much broader bandwidth and likely see each other as more relatable when we communicate face-to-face, but also in that face-to-face communication naturally motivates us to be civil and agreeable. When we are in the direct physical presence of someone else, we have a strong interest in keeping things civil enough to allow our co-existence in the same physical space. When we interact through a screen, however, this is no longer a necessity. The notional brain circuitry underlying peaceful co-existence with antagonists can more safely be put on stand-by mode.

The reality of these differences between the two forms of communication has, I would argue, some serious implications. First of all, it highlights the importance of being aware that these two forms of communication indeed are very different, and that we are, in various ways, quite handicapped communicators when we communicate through a screen, often entering a state of mind that perhaps only a sociopath would be able to maintain in a face-to-face interaction. A handicap that further implies that we should be even more aware of the tendencies reviewed above when interacting through a screen, as these tendencies then become much easier and more tempting to engage in. It is (even) more difficult to relate to those who disagree with us, and we have (even) less of an incentive to understand them properly and be civil. Which is to say that it is (even) more difficult to be charitable. Written communication through a screen makes it easier than ever before to paint the out-group antagonists we interact with in an unreasonably unfavorable light.

And our modern means of communication arguably also make it easier than ever before to not interact with the out-group at all, as the internet has made it possible for us to diverge into our own respective in-group echo chambers to an extent not possible in the past. It is therefore now easy to end up in communities in which we continuously echo data that supports our own narrative, which ultimately gives us a one-sided and distorted picture of reality. And while it may be easier than ever to find counter-perspectives if we were to look for them, this is of little use if we mostly find ourselves collectively indulging in our own in-group confirmation bias. As we often do. For instance, feminists may find themselves mostly informing each other about how women are being discriminated against, while men’s rights activists may disproportionately share and discuss ways in which men are discriminated against. And so by joining only one of these communities, one is likely to end up with a skewed, insufficiently nuanced view of reality.

This mode of interaction has serious sociological implications. Indeed, the change in our style of interaction brought about by the internet is probably in large part why, in spite of the promise technology seemed to hold to connect us with each other, we now appear increasingly balkanized, divided along various lines in ways that feed into our tribal nature all too well. Democrats and Republicans, for example, increasingly see each other as a “threat to the nation’s well-being” — significantly more so than they did even just ten years ago. This is a real problem that does not seem to be going away on its own. And one of the greatest hopes we have for improving this situation is, I submit, to become aware of and actively try to control for our own pitfalls. Especially when we interact through screens.

With all the information we have reviewed thus far in mind, let us now turn to some concrete examples of heated issues that divide people today, and where more nuanced perspectives and a greater commitment to being charitable are desperately needed. (I should note, however, that given the brevity of the following remarks, what I write here on these issues is, needless to say, itself bound to fail to express a highly nuanced perspective, as that would require a longer treatment. Nonetheless, the following brief remarks will at least gesture at some ways in which we can generally be more nuanced about these topics.)

Sex Discrimination

As hinted above, there are two groups that seem to tell very different stories about the state of sex discrimination in our world today. On the one hand there are the feminists, who seem to argue that women generally face much more discrimination than men, and on the other, there are the so-called men’s rights activists, who seem to argue that men are, at least in some parts of the world, generally the more discriminated-against sex. And these two claims surely cannot both be right, can they?

If one were to define sex discrimination in terms of some single general measure, a “General Discrimination Factor”, then no, they could not both be right. Yet if one instead talks about concrete forms of discrimination, then it is entirely possible, and indeed clearly the case, that women are discriminated against more than men in some respects, while men face more discrimination in other respects. And it is arguably also much more fruitful to talk about such concrete cases than it is to talk about discrimination “in general”. (In response to those who insist that it is obvious that women face more discrimination everywhere, almost regardless of how one constructs such a general measure, I would recommend watching the documentary The Red Pill, and, for a more academic treatment, reading David Benatar’s The Second Sexism.)

For example, it is a well-known fact that women have, historically, been granted the right to vote much later than men have, which undeniably constitutes a severe form of discrimination against women. Similarly, women have also historically been denied the right to pursue a formal education, and they still are in many parts of the world. In general, women have been denied many of the opportunities that men have had, including access to professions in which they were clearly more than competent to contribute. These are all undeniable facts about undeniably severe forms of discrimination.

However, tempting as it may be to infer, none of this implies that men have not also faced severe discrimination in the past, nor that they escape such discrimination today. For example, it is generally only men who have been subject to conscription — i.e. forced duty to enlist for state service, such as in the military. Historically, as well as today, men have been forced by law to join the military and go to war, often without returning — whether they wanted to or not (sure, some men wanted to join the military, yet the fact that some men wanted to do this does not imply that making it compulsory for virtually all men and only men is not discriminatory; as a side note, it should be noted that many feminists have criticized conscription).

Thus, at a global level, it is true to say that, historically as well as today, women have generally faced more discrimination in terms of their rights to vote and pursue an education, as well as in their professional opportunities in general, while men have faced more discrimination in terms of state-enforced duties.

Different forms of discrimination against men and women are also present at various other levels. For example, in one study where the same job application was sent to different scientists, and where half of the applications had a female name on them, the other half a male name, the “female applicants” were generally rated as less competent, and the scientists were willing to pay the “male applicants” over 14 percent more.

The same general pattern is reported by those who have conducted something like a controlled experiment in being a man and a woman “from the inside”, namely transgender men (those who have transitioned from being a woman to being a man). Many of these men report being viewed as more competent after their transition, as well as being listened to more and interrupted less. This also fits with the finding that both men and women seem to interrupt women more than they interrupt men.

At the same time, many of these transgender men also generally report that people seem to care less about them now that they are men. As one transgender man wrote about the change in his experience:

What continues to strike me is the significant reduction in friendliness and kindness now extended to me in public spaces. It now feels as though I am on my own: No one, outside of family and close friends, is paying any attention to my well-being.

Such anecdotal reports seem well in line with the finding that both men and women show more aggression toward men than women, as well as with recent research (see page 137) conducted by social psychologist Tania Reynolds, which among other things found that:

[…] female harm or disadvantage evoked more sympathy and outrage and was perceived as more unfair than equivalent male harm or disadvantage. Participants more strongly blamed men for their own disadvantages, were more supportive of policies that favored women, and donated more to a female-only (vs male-only) homeless shelter. Female participants showed a stronger in-group bias, perceiving women’s harm as more problematic and more strongly endorsed policies that favored women.

As these examples show, it seems that men and women are generally discriminated against in different ways. And it is worth noting that these different forms of discrimination are probably in large part the natural products of our evolutionary history rather than some deliberate, premeditated conspiracy (which is obviously not to say that they are ethically justified).

Yet deliberation and premeditation are exactly what is required if we are to step beyond such discrimination. More generally, what seems required is that we get a clearer view of the ways in which women and men face discrimination, and that we then take active steps toward remedying these problems. Something that is only possible if we allow ourselves enough of a nuanced perspective to admit that both women and men are subject to serious discrimination and injustice.

Intersectionality

It seems that many progressives are inspired by the theoretical framework called intersectionality, according to which we should seek to understand many aspects of the modern human condition in terms of interlocking forms of power, oppression, and privilege. One problem with relying on this framework is that it can easily become like seeing only nails when all one has is a hammer. If one insists on understanding the world predominantly in terms of oppression and social privilege, one risks seeing it in many places where it is not, as well as overemphasizing its relevance in many cases — and, by extension, underemphasizing the importance of other factors.

As with most popular ideas, there is no doubt a significant grain of truth in some of what intersectional theory talks about, such as the fact that discrimination is a very real phenomenon, that privilege is too, and that both of these phenomena can compound. Yet the narrow focus on only social explanations and versions of these phenomena means that intersectional theory misses a lot about the nature of discrimination and privilege. For example, some people are privileged to be born with genes that predispose them to be very happy, while others have genes that dispose them to have chronic depression. Two such people may be of the same race, gender, and sexuality, and they may be equally able-bodied. Yet these two people will most likely have very different opportunities and quality of life. A similar thing can be said about genetic differences that predispose individuals to have a higher or lower IQ, as well as about genetic differences that make people more or less physically attractive.

Intersectional theory seems to have very little to say about such cases, even as these genetic factors seem able to impact opportunities and quality of life to a similar degree as discrimination and social exclusion. Indeed, it seems that intersectional theory actively ignores, or at the very least underplays, the relevance of such factors — what may be called biological privileges in general — perhaps because they go against the tacit assumption that inequity and other bad things must be attributable to an oppressive agent or social system in some way, as opposed to just being the default outcome one should expect to find in an apathetic universe.

In general, it seems that intersectional theory significantly underestimates the importance of biology, which is by no means a mistake unique to intersectionality. And it is indeed understandable how such an underestimation can emerge. For the truth is that many of the most relevant human traits, including those of personality and intelligence, are strongly influenced by both genetic and environmental factors. Indeed, around 40-60 percent of the variance of such traits tends to be explained by genetics, and, consequently, the amount of variance explained by the environment lies roughly in this range as well. This means that, with respect to these traits, it is both true to say that cultural factors are extremely significant, and to say that biological factors are extremely significant. And the mistake that many seem to make, including many proponents of intersectionality, is to believe that one of these truths rules out the other.

Another critique one can direct toward intersectional theory is that it often makes asymmetrical claims about how one group, “the privileged”, is unable to understand the experiences of another group of individuals, “the unprivileged”, whatever form the privilege and lack thereof may take. Yet it is rarely conceded that this argument can also, with roughly as much plausibility, be made the other way around: that the (allegedly) unprivileged might not fully understand the experience of the (allegedly) privileged, and that they may, in effect, overstate the differences in their experience, and overstate how easy the (allegedly) privileged in fact have it. A commitment to intellectual openness and honesty would at least require us to not dismiss this possibility out of hand.

A similar critique that intersectional theorists ought to contend with is that some of the people whom intersectional theory maintains are discriminated against and oppressed themselves argue that they are not, and some indeed further argue that many of the solutions and practical steps supported by intersectional theorists are often harmful rather than beneficial. Such voices must, at least, be counted as weak anomalies relative to the theory, and considered worthy of serious engagement.

More generally, a case can be made that intersectional theory greatly overemphasizes group membership and identities in its analyses of and attempts to address societal problems. As Brian Tomasik notes:

[…] I suspect it’s tempting for our tribalistic primate brains to overemphasize identity membership and us-vs.-them thinking when examining social ills, rather than just focusing on helping people in general with whatever problems they have. For example, I suspect that one of the best ways to help racial minorities in the USA is to reduce poverty (such as through, say, universal health insurance), rather than exploring ever more intricate nuances of social-justice theory.

A regrettable complication that likely bolsters the focus of intersectionalists is that many people seem to flatly deny that there are any grains of truth to any of the claims intersectional theory makes. Some claim, for instance, that there is no such thing as being transgendered, and that there barely is such a thing as racial or sex discrimination in the Western world today. Rather than serving as a meaningful critique of the overreaches of intersectionality, such unnuanced and ill-informed statements seem likely to only help convince intersectionalists that they are uniquely right while others are dangerously wrong, as well as to suggest to them that more radical tactics may be needed, since current tactics clearly do not work to make other people see basic reality for what it is.

This speaks to the more general point that if we make measured views a rarity, and convince ourselves that all one can do is join either team A or team B — e.g. “camp discrimination exists” or “camp discrimination does not exist” — then we only push people toward division. We risk finding ourselves in a runaway spiral in which people eagerly try to signal that they do not belong to the other team, which may in turn push us toward ever more extreme views. The alternative option to this tribal game is to simply aspire toward, and express, measured and nuanced views. That might just be the best remedy against such polarization and toward reasonable consensus. Whether our tribal brains indeed want such a consensus is, of course, a separate question.

A final critique I would direct at mainstream intersectional theory is that, despite its strong focus on unjustified discrimination, it nonetheless generally fails to acknowledge and examine what is, I have argued, the greatest, most pervasive, and most harmful form of discrimination that exists today, namely: speciesism, the unjustified discrimination against individuals based on their species membership. The so-called argument from species overlap is rarely examined, nor are the implications that follow, including when it comes to what equality in fact entails. This renders mainstream versions of intersectionality, as a theory of discrimination against vulnerable individuals, a complete failure.

Political Correctness

Another controversial issue closely related to intersectionality is that of political correctness. What do we mean by political correctness? The answer is actually not straightforward, since the term has a rather complex history throughout which it has had many different meanings. Yet one sense of the term that was at least prominent at one point refers simply to conduct and speech that embodies fairness and common decency toward others, especially in a way that avoids offending particular groups of people. In this sense of the term, political correctness is about not referring to people with ethnic slurs, such as “nigger” and “paki”, or homophobic slurs, such as “faggot” and “dyke”. A more recent sense of the term, in contrast, refers to instances where such a commitment to not offend people has been taken too far (in the eyes of those who use the term), which is arguably the sense in which it is most commonly used today.

This then leads us to what seems the quintessential point of contention when it comes to political correctness, namely: what is too far? What does the optimum level of decency entail? And the only reasonable answer, I believe, will have to be a nuanced one found between the two extremes of “nothing is too offensive” and “everything is too offensive”.

Some seem to approach this subject with the rather unnuanced attitude that feelings of being offended do not matter in any way whatsoever. Yet this view seems difficult to maintain, at least if one is oneself called a pejorative name in an unjoking manner. For most people, such name-calling is likely to hurt — indeed, it can easily hurt quite a lot. And significant amounts of hurt and unpleasantness do, I submit, matter. A universe with fewer and less intense feelings of offense is, other things being equal, better than a universe with more numerous and more intense feelings of offense.

Yet the words “other things being equal” should not be missed here. For the truth is that there can be, indeed there clearly is, a tension between 1) the risk of offending people and 2) talking freely and honestly about the realities of life. And it is not clear what the optimal balance is.

Yet what is quite clear, I would argue, is that if we cannot talk in an unrestricted way about what matters most in life, then we have gone too far. In particular, if we cannot draw distinctions between different kinds of discrimination and forms of suffering, and if we are not allowed to weigh these ills against each other to assess which are most urgent, then we have gone too far. For if we deny ourselves a clear sense of proportion with respect to the problems of the world, we end up undermining our ability to sensibly prioritize our limited resources in a world that urgently demands reasonable prioritization. And this is, I submit, much too high a price to pay to avoid the risk of offending people.

Relationship Styles and Promiscuity

Another subject that a lot of people seem to express quite strong and unnuanced positions on is that of sexual promiscuity and relationship styles. For example, some claim that strict monogamy is the only healthy and viable choice for everybody, while others seem to make more or less the same claim about polyamory: that most people would be happier if they were in loving, sexual relationships with more than one person, and that only our modern culture prevents us from realizing this. Similar opinions can be found on the subject of casual sex. Some say it is not a big deal, while others say it is — for everyone.

An essential thing to acknowledge on this subject, it seems, is the reality of individual differences. Most of these strong opinions seem to arise from the fallacious assumption that other people are much like ourselves — i.e. the typical mind fallacy. The truth is that some may well thrive best in monogamous relationships, while others may thrive best in polyamorous relationships; some may well thrive having casual sex, some may not. And in the absence of systematic studies, it is difficult to say how people are distributed in these respects — in terms of what circumstances people thrive best in — as well as how much this distribution can be influenced by culture.

None of this is to say that there is no such thing as human nature when it comes to sexuality, but merely that it should be considered an open question just what this nature is exactly, and how much plasticity and individual variation it entails. And we should all admit this much.

Politics and Making the World a Better Place

The subjects of politics and “how to make the world a better place” more generally are both subjects on which people tend to have strong convictions, limited nuance, and powerful incentives to signal group loyalty. Indeed, they are about as good examples as any of subjects where it is important to be charitable and actively seek out nuance, as well as to acknowledge one’s own biased nature.

A significant step we can take toward thinking more clearly about these matters is to adopt the aforementioned virtue of thinking in terms of continuous credences. Just as the expression of a “merely” high credence in the existence of the Christian God is more conducive to open-minded conversation, so is having a “merely” high credence in any given political ideology, principle, or policy likely more conducive to honest and constructive conversations and greater mutual updating.

If nothing else, the fact that the world is so complex implies that we will at least have considerable uncertainty about what the consequences of our actions will be. In many cases, we simply cannot know with great certainty which policy or candidate is going to be best (relative to any set of plausible values) all things considered. This suggests that our strong convictions about how a given political candidate or policy is all bad, and about how immeasurably greater the alternatives would be, are likely often overstated. More generally, it implies that our estimates of which actions are best to take, in the realm of politics in particular as well as with respect to improving the world in general, should probably be more measured and humble than they tend to be.

For example: what is your credence that Donald Trump was a better choice (with respect to your core values) than Hillary Clinton for the US presidency in 2016? I suspect most people’s credence on this question is either much too low or much too high relative to what can be justified. For even if one thinks his influence is clearly positive or clearly negative in the short term, this still leaves open the question of what the long-term effects will be. If the short-term effects are negative, for instance, it does not seem entirely implausible that there will be a counter-reaction in the future whose effects will end up being better in the long term, or vice versa. This consideration alone should dampen one’s credence somewhat — away from the extremes and closer toward the middle. A similar argument could be made about grave atrocities and instances of extreme suffering occurring today and in the near future: although it seems unlikely, we cannot exclude that these may in fact lead to a future with fewer atrocities and less suffering in the long term. (Note, however, that none of this implies that one should not fight hard for what one believes to be the best thing; even if one has only, say, a 60 percent credence in some action being better than another, it can still make perfect sense to push very hard for this seemingly better option.)

Or, to take another concrete example: would granting everyone a universal basic income be better (relative to your values) than not doing so? Again, being absolutely certain in either a positive or a negative answer to this question is hardly defensible. It would seem more reasonable to maintain a credence that lies somewhere in between. (And in relation to what one’s underlying values are, I would argue that this is one of the very first things we need to reflect upon if we are to make a reasonable effort toward making the world a better place.)

A similar point can be made about existing laws and institutions. When we are young and radical, we have a tendency to find existing laws and social structures to be obviously stupid compared to the brilliant alternatives we ourselves envision. Yet, in reality, our knowledge of the roles played by these existing systems, as well as the consequences of our proposed alternatives, will tend to be quite limited in most cases. And it seems wise to admit this much, and to adjust our credences and plans of action accordingly.

A related pitfall worth avoiding is that of believing a single political candidate or policy to have purely good or purely bad effects; such an outcome seems extraordinarily unlikely. In the words of economist Thomas Sowell, there are no perfect solutions in the real world, only trade-offs. Similarly, it is also worth steering clear of the tendency to look to a single intellectual for the answers to all important questions. For the truth is that we all have blind spots and false beliefs, and virtually everyone is going to be ignorant of things that others would consider common knowledge. Indeed, no single person can read and reflect widely and deeply enough to be an expert on everything of importance. Expertise requires specialization, which means that we must look to different experts if we are to find expert views on a wide range of topics. In other words, the quest for a more complete and nuanced outlook requires us to engage with many different thinkers from very different disciplines.

The preceding notes about ways in which we could be more nuanced on various concrete topics are, of course, merely scratching the surface. Yet they hopefully do serve to establish the core point that nuance is essential if we are to gain a balanced understanding of virtually any complicated issue.

Can We Have Too Much Nuance?

In a piece that argues for the virtues of being nuanced, it seems worth asking whether I am being too one-sided. Might I not be overstating the case in its favor, and should I not be a bit more nuanced about the utility of nuance itself? Indeed, might we not be able to have too much nuance in some cases?

I would be the first to admit that we probably can have too much nuance in many cases. I will grant that in situations that call for quick action, and where there is not much time to build a nuanced perspective, it may well often be better to act on one’s limited understanding rather than on a more nuanced yet harder-won picture. There are many situations like this, no doubt. Yet at the level of our public conversations, this is not often the case. We usually do have time to build a more nuanced picture, and we are rarely required to act promptly. Indeed, we are rarely required to act at all. And, unthinkable as it may seem, it could just be that expressions of agnosticism, and perhaps no public expressions at all on a given hot topic, would tend to serve everyone better than expressions of poorly considered views.

One could perhaps attempt to make a case against nuance with reference to examples where near-equal weight is granted to all considerations and perspectives — reasonable and less reasonable ones alike. This, one may argue, is a bad thing, and surely demonstrates that there is such a thing as too much nuance. Yet while I would agree that weighing arguments so blindly and undiscerningly is unreasonable, I would not consider this an example of too much nuance as such. For being nuanced does not mean giving equal weight to all arguments a posteriori, after all the relevant arguments have been presented. Instead, what it requires is that we at least consider these relevant arguments, and that we strive to be minimally prejudiced toward them a priori. In other words, the quest for appropriately nuanced perspectives demands that we grant equality of opportunity to all arguments, not equality of outcome.

Another objection one may be tempted to raise against being nuanced and charitable is that it implies that we should be submissive and over-accommodating. This does not follow, however. For to say that we should be charitable is not to say that we cannot be firm in our convictions when such firmness is justified, much less that we should ever tolerate disrespect or unfair treatment; we should not. We have no obligation to tolerate bullies and intimidators, and if someone repeatedly fails to act in a respectful, good-faith manner, we have every right, and arguably even good reason, to remove ourselves from them. After all, the maxim “assume the other person is acting in good faith” does not entail that we should not update this assumption as soon as we encounter evidence that contradicts it. And to assert one’s boundaries and self-respect in light of such updating is perfectly consistent with a commitment to being charitable.

A more plausible critique of being nuanced is that it might in some cases be strategically unwise, and that one should instead advocate for one’s views in an unnuanced, polemic manner in order to better achieve one’s objectives, at least in some cases. I think this is a decent point. Yet I think there are also good reasons to think that this will rarely be the optimal strategy when engaging in public conversations. First of all, we should acknowledge that, even if we were to grant that this style of communication is superior in a given situation, it still seems advantageous to possess a nuanced understanding of the counter-arguments. For, if nothing else, such an understanding would seem to make one better able to rebut these arguments, regardless of whether one then does so in a nuanced way or not.

And beyond this reason to acquire a nuanced understanding, there are also very good reasons to express such an understanding, as well as to treat the counter-arguments in as fair and measured a way as one can. One reason is the possibility that we might ourselves be wrong, which means that, if we want an honest conversation through which we can make our beliefs converge toward what is most reasonable, then we ourselves also have an interest in seeing the best and most unbiased arguments for and against different views. And hence we ourselves have an interest in moderating our own bravado and confirmation bias, which actively keep us from evaluating our pre-existing beliefs as impartially as we should, as well as an interest in trying to express our own views in a measured and nuanced fashion.

Beyond that, there are also reasons to believe that people will be more receptive to one’s arguments if one communicates them in a way that demonstrates a sophisticated understanding of relevant counter-perspectives, and which lays out opposing views as strongly as possible. This will likely lead people to conclude that one’s perspective is at least built in the context of a sophisticated understanding, and it might thus plausibly be read as an honest signal that this perspective may be worth listening to.

Finally, one may object that some subjects just do not call for any nuance whatsoever. For example, should we be nuanced about the Holocaust? This is a reasonable point. Yet even here, I would argue that nuance is still important, in various ways. For one, if we do not have a sufficiently nuanced understanding of the Holocaust, we risk failing to learn from it. For example, to simply believe that the Germans were evil would appear the dangerous thing, as opposed to realizing that what happened was the result of primitive tendencies that we all share, as well as the result of a set of ideas which had a strong appeal to the German people for various reasons — reasons that we should seek to understand.

This is all descriptive, however, and so none of it implies taking a particularly nuanced stance on the ethical status of the Holocaust. Yet even in this respect, a fearless search for nuance and perspective can still be of great importance. In terms of the moral status of historical events, for instance, we should have enough perspective to realize that the Holocaust, although it was the greatest mass killing of humans in history, was by no means the only one; and hence that its ethical status is arguably not qualitatively unique compared to other similar events of the past. Beyond that, we should also admit that the Holocaust is not, sadly, the greatest atrocity imaginable, neither in terms of the number of victims it had, nor in terms of the horrors imposed on its victims. Greater atrocities than the Holocaust are imaginable. And we ought to both seriously contemplate whether such atrocities might indeed be actual, as well as to realize that there is a risk that atrocities that are much greater still may emerge in the future.


Almost everywhere one finds people discussing contentious issues, nuance and self-scrutiny seem to be in short supply. And yet the most essential point of this essay is not really one about looking outward and pointing fingers at others. Rather, the point is, first and foremost, that we all need to look into the mirror and ask ourselves some uncomfortable questions. Self-scrutiny can, after all, only be performed by ourselves.

“How might I be obstructing my own quest for truth?”

“How might my own impulse to signal group loyalty bias my views?”

“What beliefs of mine might mostly serve social rather than epistemic functions?”

Indeed, we all need to take a hard look in the mirror and let ourselves know that we are sure to be biased and wrong in many ways. And more than just realizing that we are wrong and biased, we also need to realize that we are limited creatures. Creatures who view the world from a limited vantage point from which we cannot fully integrate and comprehend all perspectives and modes of consciousness — least of all those we have never been close to experiencing.

We need to remind ourselves, continually and insistently, that we should be charitable and measured, and that we should seek out the grains of truth that may exist in different views so as to gain a more nuanced understanding that better reflects the true complexity of the world. Not least ought we remind ourselves that our brains evolved to express overconfident and unnuanced views for social reasons — especially in ways that favor our in-group and oppose our out-group. And we need to do a great deal of work to control for this. We should seek to scrutinize our in-group narrative, and be especially charitable to the out-group narrative.

None of us will ever be perfect in these regards, of course. Yet we can at least all strive to do better.

Why I Used to Consider the Absence of Sentience Tragic

Whether one considers the absence of sentience bad or neutral — or indeed as good as can be — can matter a lot for one’s ethical and altruistic priorities. Specifically, it can have significant implications for whether one should push for smaller or larger future populations.

I used to be a classical utilitarian. Which is to say, I used to agree with the statement “we ought to maximize the net amount of happiness minus suffering in the world”. And given this view, I found it a direct, yet counter-intuitive implication that the absence of sentience is tragic, and something we ought to minimize by bringing about a maximally large, maximally happy population. My aim in this essay is to briefly present what I consider the main reason why I used to believe this, and also to explain why I no longer hold this view. I am not claiming the reasons I had for believing this are shared by other classical utilitarians, yet I suspect they could be, at least by some.

The Reason: Striving for Consistency

My view that the absence of sentience is tragic and something we ought to prevent mostly derived, I believe, from a wish to be consistent. Given the ostensibly reasonable assumption that death is bad, it would seem to follow, I reasoned, that since death merely amounts to a discontinuation of life — or, seen in a larger perspective, a reduction of the net amount of sentience — the reduction of sentience caused by not giving birth to a new (happy) life should be considered just as bad as the end of a (happy) life. This was counter-intuitive, of course, yet I did not, and still do not, consider immediate intuitions to be the highest arbiters of moral wisdom, and so it did not seem that weird to accept this conclusion. The alternative, if I were to be consistent, would be to bring my view of death in line with my intuition that the absence of sentience is not bad. Yet this was too implausible, since death surely is bad.

This, I believe, was the reasoning behind my considering it a moral obligation to produce a large, happy population. Not doing so would, in some ways, be the moral equivalent of committing genocide. My view is quite different now, however.

My Current View of My Past View

I now view this past reasoning of mine as akin to a deceptive trick, like a math riddle where one has to find where the error was made in a series of seemingly valid deductions. You accept that death is tragic. Death means less sentient life than continued life, other things being equal. But a failure to bring a new individual into the world also means less sentient life, other things being equal. So why would you not consider a failure to bring an individual into the world tragic as well?

My current response to this line of reasoning is that death indeed is bad, yet that it is not intrinsically so. What is bad about death, I would argue, is the suffering it causes; not the discontinuation of sentience per se (after all, a discontinuation of sentience occurs every night we go to sleep, which we rarely consider bad, much less tragic). This view is perfectly consistent with the view that it is not tragic to fail to create a new individual.

As I have argued elsewhere, it is somewhat to be expected that we humans consider the death of a close relative or group member to be tragic and highly worth avoiding, given that such a death would tend, evolutionarily speaking, to have been costly to our own biological success in the past. In other words, our view that death is tragic may in large part stem from a penalizing mechanism instilled in us by evolution to prevent us from losing fellow assets who served our hidden biological imperative — assets who had invested a lot into us and whom we had invested a lot into in return. And I believe that my considering the absence of sentience tragic was, crudely speaking, a matter of extending this penalizing mechanism so that it pertained to all insentient parts of the universe. An extension I now consider misguided. I now see nothing tragic whatsoever about the fact that there is no sentient life on Mars.

Other Reasons

There may, of course, be other reasons why a classical utilitarian, including my past self, would consider the absence of sentience tragic. For instance, it seems reasonable to suspect that we, or at least many of us, have an inbuilt drive to maximize the number of our own descendants, or to maximize the future success of our own tribe (the latter goal would probably have aligned pretty well with the former throughout our evolutionary history). It is not clear what would count as “our own tribe” in modern times, yet it seems that many people, including many classical utilitarians, now view humanity as their notional tribe.

A way to control for such a hidden drive, then, would be to ask whether we would accept it if the universe were filled up with happy beings who do not belong to our own tribe. For example, would we accept it if our future light cone were filled up by happy aliens who, in their quest to maximize net happiness, replaced human civilization with happier beings? (That is, a utilitronium shockwave of sorts.) An impartial classical utilitarian would happily accept this. The question is whether a human classical utilitarian would, too.

Explaining Existence

First written: Aug 2018, Last update: Nov 2018.


“Not how the world is, is the mystical, but that it is.”

(“Nicht wie die Welt ist, ist das Mystische, sondern dass sie ist.”)

Ludwig Wittgenstein


Why is there something rather than nothing? How can we explain the fact of existence?

This most fundamental question may be worth pondering for various reasons. Such pondering may help sharpen our thinking about the nature of the world, our place within it, and the scope of our understanding. And it may also just lead us to some significant answers to the question itself.

Is Non-Existence Coherent?

I would argue that the key to (dis)solving this mystery lies in questioning the coherence of the idea that there could be nothing in the first place — the notion that non-existence could exist. For existing is, after all, exactly what non-existence, by definition, does not. Non-being, by definition, cannot be. Yet, in asking why there is not nothing, we are indeed, somehow, imagining that it could. Essentially, what we are asking is: why is there not “non-isness”? Why could non-being not have been? The answer, I submit, is that the being of non-being is a contradiction in terms.

If existence were not the case, this would imply non-existence being the case, which is an incoherent notion. More specifically, to say that non-being could be is to contradict the principle of non-contradiction, as one then asks for something, or rather “nothing”, to both be and not be at the same time.

As David Pearce put it:

“One can apparently state the epistemic possibility of nothing having existed rather than something. But it’s unclear how it could make cognitive sense to talk of the epistemic possibility of nothing-or-other having even been the case. For the notion of something-or-other being the case is about as conceptually primitive as one can get. For just what is the (supposedly non-self-refuting) alternative with which one would be contrasting the generic notion of existence – in the sense of something-or-other being the case – that we have at present? The notion doesn’t seem to make any sense. It’s self-stultifying.”

“Why Does Anything Exist”, section nine.

Philosopher Bede Rundle made a similar point: “We cannot conceive of there being nothing, but only of nothing being this or that” (p. 113).

Furthermore, even if we were to assume that non-existence could be the case, we would still end up with the conclusion that it actually cannot. For if non-existence were the case, then its being the case would, quite obviously, be a truth, which implies that this truth would at least (also) exist. And yet this truth is not nothing. In other words, it implies the existence of (more of) something. And such a supposedly empty state would in fact imply other properties as well, such as the property of being one (not two or more, as it contains no separation, nor zero, since it does exist by assumption), as well as the property of being free from contradictions (genuine contradictions could not possibly exist in any possible state of existence, much less one that is purportedly empty). Thus, even the notion of a state with no properties other than its mere being is incoherent.

Another way to realize that there could not possibly be nothing, even if we were to pretend that the notion is coherent, is to think in terms of necessary and contingent facts (following the reasoning of Timothy O’Connor found here). For the suggestion that there might have been nothing amounts to the claim that existence might merely be a contingent, not a necessary fact. Yet the fact that we are here proves that existence was, at the very least, a possibility. In other words, the reality of (at least) the possibility of existence is undeniable. And yet the reality of the possibility of existence is not nothing. It is, in fact, something. Thus, even if we assume that the fact of existence is merely contingent, we still end up with the conclusion that it is in fact necessary. The existence of the mere possibility of existence necessarily implies, indeed amounts to, existence in full, and hence the suggestion that existence may merely be contingent, and that there could instead have been absolutely nothing, is revealed to be impossible and indeed incoherent in this way as well.

This may be considered an answer to why there is something rather than nothing: the alternative is simply incoherent, and hence logically impossible. Only “something” could conceivably be the case. And thus, contra Wittgenstein, the real mystery to explain is indeed how the world is, not that it is; to explain which properties the world has, not that it has any. And part of this mystery is to explain why we ever considered the existence of non-existence — as opposed to a very different state of existence — a coherent possibility in the first place, and, by extension, why we ever considered the non-existence of non-existence any more mysterious than the non-existence of square circles.*

No Purpose or Reason Behind Existence, Only Within

The all-inclusive nature of existence implies that, just as there cannot be a mechanism or principle that lies behind or beyond existence, there could not be a reason or purpose behind it either, since behind and beyond existence lies only that which does not exist. And hence there could not possibly be an ultimate purpose, in this sense at least, behind our being here.

Yet this by no means implies, contrary to what may be naturally supposed, that reasons and purposes, of the most real and significant kinds, do not exist within existence. Indeed, it is obvious that they do. For instance, the ability to pursue purposes and act on reasons has clearly emerged over the course of evolution. Beyond that, it is also clear, at least to me, that some states of the world — especially states of extreme suffering — are truly more disvaluable than others, and hence, I would argue, that we have truly normative reasons to act so as to minimize the realization of such disvaluable states. Indeed, I would argue that this endeavor is our highest and ultimate purpose; how to best pursue it our highest and ultimate question.


*And if, and that arguably is a huge if, existence is identical with what we call “physical existence”, then the argument above shows that a physical world must exist, and that its absence is incoherent. Again, this is provided that we assume existence to be identical with “the physical”, which is just an assumption, although I believe one can make a decent case that we have no strong reasons to believe in such a thing as non-physical existence, and hence no strong reasons to doubt this assumption. And if one then further believes that “the physical” is identical with “the mental” — in other words, if one holds a monist ontology that considers both physical and mental descriptions of the world equally valid — then the argument above shows the necessity of the existence of this monist reality. And all that would then be left to explain, if this assumption happened to be true, is “just” what particular properties and relations exist within this monist reality.

Beyond that, one can also use the contingency-versus-necessity argument we used above to argue for the necessity of physical existence without assuming that physical existence is coterminous with existence. For the claim that the non-existence of the physical world could have obtained also amounts to claiming that its existence is merely a contingent fact: a possibility that could have not obtained. Yet the fact that the physical world does exist proves that its existence is necessarily (at least) a possibility. Thus, by this reasoning, there must necessarily exist (at least) a potential for the physical world as we know it to emerge. And yet such a potential is not nothing, nor is it non-physical proper, at least not in the widest sense of the term “physical”, which includes not only physical actualities but also physical potentials, provided they exist.

One may here object that the notions of contingency and necessity ultimately do not make sense, or that they are just human ideas that we cannot derive deep metaphysical truths from. Yet it should then be noted that the notion of contingency is exactly what a claim such as “physical reality might not have been” itself rests upon. So if these terms and the argument above make no sense or have no bearing on the actual nature of reality, then neither does the problem that the argument is trying to address in the first place.

Darwinian Intuitions and the Moral Status of Death

“Nothing in biology makes sense except in the light of evolution”, wrote evolutionary biologist Theodosius Dobzhansky. And given that our moral psychology is, at least in large part, the product of our biology, one can reasonably make a similar claim about our moral intuitions: that we should seek to understand these intuitions in light of the evolutionary history of our species. This also seems important for our thinking about normative ethics, since such an understanding is likely to help inform our ethical judgments; by helping us better understand the origin of our intuitive moral judgments, and how they might be biased in various ways.

An Example: “Julie and Mark”

A commonly cited example that seems to demonstrate how evolution has firmly instilled certain moral intuitions into us is the following thought experiment, first appearing in a paper by Jonathan Haidt:

Julie and Mark are brother and sister. They are traveling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide not to do it again. They keep that night as a special secret, which makes them feel even closer to each other. What do you think about that? Was it OK for them to make love?

According to Haidt: “Most people who hear the above story immediately say that it was wrong for the siblings to make love […]”. Yet most people also have a hard time explaining this wrongness, given that the risks of inbreeding are rendered moot in the thought experiment. But they still insist it is wrong. An obvious interpretation to make, then, is that evolution has hammered the lesson “sex between close relatives is wrong” into the core of our moral judgments. And given the maladaptive outcomes of human inbreeding, such an intuition would indeed make a lot of evolutionary sense. Indeed, in that context, given the high risk of harm, it even makes ethical sense. Yet in a modern context in which birth control has been invented and is employed, the intuition suddenly seems on less firm ground, at least ethically.

(It should be noted that the deeper point of Haidt’s paper cited above is to argue that “[…] moral reasoning is usually a post hoc construction, generated after a judgment has been reached.” And while it seems difficult to deny that there is a significant grain of truth to this, Haidt’s thesis has also been met with criticism.)

Moral Intuitions About Death: Biologically Contingent

With this idea in the back of our heads — that evolution has most likely shaped our moral intuitions significantly, and that we should perhaps not be that surprised if these intuitions are often difficult to defend within the realm of normative ethics — let us now proceed to look at the concrete issue of death. Yet before we look at the notional “human view of death”, it is perhaps worth first surveying some other species whose members are unlikely to view death in remotely the same way as we do, to see just how biologically contingent our view of death probably is.

For example, individuals belonging to species that practice sexual cannibalism — i.e. where the female eats the male prior to, during, or after copulation — seem most unlikely to view dying in this manner in remotely the same way as we humans would. Indeed, they might even find pleasure in it, both male and female (although in many cases, the male probably does not, especially when he is eaten prior to copulation, since it is not in his reproductive interest, which likely renders it yet another instance of the horrors of nature).

The same can likely be said of species that practice so-called matriphagy, i.e. where the offspring eat their own mother, sometimes while she is still alive. This behavior is also, at least in many cases, evolutionarily adaptive, and hence seems unlikely to be viewed as harmful by the mother (or at least the analogue of “viewed as harmful” found in the minds of these creatures). There may, of course, be many exceptions — cases in which the mother does indeed find herself harmed by, and disapproving of, the act. Yet it nonetheless seems clear that the beings who have evolved to practice this behavior do not view such a death in remotely the same way as a human mother would if her children suddenly started eating her alive.

The final example I wish to consider here is the practice of so-called filial cannibalism: when parents eat their own offspring. This practice is much more common, in terms of the number of species that practice it, compared to the other forms of cannibalism mentioned above, and also a clearer case of convergent evolution, as the species that practice it range from insects to mammals, including some cats, primates, birds, amphibians, fish (where it is especially prevalent), snails, and spiders. Again, we should expect individuals belonging to these species to view deaths of this kind very differently from the way we humans would view such deaths, which are bizarre by any human standard. This is not to say that the younglings who are eaten do not suffer a great deal in these cases. They likely often do, as being eaten is often not in their reproductive interests (in terms of propagating their genes), although it may be in the case of some species: if it increases the reproductive success of their parents and/or siblings to a sufficient degree.

The deeper point, again, is that beings who belong to these species are unlikely to feel remotely the same way about these deaths as we humans would if such deaths were to occur within the human realm — i.e. if human parents ate their own children. And more generally: that the evolutionary history of a species greatly influences how it feels about deaths of various kinds, as well as how it views death in general.

Naturally, Most Beings Care Little About Most Deaths

It seems plausible to say that, in most animal species, individuals do not care in the least about the death of unrelated individuals within their own species. And we should not be too starry-eyed about humans in this regard either, as it is not clear that we humans, historically, have cared much for people whom we did not view as belonging to our in-group, as the cruelties of history, as well as modern-day tribalism, testify. Only in recent times, it seems, have we in some parts of the world made all of humanity our in-group. Not all sentient beings, sadly, but not merely our own family or ethnic group either, fortunately.

So, both looking at other species, as well as across human history, we see that there appears to be a wide variety of views and intuitions about different kinds of deaths, and how “problematic” or harmful they are. Yet one regard in which there is much less disagreement is when it comes to “the human view of death”. Or more precisely: the natural moral intuitions humans have with respect to the death of someone in the in-group. And I would suspect this particular view to strongly influence — and indeed be the main template for — any human attempt to carve out a well-reasoned and general view of the moral status of death (of any morally relevant being). If this is true, it would seem relevant to zoom in on how we humans naturally view such an in-group death, and why.

The Human View of an In-group Death

So what is the human view of the death of someone belonging to our own group? In short: that it is tragic and something worth avoiding at great costs. And if we put on our evolutionary glasses, it seems easy to make sense of why we would be naturally inclined to think this: for most of our evolutionary history, we humans have lived in groups in which individuals collaborated in ways that benefitted the entire group.

In other words, the ability of any given human individual to survive and reproduce has depended significantly on the efforts of fellow group members, which means that the death of such a fellow group member would be very costly, in biological terms, to other individuals in that group. Something that is worth investing a lot to prevent for these other individuals. Something evolution would not allow them to be indifferent about in the least, much less happy about.

This may help resolve some puzzles. For example, many of us claim to hold a purely sentiocentric ethical view according to which consciousness is the sole currency of moral value: the presence and absence of consciousness, as well as its character, is what matters. Yet most people who claim to hold such a view, including myself, nonetheless tend to view dreamless sleep and death very differently, although both ultimately amount to an absence of conscious experience just the same. If the duration of the conscious experience of someone we care about is reduced by an early death, we consider this tragic. Yet if the duration of their conscious experience is instead reduced by dreamless sleep, we do not, for the most part, consider this tragic at all. On the contrary, we might even be quite pleased about it. We wish sound, deep sleep for our friends and family, and often view such sleep as something that is well-deserved and of great value.

On the view that the presence and absence of consciousness, as well as the quality of this consciousness, is all that matters, this evaluation makes little sense (provided we keep other things equal in our thought experiment: the quality of the conscious life is, when it is present, the same whether its duration is reduced by sleep or early death). Yet from an evolutionary perspective, it makes perfect sense why we would not only evaluate these two things differently, but indeed in completely opposite ways. For if a fellow group member is sleeping, then this is good for the rest of the group, as sleep is generally an investment that improves a person’s contribution to the group. Yet if the person is dead, they will no longer be able to contribute to the group. And if they are family, they will no longer be able to propagate the genes of the family. From a biological perspective, this is very sad.

(The hypothesis sketched out above — that our finding the death of an in-group member sad and worth avoiding at great costs is in large part due to their contribution to the success of our group, and ultimately our genes — would seem to yield a prediction: we should find the death of a young person who is able to contribute a lot to the group significantly more sad and worth avoiding compared to the death of an old person who is not able to contribute. And this is even more true if the person is also a relative, since the young person would have the potential to spread family genes, whereas a sufficiently old person would not.)


So what follows in light of these considerations about our “natural” view of the death of an in-group member? I would be hesitant to draw strong conclusions from such considerations alone. Yet it seems to me that they do, at the very least, give us reason to be skeptical with respect to our immediate moral intuitions about death (indeed, I would argue that we should be skeptical of our immediate moral intuitions in general). With respect to the great asymmetry in our evaluation of the ethical status of dreamless sleep versus death, two main responses seem available if one is seeking to make a pure sentiocentric position consistent (to take that fairly popular ethical view as an example).

Either one can view conscious life reduced by sleep as being significantly worse, intrinsically, than what we intuitively evaluate it to be (classical utilitarians may choose to adopt this view, which could, in practice, imply that one should work on a cure for sleep, or at least on reducing sleep in a way that keeps quality of life intact). Or, one can view conscious life reduced by an early death as being significantly less bad, again intrinsically, than our moral intuitions hold. (One can, of course, also opt for a middle road that maintains that we both intuitively underestimate the intrinsic badness of sleep and overestimate the intrinsic badness of death, and that we should bring our respective evaluations of these two together to meet somewhere in the middle.)

I favor the latter view: that we strongly overestimate the intrinsic badness of death, which is, of course, an extremely unpalatable view to our natural intuitions, including my own. Yet it must also be emphasized that the word “intrinsically” is extremely important here. For I would indeed argue that death is bad, and that we should generally view it as such. But I believe this badness is extrinsic rather than intrinsic: death generally has bad consequences for sentient beings, including that the process of dying itself tends to involve a lot of suffering (and I would view this suffering as intrinsically bad, yet not the end of the life per se). Furthermore, I would argue that we should consider death a bad and harmful thing (as I indeed do) not just because this belief is accurate, but also because not doing so has bad consequences.

An Ethic of Survival

With respect to ethics and death, I recently encountered an interesting perspective in an exchange with Robert Daoust. He suggested, as I understood him, that the fundamental debate in ethics is ultimately one between an ethic of survival on the one hand, and an ethic of concern for sentience on the other. And he further noted that, even when we sincerely believe that we subscribe to the latter, we often in fact support the survivalist ethic, for strong evolutionary reasons: a view according to which, even if life is significantly dominated by suffering, survival should still be our highest goal.

I find this view of Daoust’s interesting, and I certainly recognize strong survivalist intuitions in myself, even as I claim to hold, and publicly defend, values focused primarily on the reduction of suffering. And one can reasonably wonder what the considerations surveyed above, as well as similar considerations about the priorities and motives that evolution has naturally instilled in us, imply for our evaluation of such a (perhaps tacitly shared) survivalist ethic.

I would tentatively suggest that they imply we should view this survivalist ethic with skepticism. We should expect evolution to have given us a strong urge for survival at virtually any cost, and to view survival — if not of our own individual bodies, then at least of our own group and bloodline — as being intrinsically important; arguably even the most important thing of all. Yet I would argue that this is an implausible ethical view. Specifically, to accept continued survival at virtually any cost, including the cost of increasing the net amount of extreme suffering in the world, is, I would argue, highly implausible. Beyond that, one can argue that we, for evolutionary reasons, also wildly overestimate the ethical badness of an empty world, and grossly misjudge the value of the absence of sentience. Indeed, on a pure sentiocentric view, such an absence is just as good as deep, dreamless sleep. And what is so bad about that?

A Brief Note on Eternalism and Impacting the Future

Something I find puzzling is that many people in intellectual circles seem to embrace the so-called eternalist view of time, which holds that the past, present, and future all equally exist already, yet at the same time, in terms of practical ethics, these same people focus exclusively on impacting the future. These two positions do not seem compatible, and it is interesting that no one seems to take note of this, and that no attempt seems to be made at reconciling them, or otherwise examining this issue. 

For why, given an eternalist view of time, should one focus on impacting the future rather than the past? After all, eternalism amounts precisely to a rejection of the common sense view that the past is fixed while the future is open — the very view of time that seems to underpin our common sense focus on trying to impact the future rather than the past. So how can one reject the view of time that seems to underlie our practical focus, yet still maintain that focus? If the past and the future equally exist already, why focus more on trying to impact one rather than the other?

The only attempted reply I have heard to this question so far, which came from Brian Tomasik, is that if, hypothetically, the present were different, then the future would be different, and hence it makes sense to focus on such changes that would render the future different. The problem, however, is that the same argument applies to the past: if, hypothetically, the present were different, then, for the equations of physics to be consistent, the past would also have to be different. Tomasik seemed to agree with this point. So I fail to see how this is an argument for focusing on impacting the future rather than the past given an eternalist view of time.

Possible Responses

There are various ways to respond to this conundrum. One can, for instance, try to argue that there is no conflict between eternalism and focusing only on impacting the future (which seems the prevailing assumption, but I have yet to see it defended). Another path one could take is to argue that we in fact should focus on impacting the past just as much as the future (something I find highly dubious). Alternatively, one could argue that it is just as senseless to try to change the future as it is to change the past (something few would be willing to accept in practice). Lastly, one could take the tension between these two widely esteemed views to imply that there may be something wrong with the eternalist view of time, and at the very least that we should lower our credence in eternalism, given its ostensible incompatibility with other, seemingly reasonable beliefs.

My Preferred Path: Questioning Eternalism

I would be curious to see attempts along any of the four paths mentioned above. I myself happen to lean toward the last one. I think many people display overconfidence with respect to the truth of eternalism. The fact that the equations of the theory of relativity, as they stand, do not demand an ontologically existing “now” does not imply that no such thing exists (where this now, it must be noted, is not defined by “all clocks showing the same time”, as such a now clearly is impossible; yet there is no contradiction whatsoever in the existence of a unique, ontologically real “present” in which initially synchronized clocks show different times). In other words, although the equations of relativity do not demand the existence of such a now, they do not rule it out either. Yet it seems a widely entertained fallacy that they do, and people thus seem to accept the eternalist view as though it were a matter of logical certainty, when it is not. I think this is bad philosophy. And I think it is important to point this out, since false certainties can be dangerous in unexpected ways (for example, if the above-mentioned fallacy led us to falsely conclude that trying to impact the future is senseless).

Beyond that, as I have noted elsewhere, one can also question to what extent it makes sense to say — as eternalists often do, and as the name eternalism itself implies — that all moments exist “always”. After all, doesn’t “always” refer to something occurring over time? The meaning of claims of the sort that “every moment exists always” is, I believe, less obvious than proponents of eternalism appear to think, and seems in need of unpacking.

A General Note on Our Worldview

I think the tension explored here speaks to a more general point about our worldview, namely that we often do not derive the more practical views we hold (such as the view that we can influence the future but not the past) from our fundamental ontological theories of how the world works. Instead, such views are often derived mostly from tacit common sense notions and intuitions (which is not to say that these views should necessarily be rejected, and certainly not on this ground alone). This means that sometimes — quite often, in fact — the views we hold on various subjects, such as the philosophy of time and practical ethics, are scarcely compatible. The project of bringing the various beliefs we hold across these different areas into harmony is, I believe, an important and potentially fruitful one, both for our theoretical views in themselves and for our practical efforts to act reasonably in the world.

The Endeavor of Reason

“[…] some hope a divine leader with prophetic voice
Will rise amid the gazing silent ranks.
An idle thought! There’s none to lead but reason,
To point the morning and the evening ways.”

— Abu al-ʿAlaʾ al-Maʿarri


What is reason?

One could perhaps say that answering this question itself falls within the purview of reason. But I would simply define reason as the capacity of our minds to decide or assess what makes the most sense, or seems most reasonable, all things considered.

This seems well in line with other definitions of reason. For instance, Google defines reason as “the power of the mind to think, understand, and form judgements logically”, and Merriam-Webster gives the following definitions:

(1) the power of comprehending, inferring, or thinking[,] especially in orderly rational ways […] (2) proper exercise of the mind […]

These definitions all seem to raise the further question of what terms like “logically”, “orderly rational ways”, and “proper” then mean in this context.

Indeed, one may accuse all these definitions of being circular, as they merely seem to deflect the burden of defining reason by referring to some other notion that ultimately just appears synonymous with, and hence does not reductively define, reason. This would also seem to apply to the definition I gave above: “the ability to decide or assess what seems most reasonable all things considered”. For what does it mean for something to “seem most reasonable”?

Yet the open-endedness of this definition does not, I submit, render it useless or empty by any means, any more than defining science in open-ended terms such as “the attempt to discover what is true about the world” renders this definition useless or empty.

Reason: The Core Value of Universities and the Enlightenment

At the level of ideals, working out what seems most reasonable all things considered is arguably the core goal of both the Enlightenment and of universities. For instance, ideally, universities are not committed to a particular ethical view (say, utilitarianism or deontology), nor to a particular view of what is true about the world (say, string theory or loop quantum gravity, or indeed physicalism in general).

Rather, universities seem to have a more fundamental and less preconceived commitment, at least in the ideal, which is to find out which particular views, if any, seem the most plausible in the first place. This means that all views can be questioned, and that one has to provide reasons if one wants one’s view to be considered plausible.

And it is important to note in this context that “plausible” is a broader term than “probable”, in that the latter pertains only to matters of truth, whereas the former covers this and more. That is, plausibility can also be assigned to views, for instance ethical views, that we do not view as strictly true, yet which we find plausible nonetheless (as in: they seem agreeable or reasonable to us).

For this very reason, it would also be problematic to view the fundamental role of universities as (only) being the uncovering of what is true, as such a commitment may assume too much in many important and disputed academic discussions, such as those about ethics and epistemology, where the question of whether there indeed are truths in the first place, and in what sense, is among the central questions that are to be examined by reason. Yet in this case too, the core commitment remains: a commitment to being reasonable. To try to assess and follow what seems most reasonable all things considered.

This is arguably also the core value of the Enlightenment. At least that seems to be what Immanuel Kant argued for in his essay “What Is Enlightenment?”, in which he further argued that free inquiry — i.e. the freedom to publicly exercise our capacity for reason — is the only prerequisite for enlightenment:

This enlightenment requires nothing but freedom—and the most innocent of all that may be called “freedom”: freedom to make public use of one’s reason in all matters.

And the view that reason should be our core commitment and guide of course dates much further back historically than the Enlightenment. Among the earliest and most prominent advocates of this view was Aristotle, who viewed a life lived in accordance with reason as the highest good.

Yet who is to say that what we find most plausible or reasonable is something we will necessarily be able to converge upon? This question itself can be considered an open one for reasoned inquiry to examine and settle. Kant, for instance, believed that we would all be able to agree if we reasoned correctly, and hence that reason is universal and accessible to all of us.

And interestingly, if one wants to make a universally compelling case against this view of Kant’s, it seems that one has to assume at least some degree of the universality that Kant claimed to exist. And hence it seems difficult, not to say impossible, to make such a case, and to deny that at least some aspects of reason are universal.

Being Reasonable: The Only Reasonable Starting Point?

One can even argue that it is impossible to make a case against reason in general. For as Steven Pinker notes:

As soon as we are having this conversation, as long as we are trying to persuade one another of why you should do something or should believe something, you are already committed to reason. We are not engaged in a fist fight, we are not bribing each other to believe something. We are trying to provide reasons. We are trying to persuade, to convince. As long as you are doing that in the first place — you are not hitting someone with a chair, or putting a gun to their head, or bribing them to believe something — you have lost any argument you have against reason; you have already signed on to reason, whether you like it or not. So the fact that we are having this conversation shows that we are committed to reason. That is the starting point.

Indeed, it seems that any effort to make a reasonable case against reason would have to rest on the very thing it attempts to question, namely our capacity to decide or assess what seems most reasonable all things considered. Thus, almost by definition, it seems impossible to identify a reasonable alternative to the endeavor of reason.

Some might argue that reason itself is unjustified, and that we have to have faith in reason, which then supposedly implies that a dedication to reason is ultimately no more reasonable or solid than is faith in anything whatsoever. Yet this is not the case.

For to say that reason needs justification is not to question reason, but rather to presuppose it, since the arena in which we are expected to provide reasons for what we believe is the arena of reason itself. Thus, if we accept that justification for any given belief is required, then we have already signed on to reason, whereby we have also rejected faith — the idea that justification for some given belief is not required. Again, in trying to provide a justification for reason, or, for that matter, in trying to provide a justification for not accepting reason, one is already committed to the endeavor of reason: the endeavor of deciding or assessing what seems most reasonable, i.e. most justified, all things considered.

And what reasonable alternative could there possibly be to this endeavor? Which other endeavor could a reasoning agent reasonably choose to pursue? None, it seems to me. Universally, all reasoning agents seem bound to conclude that they have this imperative of reason: that they ought to do what seems most reasonable all things considered. That reason, in this sense, is the highest calling of such agents. Anything else would be contrary to what their own reasoning tells them, and hence unreasonable — by their own accounts.

It Seems Reasonable: The Bedrock Foundation of Reasonable Beliefs

The idea that reason demands justification for any given belief may seem problematic, as it gives rise to the so-called Münchhausen trilemma: what can ultimately justify our beliefs — a circular chain of justifications, an infinite chain, or a finite chain (or web) with brute facts at bottom? Supposedly, none of these options are appealing. Yet I disagree.

For I see nothing problematic about having a brute observation, or reason, at the bottom of our chain of justification, which I would indeed argue is exactly what constitutes, and all that ever could constitute, the rock-bottom justification for any reasonable belief. Specifically, that it just seems reasonable.

Many discussions go wrong here by conflating 1) ungrounded assumptions and 2) brute observations, which are by no means the same. For there is clearly a difference between believing that a car just drove by you based on the brute observation, i.e. the conscious sensation, that a car just drove by you, and merely assuming, without grounding in any reason or observation, that a car just drove by you.

Or consider another example: the fundamental constants in our physical equations. We ultimately have no deeper justification for the values of these constants than brute observation. Yet this clearly does not render our knowledge of these values merely assumed, much less arbitrarily or unjustifiably chosen. This is not to say that our observations of these values are infallible; future measurements may well yield slightly different, more precise values. Yet they are not arbitrary or unjustified.

The idea that brute observation cannot constitute a reasonable justification for a belief is, along with the idea that brute assumptions and brute observations are the same, a deeply misguided one, in my view. And this is not only true, I contend, of factual matters, but of all matters of reason, including ethics and epistemology, whether we deem these fields strictly factual or not. For instance, my own ethical view (which I have argued is a universal one), according to which suffering is disvaluable and ought to be reduced, does not, on my account, rest on a mere assumption. Rather, it rests on a brute observation of the undeniable intrinsic disvalue of the conscious states we call suffering. I have no deeper justification than this, nor is a deeper one required or even possible.

As I have argued elsewhere, such a foundationalist account is, I submit, the solution to the Münchhausen trilemma.

Deniers of Reason

If reason is the only reasonable starting point, why, then, do so many seem to challenge and reject it? There are a few things to say in response to this. First, those who criticize and argue against reason are not really, as I have argued above, criticizing reason, at least not in the general sense I have defined it here (since to criticize reason is to engage in it). Rather, they are, at most, criticizing a particular conception of reason, and that can, of course, be perfectly reasonable (I myself would criticize prevalent conceptions of reason as being much too narrow).

Second, there are indeed those who do not criticize reason, but who simply reject it, at least in some respects. These are people who refuse to join the conversation Steven Pinker referred to above; people who refuse to provide reasons, and who instead engage in forceful methods, such as silencing or extorting others, violently or otherwise. Examples include people who believe in some political ideology or religion, and who choose to suppress, or indeed kill, those who express views that challenge their own. Yet such actions do not pose a reasonable or compelling challenge to reason, nor can they be considered a reasonable alternative to the endeavor of reason.

As for why people choose to engage in such actions and refuse to engage in reason, one can also say a few things. First of all, the ability to engage in reason seems to require a great deal of learning and discipline, and not all of us are fortunate enough to have received the schooling and discipline required. And even then, even when we do have these things, engaging in reason is still an active choice that we can fail to make.

That is, doing what we find most reasonable is not an automatic, reflexive process, but rather a deliberate volitional one. It is clearly possible, for example, to act against one’s own better judgment. To go with seductive impulse and temptation — e.g. for sex, a cigarette, or social status — rather than what seems most reasonable, even to ourselves in the moment of weakness.

Reason Broadly and Developmentally Construed

The conception of reason I have outlined here is, it should be noted, not a narrow one. It is not committed to any particular ontological position, nor is it purely cerebral, in the sense of being restricted to merely weighing verbal or mathematical arguments. Instead, it is open to questioning everything, and takes input from all sources.

Nor would I be tempted to argue that we humans have some single, immutable faculty of reason that is infallible. Quite the contrary. Our assessments of what seems most reasonable in various domains rest on a wide variety of faculties and experiences, virtually none of which are purely innate. Indeed, these faculties, as well as our range of experience, can be continually expanded and developed as we learn more, both individually and collectively.

In this way, reason, as I conceive of it, is not only extremely broad but also extremely open-ended. It is not static, but rather self-regulating and self-updating, as when we realize that our thinking is tendentious and biased in many ways, and that our motives might not be what we (would like to) think they are. In this way, our capacity for reasoning has taught itself that it should be self-skeptical.

Yet this by no means licenses pure skepticism. After all, our discovery of these tendencies is itself a testament to the power of our capacity to reason. Rather than completely undermine our trust in this capacity, discoveries of this kind simultaneously show both the enormous weakness and strength of our minds: how wrong we can be when we are not careful to try to be reasonable, and how much better informed we can become if we are. Such facts do not comprise a case against employing our capacity to reason, but rather a case for even more, even more careful employments of this capacity of ours.

Conclusion: A Call for Reason

As noted above, the endeavor of reason is not one that we pursue automatically. It takes a deliberate choice. In order to be able to assess and decide what seems most reasonable all things considered, one must first make an active effort to learn as much as one can about the nature of the world, and then consider the implications carefully.

What I have argued here is that there is no reasonable alternative to doing this; not that there is no possible alternative. For one can surely suspend reason and embrace blind faith, as many religious people do, or embrace unreasoned, incoherent, and self-refuting claims about reality, as many postmodernists do. Or one can go with whatever seems most pleasurable in the moment rather than what seems most reasonable all things considered, as we all do all too often. Yet one cannot reasonably choose such a suspension of reason. Indeed, merely not actively denying reason is not enough. The only reasonable choice, it seems, is to consciously choose to pursue the endeavor of reason.

In sum, I would join Aristotle in viewing reason, broadly construed, as our highest calling. That following what seems most reasonable all things considered is the best, most sensible choice before us. And hence that this is a choice we should all actively make.



The (Non-)Problem of Induction

David Hume claimed that it is:

[…] impossible for us to satisfy ourselves by our reason, why we should extend that experience beyond those particular instances, which have fallen under our observation. We suppose, but are never able to prove, that there must be a resemblance betwixt those objects, of which we have had experience, and those which lie beyond the reach of our discovery.

And this then gives rise to the problem of induction: how can we defend assuming the so-called uniformity of nature that we take to exist when we generalize our limited experience to that which lies “beyond the reach of our discovery”? For instance, how can we justify our belief that the world of tomorrow will, at least in many ways, resemble the world of yesterday? Indeed, how can we justify believing that there will be a tomorrow at all?

A thing worth highlighting in response to this problem is that, even if we were to assume that we have no justification for believing in such uniformity of nature, this would not imply, as may perhaps seem natural to suppose, that we thereby have justification for believing the opposite: that there is no uniformity of nature. After all, to say that the patterns we have observed so far do not predict anything about states and events elsewhere would also amount to a claim about that which lies “beyond the reach of our discovery”, and so this claim seems to face the same problem.

The claims 1) “there is a certain uniformity of nature” and 2) “there is no uniformity of nature” are both hypotheses about the world. And if we look at the limited part of the world about which we do have some knowledge, it is clear that 1) is true about it: patterns at one point in (known parts of) time and space do indeed predict a lot about patterns observed elsewhere.

Does this then mean that the same will hold true of the part of the world that lies beyond the reach of our discovery? One can reasonably argue that we do not have complete certainty that it will (indeed, one can reasonably argue that we should not have complete certainty about any claim our fallible mind happens to entertain). Yet if we reason as scientists — probabilistically, endeavoring to build the picture of the world that seems most plausible in light of all the available evidence — then it does indeed seem justifiable to say that hypothesis 1) seems much more likely to be true of that which lies “beyond the reach of our discovery” than does hypothesis 2) [not least because to say that hypothesis 2) holds true would amount to assuming an extraordinary uniqueness of the observed compared to the unobserved, whereas believing hypothesis 1) merely amounts to not assuming such an extraordinary uniqueness].
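The probabilistic mode of reasoning invoked above can be made concrete with a toy Bayesian comparison of the two hypotheses. This is only an illustrative sketch: the priors and likelihoods below are assumptions chosen for demonstration, not claims about the actual evidential weights:

```python
# Toy Bayesian comparison of the two hypotheses about nature's uniformity.
# H1: "some uniformity" -- past patterns predict new observations well.
# H2: "no uniformity"  -- past patterns carry no predictive information.
# All numbers here are illustrative assumptions.

prior_h1 = 0.5  # start with no preference between the hypotheses
prior_h2 = 0.5

# Suppose each new observation matches previously observed patterns.
# Under H1 such a match is likely; under H2 any particular outcome
# is just one among many equally likely ones, so a match is unlikely.
p_match_given_h1 = 0.9
p_match_given_h2 = 0.1

n_observations = 10  # ten consecutive pattern-matching observations

post_h1, post_h2 = prior_h1, prior_h2
for _ in range(n_observations):
    # Bayes' rule: multiply by the likelihood, then renormalize.
    post_h1 *= p_match_given_h1
    post_h2 *= p_match_given_h2
    total = post_h1 + post_h2
    post_h1, post_h2 = post_h1 / total, post_h2 / total

print(post_h1)  # posterior for H1 rapidly approaches 1
```

Under these (assumed) numbers, each pattern-matching observation shifts credence further toward hypothesis 1, which mirrors the point in the text: no deductive proof is given, yet the inference toward uniformity is justified in the ordinary probabilistic sense.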

And if we think in this way — in terms of competing hypotheses — then Hume’s problem of induction suddenly seems rather vacuous. “You cannot prove that any given hypothesis of this kind is correct.” This seems true (although the fact that we have not found such a proof yet does not imply that one cannot be found), but also quite irrelevant, since a deductive proof is not required in order for us to draw reasonable inferences. To say that we have no purely deductive argument for a given conclusion is not the same as saying that we have no justification for believing it (and if one thinks that it is, then one is also committed to the belief that we have no justification for believing, based on previous experience, that the problem of induction also exists in this very moment; more on this below).

Applying Hume’s Claim to Itself

According to Hume’s quote above, the belief that we can make generalizations based on particular instances can never be “satisfied by our reason”. The problem, however, is that, according to our modern understanding of the world in physical terms, all we ever can generalize from, including when we make deductive inferences, is particular instances — particular spatiotemporally located states and processes found in our brains (equivalently, one could also say that all we can ever generalize from, as knowing subjects, are particular states of our own minds).

Thus, Hume’s statement that we can never prove such generalizations must also apply to itself, as it is itself a general claim based on a particular instance of reasoning taking place in Hume’s head in a particular place and time (indeed, Hume’s claim would appear to pertain to all generalizations).

So what justification could Hume possibly provide for this general claim of his? According to the claim itself, no proof can be given for it. Indeed, if Hume could provide a proof for his claim that it is impossible to find a proof for the validity of generalizations based on particular instances, then he would have falsified his own claim, as such a proof is the very thing that the claim holds not to exist. And such an alleged proof would thereby also undermine itself, as what it supposedly shows is its own non-existence.

This demonstrates that Hume’s claim is unprovable. That is, based on this particular instance of reasoning, we can draw the general conclusion that we will never be able to provide a proof for Hume’s claim. And thereby we have in fact proven Hume’s claim wrong, as we have thus provided a proof for a general claim that also pertains to that which lies beyond the reach of our discovery. Nowhere, neither in the realm of the discovered nor the undiscovered, can a proof for Hume’s claim be found.

So we clearly can prove some general claims about that which lies beyond the reach of our experience based on particular instances (of processes in our brains, say), and hence the claim that we cannot is simply wrong.


Yet one may object that this conclusion does not contradict what Hume in fact meant when he claimed that we cannot prove the validity of generalizations based on particular instances, since what he meant was rather that we cannot prove the validity of inductive generalizations such as “we have observed X so far, hence X will also be the case in the next instance/in general” — i.e. generalizations whose generality seems impossible to prove.

The problem, however, is that we can also turn this claim on itself, and indeed turn the problem of induction altogether on itself, as we did in a parenthetical statement above: the mere fact that we have not been able to prove the validity of any inductive claims of this sort so far does not imply that such a proof can never be found. In particular, the claim that we cannot prove the validity of any such inductive claim that seems impossible to prove is itself an inductive claim whose generality seems impossible to prove (i.e. it seems to rest on the argument: “we have not been able to prove the validity of any inductive claim of this nature so far, and hence we cannot[/we will never be able to] prove the validity of such a claim”).

And if we accept that this claim, the very claim that gives rise to the problem of induction, is itself a plausible claim that we have good reason to accept in general (or at least just good reason to believe that it will apply in the next moment), then we indeed do believe that we can have good reason to draw (at least some plausible) non-deductive generalizations based on particular instances, which is the very thing Hume’s argument is often believed to cast doubt upon. In other words, in order to even believe that there is a problem of induction in the first place, one must already assume that which this problem is supposed to question and be a problem for.

Indeed, one can make an argument along these lines that it is in fact impossible to give a coherent argument against (the overwhelming plausibility of at least some degree of) the uniformity of nature. For in order to even state an argument or doubt against it, one is bound to rely thoroughly on the very thing one is trying to question. For instance, that words will still mean the same in the next moment as they did in the previous one; that the argument one thought of in the previous moment still applies in the next one; that the problem one was trying to address in the previous moment still exists in the next; etc.

Thus, it actually seems impossible to reasonably, indeed even coherently, doubt that the world has at least some degree of uniformity, which itself seems to constitute a good argument and reason for believing in such uniformity. After all, that something cannot reasonably be doubted, or indeed doubted at all, usually seems a more than satisfying standard for believing it.

So to reiterate: If one thinks we have good reason to take the problem of induction seriously, or indeed just to believe that this problem still exists in this moment (since it has in previous ones), then one also thinks that we do have good reason to make (at least some plausible) non-deductive generalizations about that which lies “beyond the reach of our discovery” based on particular instances. In other words, if one takes the problem of induction seriously, then one does not take the problem of induction seriously at all.


How to then draw the most plausible inferences about that which “lies beyond the reach of our discovery” is, of course, far from trivial. Yet we should be clear that this is a separate matter entirely from whether we can draw such plausible inferences at all. And as I have attempted to argue here, we have absolutely no reason to think that we cannot, and good reason to think that we can.

“The Physical” and Consciousness: One World Conforming to Different Descriptions

My aim in this essay is to briefly explain a crucial aspect of David Pearce’s physicalist idealist worldview. In particular, I seek to explain how a view can be both “idealist” and “physicalist”, yet still be a “property monist” view.

Pearce himself describes his view in the following way:

“Physicalistic idealism” is the non-materialist physicalist claim that reality is fundamentally experiential and that the natural world is exhaustively described by the equations of physics and their solutions […]

So Pearce’s view is a monist, idealist view: reality is fundamentally experiential. And this reality also conforms to description in physical terms. Pearce is careful, however, to distinguish this view from panpsychism, which Pearce, in contrast to his own idealist view, considers a property dualist view:

“Panpsychism” is the doctrine that the world’s fundamental physical stuff also has primitive experiential properties. Unlike the physicalistic idealism explored here, panpsychism doesn’t claim that the world’s fundamental physical stuff is experiential. Panpsychism is best treated as a form of property-dualism.

How, one may wonder, is Pearce’s view different from panpsychism, and from property dualist views more generally? This is something I myself have struggled a lot to understand, and have asked Pearce about repeatedly. And my understanding is the following: according to Pearce, there is only consciousness, and its dynamics conform to physical description. Property dualist views, in contrast, view the world as having two properties: the stuff of the world has insentient physical properties to which separate, experiential properties are somehow attached.

Pearce’s view makes no such division. Instead, on Pearce’s view, description in physical terms merely constitutes a particular (phenomenal) mode of description that (phenomenal) reality conforms to. So to the extent there is a dualism here, it is epistemological, not ontological.

The Many Properties of Your Right Ear

For an analogy that might help explain this point better, consider your right ear. What properties does it have? Setting aside the question concerning its intrinsic nature, it is clear that you can model it in various ways. One way is to touch it with your fingers, whereby you model it via your faculties of tactile sensation (or in neuroanatomical terms: with neurons in your parietal lobe). You may also represent your ear via auditory sensations, for example by hitting it and noticing what kind of sound it makes (a sensation mediated by the temporal lobe). Another way, perhaps the clearest and most practical way for beings like us, is to model it in terms of visual experience: to look at your right ear in the mirror, or perhaps simply imagine it, and thereby have a visual sensation that represents it (mediated by the occipital lobe).

[For most of us, these different forms of modeling are almost impossible to keep separate, as our touching our ears automatically induces a visual model of them as well, and vice versa: a visual model of an ear will often be accompanied by a sense of what it would be like to touch it. Yet one can in fact come a surprisingly long way toward being able to “unbind” these sensations with a bit of practice. This meditation is a good exercise in detaching one’s tactile sense of one’s hands from one’s visual model of them. This one goes even further, as it climaxes with a near-total dissolution of our automatic binding of different modes of experience into an ordered whole.]

Now, we may ask: which of these modes of modeling constitutes the modeling we call “physical”? The answer is arguably all of them, as they all relate to the manifestly external (“physical”) world. This is unlike, say, things that are manifestly internal, such as emotions and thoughts, which we do not tend to consider “physical” in this same way, even though all our sensations are, of course, equally internal to our mind-brain.

“The physical” is in many ways a poorly defined folk term, and physics itself is not exempt from this ambiguity. For instance, what phenomenal mode does the field of physics draw upon? It is certainly more than just the phenomenology of equations (to the extent this can be considered a separate mode of experience). It also, in close connection with how most of us think about equations, draws heavily on visuospatial modes of experience (I once carefully went through a physics textbook that covered virtually all of undergraduate-level physics with the explicit purpose of checking whether it all conformed to such description, and I found that it did). And we can, of course, also describe your right ear in “physics” terms, for instance by measuring and representing its temperature, its spatial coordinates, its topology, etc. This would give us even more models of your right ear.


The deeper point here is that the same thing can conform to description in different terms, and the existence of such a multitude of valid descriptions does not imply that the thing described itself has a multitude of intrinsic properties. In fact, none of the modes of modeling an ear mentioned above say anything about the intrinsic properties of the ear; they only relate to its reflection, in the broadest sense.

And this is where some people will object: why believe in any intrinsic properties? Indeed, why believe in anything but the physical, “reflective”, (purportedly) non-phenomenal properties described above?

To me, as well as to David Pearce (and Galen Strawson and many others), this latter view is self-undermining and senseless; like a person reading from a book who claims that the paper of the book they are reading from does not exist, only the text does. All the modes of modeling mentioned above, including all that we deem knowledge of “the physical”, are phenomenal. The science we call “physics” is itself, to the extent it is known by anyone, found in consciousness. It is a particular mode of phenomenal modeling of the world, and thus to deny the existence of the phenomenal is also to deny the existence of our knowledge of “physics”.

Indeed, our knowledge of physics and “the physical” attests to this fact as clearly as it attests to anything: consciousness exists. It is a separate question, then, exactly how the varieties of conscious experience relate to descriptions of the world in physical terms, as well as what the intrinsic nature of the stuff of the world is, to the extent it has any. Yet by all appearances, it seems that minds such as our own conform to physical description in terms of what we recognize as brains, and, as with the example of your right ear, such a physical description can take many forms: a visual representation of a mind-brain, what it is like to touch a mind-brain, the number of neurons it has, its temperature, etc.

These are different, yet valid ways of describing aspects of our mind-brains. Yet like the descriptions of different aspects of an ear mentioned above, these “physical” descriptions, while all perfectly valid, still do not tell us anything about the intrinsic nature of the mind-brain. And according to David Pearce, the intrinsic nature of that which we (validly) describe in physical terms as “your brain” is your conscious mind itself. The apparent multitude of aspects of that which we recognize as “brains” and “ears” are just different modes of conscious modeling of an intrinsically monist, i.e. experiential, reality.


The view of consciousness explored here may seem counter-intuitive, yet I have argued elsewhere that using waves as a metaphor can help render it less unintuitive, perhaps even positively intuitive. I also say more in relation to this view of consciousness here.

Resources for Sustainable Activism

“Altruism is a marathon, not a sprint”

— Attributed to Robert Wiblin.


Avoiding burnout should be a high priority for activists, both for their own sake and for the sake of those they advocate for. The following is a short list of resources that I have found useful in this regard myself, and which I have often shared with friends who share my predicament.


Melanie Joy:

- Sustainable Activism
- How Vegans Can Create Healthy Relationships and Communicate Effectively

Jonathan Leighton:

- Thriving in the Age of Factory Farming
- Guided Meditation for Activists

Brian Tomasik:

- Is Utilitarianism Too Demanding?

Michael Bitton:

- Investing in Yourself
