Narrative Self-Deception: The Ultimate Elephant in the Brain?

“the elephant in the brain, n. An important but unacknowledged feature of how our minds work; an introspective taboo.”

The Elephant in the Brain is an informative and well-written book, co-authored by Kevin Simler and Robin Hanson. It explains why much of our behavior is driven by unflattering, hidden motives, as well as why our minds are built to be unaware of these motives. In short: because a mind that is ignorant about what drives it and how it works is often more capable of achieving the aims it was built to achieve.

Beyond that, the book also seeks to apply this knowledge to shed some light on many of our social institutions, showing that they are often not mostly about what we think they are. Rather than being about the high-minded ideals and other pretty things we like to say they are about, our institutions often serve much less pretty, more status-driven purposes, such as showing off in various ways and helping us get by in a tough world (for instance, the authors argue that religion in large part serves to bind communities together, and in this way can help bring about better life outcomes for believers).

All in all, I think The Elephant in the Brain provides a strong case for supplementing one’s mental toolkit with a new, important tool, namely to continuously ask: how might my mind skillfully be avoiding confrontation with ugly truths about myself that I would prefer not to face? And how might such unflattering truths explain aspects of our public institutions and public life in general?

This is an important lesson, I think, and it makes the book more than worth reading. At the same time, I cannot help but feel that the book ultimately falls short when it comes to putting this tool to proper use. For the main critique that came to my mind while reading the book was that it seemed to ignore the biggest elephant in the brain by far — the elephant I suspect we would all prefer to ignore the most — and hence it failed, in my view, to take a truly deep and courageous look at the human condition. In fact, the book even seemed to be a mouthpiece for this great elephant.

The great elephant I have in mind here is a tacitly embraced sentiment that goes something like: life is great, and we are accomplishing something worthwhile. As the authors write: “[…] life, for most of us, is pretty good.” (p. 11). And they end the book on a similar note:

In the end, our motives were less important than what we managed to achieve by them. We may be competitive social animals, self-interested and self-deceived, but we cooperated our way to the god-damned moon.

This seems to implicitly assume that what humans have managed to achieve, such as cooperating (i.e. two superpowers with nuclear weapons pointed at each other competing) their way to the moon, has been worthwhile all things considered. Might this, however, be a flippant elephant talking — rather than, say, a conclusion derived via a serious, scholarly analysis of our condition?

As a meta-observation, I would note that the fact that people often get offended and become defensive when one even just questions the value of our condition — and sometimes also accuse the one raising the question of having a mental illness — suggests that we may indeed be disturbing a great elephant here: something we would strongly prefer not to think too deeply about. (For the record, with respect to mental health, I think one can be among the happiest, most mentally healthy people on the planet and still think that a sober examination of the value of our condition yields a negative answer, although it may require some disciplined resistance against the pulls of a strong elephant.)

It is important to note here that one should not confuse the cynicism required for honest exploration of the human condition with misanthropy, as Simler and Hanson themselves are careful to point out:

The line between cynicism and misanthropy—between thinking ill of human motives and thinking ill of humans—is often blurry. So we want readers to understand that although we may often be skeptical of human motives, we love human beings. (Indeed, many of our best friends are human!) […] All in all, we doubt an honest exploration will detract much from our affection for [humans]. (p. 13)

Similarly, an honest and hard-nosed effort to assess the value of human life and the human endeavor need not lead us to have any less affection and compassion for humans. Indeed, it might lead us to have much more of both in many ways.

Is Life “Pretty Good”?

With respect to Simler and Hanson’s claim that “[…] life, for most of us, is pretty good”, it can be disputed that this is indeed the case. According to the 2017 World Happiness Report, a significant plurality of people rated their life satisfaction at five on a scale from zero to ten, which arguably does not translate to being “pretty good”. Indeed, one can argue that the scale employed in this report is biased, in that it does not allow for a negative evaluation of life. And one may further argue that if this scale instead ranged from minus five to plus five (i.e. if one transposed this zero-to-ten scale so as to make it symmetrical around zero), it may be that a plurality would rate their lives at zero. That is, after all, where the plurality would lie if one were to make this transposition on the existing data measured along the zero-to-ten scale (although it seems likely that people would have rated their life satisfaction differently if the scale had been constructed in this symmetrical way).
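The transposition described here is a simple shift of the scale’s midpoint to zero. A minimal sketch (the ratings below are hypothetical illustrations, not the report’s actual data):

```python
# Transpose a 0-to-10 life-satisfaction scale so that it is symmetric
# around zero. The ratings below are hypothetical illustrations, not
# actual data from the World Happiness Report.

def transpose(rating: int) -> int:
    """Map a rating on the 0..10 scale to the equivalent -5..+5 scale."""
    return rating - 5

ratings = [0, 3, 5, 7, 10]
print([transpose(r) for r in ratings])  # [-5, -2, 0, 2, 5]
```

Note that the midpoint rating of five, where the plurality lies, maps to exactly zero: a neutral rather than a positive evaluation.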

But even if we were to concede that most people say that their lives are pretty good, one can still reasonably question whether most people’s lives indeed are pretty good, and not least reasonably question whether such reports imply that the human condition is worthwhile in a broader sense.

Narrative Self-Deception: Is Life As Good As We Think?

Just as it is possible for us to be wrong about our own motives, as Simler and Hanson convincingly argue, could it be that we can also be wrong about how good our lives are? And, furthermore, could it be that we not only can be wrong but that most of us in fact are wrong about it most of the time? This is indeed what some philosophers argue, seemingly supported by psychological evidence.

One philosopher who has argued along these lines is Thomas Metzinger. In his essay “Suffering”, Metzinger reports on a pilot study he conducted in which students were asked at random times via their cell phones whether they would relive the experience they had just before their phone vibrated. The results were that, on average, students reported that their experience was not worth reliving 72 percent of the time. Metzinger uses this data, which he admits does not count as significant, as a starting point for a discussion of how our grosser narrative about the quality of our lives might be out of touch with the reality of our felt, moment-to-moment experience:

If, on the finest introspective level of phenomenological granularity that is functionally available to it, a self-conscious system would discover too many negatively valenced moments, then this discovery might paralyse it and prevent it from procreating. If the human organism would not repeat most individual conscious moments if it had any choice, then the logic of psychological evolution mandates concealment of the fact from the self-modelling system caught on the hedonic treadmill. It would be an advantage if insights into the deep structure of its own mind – insights of the type just sketched – were not reflected in its conscious self-model too strongly, and if it suffered from a robust version of optimism bias. Perhaps it is exactly the main function of the human self-model’s higher levels to drive the organism continuously forward, to generate a functionally adequate form of self-deception glossing over everyday life’s ugly details by developing a grandiose and unrealistically optimistic inner story – a “narrative self-model” with which we can identify? (pp. 6-7)

Metzinger continues to conjecture that we might be subject to what he calls “narrative self-deception” — a self-distracting strategy that keeps us from getting a realistic view of the quality and prospects of our lives:

[…] a strategy of flexible, dynamic self-representation across a hierarchy of timescales could have a causal effect in continuously remotivating the self-conscious organism, systematically distracting it from the potential insight that the life of an anti-entropic system is one big uphill battle, a strenuous affair with minimal prospect of enduring success. Let us call this speculative hypothesis “narrative self-deception”. (p. 7)

If this holds true, such self-deception would seem to more than satisfy the definition of an elephant in the brain in Simler and Hanson’s sense: “an important but unacknowledged feature of how our minds work; an introspective taboo.”

To paraphrase Metzinger: the mere fact that we find life to be “pretty good” when we evaluate it all from the vantage point of a single moment does not mean that we in fact find most of our experiences “pretty good”, or indeed even worth (re)living most of the time, moment-to-moment. Our single-moment evaluations of the quality of the whole thing may well tend to be gross, self-deceived overestimates.

Another philosopher who makes a similar case is David Benatar, who in his book Better Never to Have Been argues that we tend to overestimate the quality of our lives due to well-documented psychological biases:

The first, most general and most influential of these psychological phenomena is what some have called the Pollyanna Principle, a tendency towards optimism. This manifests in many ways. First, there is an inclination to recall positive rather than negative experiences. For example, when asked to recall events from throughout their lives, subjects in a number of studies listed a much greater number of positive than negative experiences. This selective recall distorts our judgement of how well our lives have gone so far. It is not only assessments of our past that are biased, but also our projections or expectations about the future. We tend to have an exaggerated view of how good things will be. The Pollyannaism typical of recall and projection is also characteristic of subjective judgements about current and overall well-being. Many studies have consistently shown that self-assessments of well-being are markedly skewed toward the positive end of the spectrum. […] Indeed, most people believe that they are better off than most others or than the average person. (pp. 64-66)

Is “Pretty Good” Good Enough?

Beyond doubting whether most people would indeed say that their lives are “pretty good”, and beyond doubting that a single moment’s assessment of one’s quality of life actually reflects this quality particularly well, one can also question whether a life that is rated as “pretty good”, even in the vast majority of moments, is indeed good enough.

This is, for example, not necessarily the case on the so-called tranquilist view of value, according to which our experiences are valuable to the extent that they are free from suffering, and hence that happiness and pleasure are valuable to the extent that they chase suffering away.

Similar to Metzinger’s point about narrative self-deception, one can argue that, if the tranquilist view holds true of how we feel the value of our experience moment-to-moment (upon closer, introspective inspection), we should probably expect to be quite blind to this fact. And it is interesting to note in this context that many of the traditions that have placed the greatest emphasis on paying attention to the nature of subjective experience moment-to-moment, such as Buddhism, have converged toward a view very similar to tranquilism.

Can the Good Lives Outweigh the Bad?

One can also question the value of our condition on a more collective level, by focusing not only on a single (self-reportedly) “pretty good” life but on all individual lives. In particular, we can question whether the good lives of some, indeed even a large majority, can justify the miserable lives of others.

A story that gives many people pause on this question is Ursula K. Le Guin’s The Ones Who Walk Away from Omelas. The story is about a near-paradisiacal city in which everyone lives deeply meaningful and fulfilling lives — that is, everyone except a single child who is locked in a basement room, forced to live a life of squalor:

The child used to scream for help at night, and cry a good deal, but now it only makes a kind of whining, “eh-haa, eh-haa,” and it speaks less and less often. It is so thin there are no calves to its legs; its belly protrudes; it lives on a half-bowl of corn meal and grease a day. It is naked. Its buttocks and thighs are a mass of festered sores, as it sits in its own excrement continually.

The story’s premise is that this child must exist in this condition for the happy people of Omelas to enjoy their wonderful lives, which then raises the question of whether these wonderful lives can in any sense outweigh and justify the miserable life of this single child. Some citizens of Omelas seem to decide that this is not the case: the ones who walk away from Omelas. And many people in the real world seem to agree with this decision.

Sadly, our world is much worse than the city of Omelas on every measure. For example, in the World Happiness Report cited above, around 200 million people reported their quality of life to be in the absolute worst category. If the story of Omelas gives us pause, we should also think twice before claiming that the “pretty good” lives of some people can outweigh the self-reportedly very bad lives of these hundreds of millions of people, many of whom end up committing suicide (and again, it should be remembered that a great plurality of humanity rated their life satisfaction to be exactly in the middle of the scale, while a significant majority rated it in the middle or lower).

Rating of general life satisfaction aside, one can also reasonably question whether anything can outweigh the many instances of extreme suffering that occur every single day, something that can indeed befall anyone, regardless of one’s past self-reported life satisfaction.

Beyond that, one can also question whether the “pretty good” lives of some humans can in any sense outweigh and justify the enormous amount of suffering humanity imposes on non-human animals, including the torturous suffering we subject more than a trillion fish to each year, as well as the suffering we impose upon the tens of billions of chickens and turkeys who live out their lives under the horrific conditions of factory farming, many of whom end their lives by being boiled alive. Indeed, there is no justification for not taking humanity’s impact on non-human animals — the vast majority of sentient beings on the planet — into consideration as well when assessing the value of our condition.


My main purpose in this essay has not been to draw any conclusions about the value of our condition. Rather, my aim has merely been to argue that we likely have an enormous elephant in our brain that causes us to evaluate our lives, individually as well as collectively, in overoptimistic terms (though some of us perhaps do not), and to ignore the many considerations that might suggest a negative conclusion. An elephant that leads us to eagerly assume that “it’s all pretty good and worthwhile”, and to flinch away from serious, sober-minded engagement with questions concerning the value of our condition, including whether it would be better if there had been no sentient beings at all.

In Defense of Nuance


The world is complex. Yet most of our popular stories and ideologies tend not to reflect this complexity. Which is to say that our stories and ideologies, and by extension we, tend to have insufficiently nuanced perspectives on the world.

Indeed, falling into a simple narrative through which we can easily categorize and make sense of the world — e.g. “it’s all God’s will”; “it’s all class struggle”; “it’s all the Muslims’ fault”; “it’s all a matter of interwoven forms of oppression” — is a natural and extremely powerful human temptation. And something social constructivists get very right is that this narrative, the lens through which we see the world, influences our experience of the world to an extent that is difficult to appreciate.

All the more important, then, that we suspend our urge to embrace simplistic narratives through which to (mis)understand the world. In order to navigate wisely in the world, we need views that reflect its true complexity, not views that merely satisfy our need for simplicity (and social signaling; more on this below). For although simplicity can be efficient, and to some extent is necessary, it can also, when too much relevant detail is left out, be terribly costly. And relative to the needs of our time, I think most of us naturally err on the side of being expensively unnuanced, painting a picture of the world with far too few colors.

Thus, the straightforward remedy I shall propose and argue for here is that we need to control for this. We need to make a conscious effort to gain more nuanced perspectives. This is necessary as a general matter, I believe, if we are to be balanced and well-considered individuals who steer clear of self-imposed delusions and instead act wisely toward the betterment of the world. Yet it is also necessary for our time in particular. More specifically, it is essential in addressing the crisis that human conversation seems to be facing in the Western world at this point in time: a crisis that largely seems to be the result of insufficient nuance in our perspectives.

Some Remarks on Human Nature

There are certain facts about the human condition that we need to put on the table and contend with. These are facts about our limits and fallibility which should give us all pause about what we think we know — both about the world in general as well as ourselves in particular.

For one, we have a whole host of well-documented cognitive biases. There are far too many for me to list them all here, yet some of the most important ones are: confirmation bias (the tendency of our minds to search for, interpret, and recall information that confirms our pre-existing beliefs); wishful thinking (our tendency to believe what we wish were true); and overconfidence bias (our tendency to have excessive confidence in our own beliefs — in one study, people who reported being 100 percent certain about their answer to a question were correct less than 85 percent of the time). And while we can probably all recognize these pitfalls in other people, it is much more difficult to realize and admit that they afflict us as well. In fact, our reluctance to realize this is itself a well-documented bias, known as the bias blind spot.
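The gap between stated certainty and actual accuracy can be checked with a simple calibration tally. A minimal sketch (the confidence/correctness pairs below are made up for illustration; the “less than 85 percent” figure comes from the study itself, not from this code):

```python
# Tally accuracy among answers for which the subject claimed 100 percent
# certainty. The (confidence, correct) pairs below are made up for
# illustration, not taken from the study cited in the text.

answers = [
    (1.00, True), (1.00, True), (1.00, False), (1.00, True),
    (0.80, True), (0.80, False), (0.60, False),
]

certain = [correct for confidence, correct in answers if confidence == 1.00]
accuracy = sum(certain) / len(certain)
print(f"Accuracy when '100% certain': {accuracy:.0%}")  # 75% on this toy data
```

A well-calibrated believer would show accuracy close to their stated confidence at every level; the overconfidence finding is precisely that the “100 percent” bucket falls well short of 100.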

Beyond realizing that we have fallible minds, we also need to realize the underlying context that has given rise to much of this fallibility, and which continues to fuel it, namely: our social context — both the social context of our evolutionary history as well as of our present condition. We humans are deeply social creatures, and it shows at every level of our design, including the level of our belief formation. And we need to be acutely aware of this if we are to form reasonable beliefs with minimal amounts of self-deception.

Yet not only are we social creatures, we are also, by nature, deeply tribal creatures. As psychologist Henri Tajfel showed, one need only assign one group of randomly selected humans the letter “A” and another randomly selected group the letter “B” in order for a surprisingly strong in-group favoritism to emerge. This method for studying human behavior is known as the minimal group paradigm, and it shows something about us that history should already have taught us a long time ago: that human tribalism is like gasoline just waiting for a little spark to be ignited.

This social and tribal nature of ours has implications for how we act and what we believe. It is, for instance, largely what explains the phenomenon of groupthink, which is when our natural tendency toward (in-)group conformity leads to a lack of dissenting viewpoints among individuals in a given group, which then, in turn, leads to poor decisions by these individuals.

Indeed, our beliefs about the world are far more socially influenced than we realize. Not just in the obvious way that we get our views from others around us — often without much external validation or testing — but also in that we often believe things in order to signal to others that we possess certain desirable traits, or that we are loyal to them. This latter way of thinking about our beliefs is quite at odds with how we prefer to think about ourselves, yet the evidence for this unflattering view is difficult to deny at this point.

As authors Robin Hanson and Kevin Simler argue in their recent book The Elephant in the Brain, we humans are strategically self-deceived about our own motives, including when it comes to what motivates our beliefs. Beliefs, they argue, serve more functions than just the function of keeping track of what is true of the world. For while beliefs surely do have this practical function, they also often serve a very different, very social function, which is to show others what kind of person we are and what kind of groups we identify with. This makes beliefs much like clothes, which have the practical function of keeping us warm while, for most of us, also serving the function of signaling our taste and group affiliations. And one of Hanson and Simler’s essential points is that we are not aware of the fact that we do this, and that there is an evolutionary reason for this: if we realized (clearly) that we believe certain things for social reasons, and if we realized that we display our beliefs with overconfidence, we would be much less convincing to those we are trying to convince and impress.

Practical Implications of Our Nature

This brief survey of the natural pitfalls and fallibilities of our minds is far from exhaustive, of course. But it shall suffice for our purposes. The bottom line is that we are creatures who naturally want our pre-existing beliefs confirmed, and who tend to display too high levels of confidence about these beliefs. We do this in a social context, and many of the beliefs we hold serve non-epistemic functions within this context, which include the tribal function of showing others how loyal we are to certain groups, as well as how worthy we are as friends and mates. In other words, we have a natural pull to impress our peers, not just with our behavior but also with our beliefs. And, for socially strategic reasons, we are quite blind to the fact that we do this.

So what, then, is the upshot of all of this? It is clear, I submit, that these facts about ourselves do have significant implications for how we should comport ourselves. In short, they imply that we have a lot to control for if we aspire to have reasonable beliefs — and our own lazy mind, with all its blindspots and craving for simple comfort, is not our friend in this endeavor. The fact that we are naturally biased and tendentious implies that we should doubt our own beliefs and motives. And it implies that we need to actively seek out the counter-perspectives and nuance that our confirmation bias, this vile bane of reason, so persistently struggles to keep us from accessing.

Needless to say, these are not the norms that govern our discourse at this point in time. Sadly, what plays out right now is mostly the unedited script of tribal, confirmation-biased human nature, unfazed by the prefrontal interventions that seem to be just about our only hope of rewriting this script into something better.

The Virtues of the Good Conversationalist

Let us elaborate a bit on the implications of our fallibility, and the precepts we should follow if we want to control for these unflattering tendencies and pitfalls of human nature. Recall the study cited above: people who reported being 100 percent certain about their answer to a question were correct less than 85 percent of the time. The fact that we can be so wrong — more than 15 percent of the time when we claim perfect certainty(!) — implies, among other things, that when someone tells us we are wrong, we seem to have a prima facie reason to listen and try our best to understand what they are saying, as they may just be right. Of course, the tendency toward overconfidence will all but surely be shared by this other person as well, who could also be wrong. And our task then lies in finding out which it is. This is the importance of conversation. It is nothing less than the best tool we have, collectively, against being misguided. And that is why we have to become good conversationalists.

What does it take to become that? At the very least, it requires an awareness of our biases, and a deliberate effort to counteract them.

Countering Confirmation Bias

To counteract our confirmation bias, we need to loosen our attachment to pre-existing beliefs, and to seek out viewpoints and arguments that may contradict them. The imperative of doing this derives from nothing less than the basic epistemic necessity of taking all relevant data into consideration rather than a small cherry-picked selection. For the truth is that we all cherry-pick data a little bit here and there in favor of our own position, and so by hearing from people with opposing views, and by examining their cherry-picked data and their particular emphasis and interpretation, we will, in the aggregate, tend to get a more balanced picture of the issue at hand.

And, importantly, we should strive to engage with these other views in a charitable way: by assuming good faith on behalf of the proponents of any position; by trying to understand their view as well as possible; and by then engaging with the strongest possible version of that position (i.e. the steel man rather than the straw man version of it). Indeed, it is difficult to overstate just how much the state of human conversation would improve if we all just followed this simple precept: be charitable.

Countering Wishful Thinking

Our propensity for wishful thinking should make us skeptical of beliefs that are convenient and which match up with what we want to be true. If we want there to be a God, and we believe there is one, then this should make us at least a little skeptical of this convenient belief. By extension, our attraction toward the wishful also implies that we should pay more attention to information and arguments that suggest conclusions which are inconvenient or otherwise contrary to what we wish were true. Do we believe the adoption of a vegan lifestyle would be highly inconvenient for us personally? Then we should probably expect to be more than a little biased against any argument in its favor, and indeed, if we suspect the argument has merit, be inclined to ignore it altogether rather than giving it a fair hearing.

Countering Overconfidence Bias

When it comes to correcting for our overconfidence bias, the key virtue to embrace is intellectual humility (or at least so it seems to me). That is, to admit and speak as though we have a limited and fallible perspective on things. In this respect, it also helps to be aware of the social factors that might be driving our overconfidence much of the time. As noted above, we often express certainty in order to signal to third parties, as well as to instill strong doubts in those we engage with. And we do this without being aware of it. This social function of confidence should lead us to update away from bravado and toward being more measured. Again: to be intellectually humble.

Countering In-Group Conformity

Another way in which social forces make us less than reasonable is by compelling us to conform to our peers. As hinted above, our beliefs are subject to in-group favoritism, which highlights the importance of being (especially) skeptical of the beliefs we share with groups that we affiliate closely with, and to practice playing the devil’s advocate against these beliefs. And, by extension, to try to be extra charitable toward the beliefs held by the notional out-group, whether it be “the Left” or “the Right”, “the religious” or “the atheists”.

Beyond that, we should also be aware that our minds likely often paint the out-group in an unfairly unfavorable light, viewing them as much less sincere and well-intentioned — one may even say more evil — than they actually are, however misguided (we may think) their particular views are. And it seems a natural temptation for us to try to score points by publicly broadcasting such a negative view of the out-group as a way of showing our in-group just how unlikely we are to change affiliation.

Thinking in Degrees of Certainty

It seems that we have a tendency to express our views in a very binary, 0-or-1 fashion. We tend to be either clearly for something or clearly against it, be it abortion, efforts to prevent climate change, the death penalty, or universal health care. And it seems to me that what we express outwardly is generally much more absolutist, i.e. more purely 0 or 1, than what happens inwardly, under the hood — perhaps even underneath our conscious awareness — where there is probably more conflicting data than what we are aware of and allow ourselves to admit.

I have observed this pattern in conversations: people will argue strongly for a given position which they continue to insist on, until, quite suddenly it seems, they say that they accept the opposite conclusion. In terms of their outward behavior, they went from 0 to 1 quite rapidly, although it seems likely that the process that took place underneath the hood was much more continuous — a more gradual move from 0 to 1, where the signal “express 1 now” was then passed at some threshold.

An extreme example of similar behavior found in recent events is that of Omarosa Manigault Newman, who was the so-called Director of African-American Outreach for Donald Trump’s presidential campaign in 2016. She went from describing Trump in adulatory terms and calling him “a trailblazer on women’s issues”, to being strongly against him and calling him a racist and a misogynist. It seems unlikely that this shift was based purely on evidence she encountered after she made her adulatory statements. There was probably a lot of information in her brain that contradicted the claim of Trump’s status as such a trailblazer, but which she ignored and suppressed. And the reason why is quite obvious: she had a political aim. She needed to broadcast the message that Trump was a good person to further a campaign and to further her own career tied to this campaign. It was about signaling first, not truth-tracking (which is not to say that she did not sincerely believe what she said, but her sincere belief was probably just conveniently biased).

The important thing to realize, of course, is that this applies to all of us. We are all inclined to be more like a politician than a scientist in many situations. In particular, we are all inclined to believe and express either a pure 0 or a pure 1 for social reasons. And the nature of these social reasons may vary. It may be about signaling opposition to someone who believes the opposite, or about signaling loyalty to a given group (few groups rally around low-credence claims). It may also be about signaling that we have a mind that is of a strong conviction. After all, doubt is generally not sexy. Just consider the words we usually associate with it, such as uncertainty, confusion, and indecision. Certainty, on the other hand, signals strength, and is commonly associated with more positive words such as decisiveness, confidence, resoluteness, and firmness. And so, for this reason as well, it only seems natural that we would generally be inclined to signal certainty rather than doubt, even when we do not possess anything close to justified certainty.

Fortunately, there exists a corrective for our tendency toward 0-or-1 thinking, which is to think in terms of credences along a continuum, ranging from 0 to 1. For one, this would constitute a more honest form of communication, in that it would force us to carefully weigh all the information that our brain keeps hidden from us, as well as to express its underlying credence in detail — as opposed to merely expressing whether this credence has crossed some given threshold. Yet perhaps even more significantly, thinking in terms of such a continuum would also help subvert the tribal aspect of our either-or thinking by placing us all in the same boat: the boat of degrees of certainty, in which the only thing that differs between us is our level of certainty in any given claim. For example, think how strange it would be for a religious believer to present their religious beliefs by saying that their credence in the existence of a God lies around 93 percent. This is a much weaker statement, in terms of its social signaling function, than a statement such as “I am a Christian”.
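To make the contrast concrete, here is a minimal sketch in Python (with purely hypothetical numbers of my own choosing) of the difference between reporting a continuous credence and reporting a thresholded, either-or belief:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior credence in a hypothesis H after seeing evidence E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Hypothetical numbers: a 0.5 prior, and evidence that is three times
# as likely if H is true as if H is false.
credence = bayes_update(prior=0.5, p_e_given_h=0.6, p_e_given_not_h=0.2)

# The honest, continuous report keeps the intermediate value ...
print(round(credence, 2))  # 0.75

# ... while the socially convenient report collapses it to a pure 0 or 1.
binary_report = 1 if credence > 0.5 else 0
print(binary_report)  # 1
```

The point is simply that the thresholded report discards most of the information: a credence of 0.51 and a credence of 0.99 both come out as "1", even though they reflect very different states of evidence.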

Such an honest, more detailed description of one’s beliefs is not good for keeping groups divided by different beliefs. Indeed, it is good for the exact opposite: it helps us move toward a more open and sincere conversation about what we in fact believe and why, regardless of our group affiliations.

Different Perspectives Can Be Equally True

There are two common definitions of the term "perspective" that are quite different, yet closely related at the same time. One is "a mental outlook/point of view", while the other is "the art of representing three-dimensional objects on a two-dimensional surface". They are related in that the latter can be viewed as a metaphor for the former: our particular perspective, the representation of the world we call our point of view, is in a sense a limited two-dimensional representation of a more complex, multi-dimensional reality, a representation that is bound to leave out a lot of information about that reality. The best we can do, then, is to try to paint the two-dimensional canvas that is our mind so as to make it as rich and informative as possible about the complex and many-faceted world we inhabit.

An important realization in our quest for more balanced and nuanced views, as well as for the betterment of human conversation, is that seemingly conflicting reports of different perspectives on the same underlying reality can in fact all be true, as hinted by the following illustrations:


[Illustrations: the same three-dimensional object projecting a square-shaped outline from one angle and a circle-shaped outline from another, depending on the viewing angle.]


The same object can cast very different projections when viewed from different angles. Similarly, the same events can be viewed very differently by different people who each have their own unique dispositions and prior experiences. And these different views can all be true; John really does see X when he looks at this event, while Jane really does see Y. And, like the square- and circle-shaped projections above, X and Y need not be incompatible. (A similar sentiment is reflected in the Jain doctrine of Anekantavada.)

And even when someone does get something wrong, they may nonetheless still be reporting the appearance of the world as it is revealed to them as honestly and accurately as they can. For example, to many of us, it really does seem as though the lines in the following picture are not parallel, although they in fact are:


[Illustration: a visual illusion in which straight, parallel lines appear to be tilted or bent.]


This is merely to state the obvious point that it is possible, indeed quite common, to be perfectly honest and wrong at the same time. That is worth keeping in mind when we engage with people whom we think are obviously wrong: they usually think they are right, and that we are obviously wrong, perhaps even dishonest.

Another important point the visual illusion above hints at is that we should be careful not to confuse external reality with our representation of it. Our conscious experience of the external world is not, obviously, the external world itself. And yet we tend to speak as though it were.

This is an evolutionarily adaptive illusion no doubt, but it is an illusion nonetheless. All we ever inhabit is, in the words of David Pearce, our own world simulation, a world of conscious experience residing in our head. And given that we all find ourselves stuck in — or indeed as — such separate, albeit mutually communicating bubbles, it is not so strange that we can have so many disagreements about what we think reality is like. All we have to go on is our own private, phenomenal cartoon model of each other and the world at large; a cartoon model that may get many things right, but which is also sure to miss a lot of important things.

Framing Shapes Our Perspective

From the vantage point of our respective world simulations, we each interpret information from the external world with our own unique framing. And this framing in part determines how we will experience it, as demonstrated by the following illustration, where one can change one’s framing so as to either see a duck or a rabbit:


[Illustration: the classic duck-rabbit figure, which can be seen either as a duck or as a rabbit.]


The same goes for the following illustration, in which one's framing determines whether one sees a cube from above or from below, or indeed just a flat, two-dimensional pattern without depth:


[Illustration: a Necker cube, which can be seen as a cube viewed from above, a cube viewed from below, or a flat two-dimensional pattern.]

Sometimes, as in the examples above, our framing is readily alterable. In other cases, however, it can be more difficult to simply switch our framing, as when different people with different life experiences naturally interpret the same scenario in very different ways. For instance, a physicist might enter a room and see a lot of interesting physical phenomena: air consisting of molecules that bounce around in accord with the laws of thermodynamics; sound waves traveling adiabatically across the room; long lamps swaying at their natural frequency while emitting photons. An artistic person, in contrast, may enter the same room and instead see a lot of people. And this person may view these people as a sea of flowing creative potential in the process of being unleashed, inspired by deeply emotional music and a warm, glowing light that fits perfectly with the atmosphere of the music.

Although these two perspectives on the events of this same room are very different, neither of them is necessarily wrong. Indeed, they seem perfectly compatible, despite representing what appear to be two very different cognitive styles, two different paradigms of thinking and perceiving, one might say. And what is important to realize is that a similar story applies to all of us. We all experience the world in different ways, due to our differing biological dispositions, life experiences, and vantage points. And while these different experiences are not necessarily incompatible, it can nonetheless be difficult to achieve mutual understanding between such differing perspectives.

Acknowledging Many Perspectives Is Not a Denial of Truth

It should be noted, however, that none of the above makes a case for the relativistic claim that there are no truths. On the contrary, what the above implies is that it is a truth, as hard and strong as could be, that different individuals can have different perspectives and experiences in reaction to the same external reality, and that such differing perspectives can all have merit, even if they seem in tension with each other. To acknowledge this fact by no means amounts to the illogical claim that no given perspective can ever be wrong or make false claims about reality; such wrong claims are, sadly, all too common. This middle position of rejecting both the claim that there is only one valid perspective and the claim that there are no truths is, I submit, the only reasonable one on offer.

And the fact that there can be merit in a plurality of perspectives implies that, beyond conceiving of our credences along a continuum ranging from 0 to 1, we also need to think in terms of a diversity of continua in a more general sense if we are to gain a fuller, more nuanced understanding that does justice to reality, including the people around us with whom we interact. More than just thinking in terms of the shades of grey found in between the two endpoints of black and white, we need to think in terms of many different shades of many different colors.

At the same time, it is also important to acknowledge the limits of our understanding of other minds and of experiences we have not had. This does not amount to some obscure claim that we each have our own, wholly incommensurable experiences, and hence that mutual understanding between individuals with different backgrounds is impossible. Rather, it is simply to acknowledge that psychological diversity is real, which implies that we should be careful to avoid the so-called typical mind fallacy, and that at least some experiences simply cannot be conveyed faithfully with words alone to those who have not had them. This does, at the very least, pose a challenge to the endeavor of communicating with and understanding each other. For example, most of us have never experienced extreme forms of suffering, such as the experience of being burned alive. And beyond describing this class of experiences with thin yet accurate labels such as "horrible" and "bad", most of us are surely very ignorant about them, luckily for us.

However, the realization that we do not know what certain experiences are like is itself an important insight that helps expand and advance our outlook. For it at least helps us realize that our own understanding, as well as the range and variety of experiences we are familiar with, is far from exhaustive. With this realization in mind, we can look upon a state of absolute horror and admit that we have virtually no understanding of just how bad it is. This, I submit, constitutes a significantly greater understanding than beholding such a state with the same absence of comprehension while failing to admit that our comprehension is absent. The realization that we are ignorant itself constitutes knowledge of sorts: the kind of knowledge that makes us rightfully humble.

Grains of Truth in Different Perspectives

Even when two different perspectives are indeed in conflict with each other, this does not imply that both are entirely wrong, as there can still be significant grains of truth in each of them. Most of today's widely endorsed perspectives and narratives make a wide range of claims and arguments, and even if not all of these stand up to scrutiny, many of them often do, at least when modified slightly. And part of being charitable is to seek out such grains of truth in a position one does not agree with. This can also help us realize which truths and plausible claims might motivate people to support (what we consider) misguided views, and thus help further mutual understanding among us. Therefore, this seems a reasonable precept to follow as well: sincerely ask what might be the grains of truth in the views you disagree with. One can almost always find something, and often a good deal more than one would naively have thought.

As mentioned earlier, it is also possible for different perspectives to support what seem to be very different positions on the same subject without necessarily being wrong in any way, provided they apply different lenses or look in different directions. Indeed, different perspectives on the same issue are often merely the result of different emphases, each focusing on certain framings and sets of data rather than others. Seemingly incompatible perspectives may thus in fact all be right about the particular aspects of a given subject that they emphasize, which is why it is important to seek out treatments of the same subject from multiple angles. Oftentimes, it is not that novel perspectives show our current perspective to be wrong, but merely that it is not sufficiently nuanced; that is, we have failed to take certain things into account, such as alternative framings, particular kinds of data, and critical counter-considerations.

This is, I believe, a common pattern in human conversation, and another sense in which we should be mindful of the possible existence of different grains of truth, namely: when different views on the same subject are all completely true, yet each of them merely comprises a small grain in the larger mosaic that is the complete truth. And hence we should remind ourselves that just because we are right does not mean that the person who says something else on the same subject is wrong.

Having made a general case for nuance, let us now turn our eyes toward our time in particular, and why it is especially important to actively seek to be nuanced and charitable today.

Our Time Is Different

Every period in history likely sees itself as uniquely unique. Yet in terms of how humanity communicates, it is clear that our time indeed is a highly unique one. For never before in history has human communication been so screen-based as it is today. Or, expressed equivalently: never before has so much of our communication been without face-to-face interaction. And this has significant implications for how and what we communicate.

It is clear that our brains process communication through a screen very differently from face-to-face communication. Writing a message in a Facebook group consisting of a thousand people does not, for most of us, feel remotely the same as delivering the same message in front of a crowd of a thousand people. And a similar discrepancy between the two forms of communication is found when we interact with just a single person, which is no wonder. Communication through a screen consists of a string of black and white symbols. Face-to-face interaction, in contrast, is composed of multiple streams of information. We read off important cues from a person's face and posture, as well as from the tone and pace of their voice.

All this information provides a much more comprehensive, one might indeed say more nuanced, picture of the state of mind of the person we are interacting with. We get the verbal content of the conversation (as we would through a screen), plus a ton of information about the emotional state of the other. And beyond being informative, this information also serves the purpose of making the other person relatable. It makes the reality of their individuality and emotions almost impossible to deny, which is much less true when we communicate through a screen.

Indeed, it is as though these two forms of communication activate entirely different sets of brain circuits. Not only in that we communicate via a much broader bandwidth and likely see each other as more relatable when we communicate face-to-face, but also in that face-to-face communication naturally motivates us to be civil and agreeable. When we are in the direct physical presence of someone else, we have a strong interest in keeping things civil enough to allow our co-existence in the same physical space. When we interact through a screen, however, this is no longer a necessity. The notional brain circuitry underlying peaceful co-existence with antagonists can more safely be put on stand-by mode.

The reality of these differences between the two forms of communication has, I would argue, some serious implications. First of all, it highlights the importance of being aware that these two forms of communication are indeed very different, and that we are, in various ways, quite handicapped communicators when we communicate through a screen, often entering a state of mind that perhaps only a sociopath would be able to maintain in a face-to-face interaction. This handicap further implies that we should be even more aware of the tendencies reviewed above when interacting through a screen, as these tendencies then become much easier and more tempting to indulge. It is (even) more difficult to relate to those who disagree with us, and we have (even) less of an incentive to understand them properly and be civil. Which is to say that it is (even) more difficult to be charitable. Written communication through a screen makes it easier than ever before to paint the out-group antagonists we interact with in an unreasonably unfavorable light.

And our modern means of communication arguably also make it easier than ever before to not interact with the out-group at all, as the internet has made it possible for us to diverge into our own respective in-group echo chambers to an extent not possible in the past. It is therefore now easy to end up in communities in which we continuously echo data that supports our own narrative, which ultimately gives us a one-sided and distorted picture of reality. And while it may be easier than ever to find counter-perspectives if we were to look for them, this is of little use if we mostly find ourselves collectively indulging in our own in-group confirmation bias. As we often do. For instance, feminists may find themselves mostly informing each other about how women are being discriminated against, while men's rights activists may disproportionately share and discuss ways in which men are discriminated against. And so by joining only one of these communities, one is likely to end up with a skewed, insufficiently nuanced view of reality.

This mode of interaction has serious sociological implications. Indeed, the change in our style of interaction brought about by the internet is probably in large part why, in spite of the promise technology seemed to hold to connect us with each other, we now appear increasingly balkanized, divided along various lines in ways that feed into our tribal nature all too well. Democrats and Republicans, for example, increasingly see each other as a "threat to the nation's well-being", significantly more so than they did even just ten years ago. This is a real problem that does not seem to be going away on its own. And one of the greatest hopes we have for improving this situation is, I submit, to become aware of and actively try to control for our own pitfalls. Especially when we interact through screens.

With all the information we have reviewed thus far in mind, let us now turn to some concrete examples of heated issues that divide people today, and where more nuanced perspectives and a greater commitment to being charitable are desperately needed. (I should note, however, that given the brevity of the following remarks, what I write here on these issues is, needless to say, itself bound to fail to express a highly nuanced perspective, as that would require a longer treatment. Nonetheless, the following brief remarks will at least gesture at some ways in which we can generally be more nuanced about these topics.)

Sex Discrimination

As hinted above, there are two groups that seem to tell very different stories about the state of sex discrimination in our world today. On the one hand, there are the feminists, who seem to argue that women generally face much more discrimination than men; on the other, there are the so-called men's rights activists, who seem to argue that men are, at least in some parts of the world, generally the sex facing more discrimination. And these two claims surely cannot both be right, can they?

If one were to define sex discrimination in terms of some single general measure, a “General Discrimination Factor”, then no, they could not both be right. Yet if one instead talks about concrete forms of discrimination, then it is entirely possible, and indeed clearly the case, that women are discriminated against more than men in some respects, while men face more discrimination in other respects. And it is arguably also much more fruitful to talk about such concrete cases than it is to talk about discrimination “in general”. (In response to those who insist that it is obvious that women face more discrimination everywhere, almost regardless of how one constructs such a general measure, I would recommend watching the documentary The Red Pill, and, for a more academic treatment, reading David Benatar’s The Second Sexism.)

For example, it is a well-known fact that women were, historically, granted the right to vote much later than men, which undeniably constitutes a severe form of discrimination against women. Similarly, women have also historically been denied the right to pursue a formal education, and they still are in many parts of the world. In general, women have been denied many of the opportunities that men have had, including access to professions in which they were clearly more than competent to contribute. These are all undeniable facts about undeniably severe forms of discrimination.

However, tempting as it may be to infer, none of this implies that men have not also faced severe discrimination in the past, nor that they escape such discrimination today. For example, it is generally only men who have been subject to conscription, i.e. a forced duty to enlist for state service, such as in the military. Historically, as well as today, men have been forced by law to join the military and go to war, often without returning, whether they wanted to or not. (Sure, some men wanted to join the military, yet the fact that some men wanted to do this does not imply that making it compulsory for virtually all men, and only men, is not discriminatory. As a side note, it should be noted that many feminists have criticized conscription.)

Thus, at a global level, it is true to say that, historically as well as today, women have generally faced more discrimination in terms of their rights to vote and pursue an education, as well as in their professional opportunities in general, while men have faced more discrimination in terms of state-enforced duties.

Different forms of discrimination against men and women are also present at various other levels. For example, in one study, the same job application was sent to different scientists, with half of the applications bearing a female name and the other half a male name. The "female applicants" were generally rated as less competent, and the scientists were willing to offer the "male applicants" a salary more than 14 percent higher.

The same general pattern is reported by those who have conducted a controlled experiment in being a man and a woman "from the inside", namely transgender men (those who have transitioned from being a woman to being a man). Many of these men report being viewed as more competent after their transition, as well as being listened to more and interrupted less. This also fits with the finding that both men and women seem to interrupt women more than they interrupt men.

At the same time, many of these transgender men also generally report that people seem to care less about them now that they are men. As one transgender man wrote about the change in his experience:

What continues to strike me is the significant reduction in friendliness and kindness now extended to me in public spaces. It now feels as though I am on my own: No one, outside of family and close friends, is paying any attention to my well-being.

Such anecdotal reports seem well in line with the finding that both men and women show more aggression toward men than women, as well as with recent research (see page 137) conducted by social psychologist Tania Reynolds, which among other things found that:

[…] female harm or disadvantage evoked more sympathy and outrage and was perceived as more unfair than equivalent male harm or disadvantage. Participants more strongly blamed men for their own disadvantages, were more supportive of policies that favored women, and donated more to a female-only (vs male-only) homeless shelter. Female participants showed a stronger in-group bias, perceiving women’s harm as more problematic and more strongly endorsed policies that favored women.

As these examples show, it seems that men and women are generally discriminated against in different ways. And it is worth noting that these different forms of discrimination are probably in large part the natural products of our evolutionary history rather than some deliberate, premeditated conspiracy (which is obviously not to say that they are ethically justified).

Yet deliberation and premeditation is exactly what is required if we are to step beyond such discrimination. More generally, what seems required is that we get a clearer view of the ways in which women and men face discrimination, and that we then take active steps toward remedying these problems. Something that is only possible if we allow ourselves enough of a nuanced perspective to admit that both women and men are subject to serious discrimination and injustice.


It seems that many progressives are inspired by the theoretical framework called intersectionality, according to which we should seek to understand many aspects of the modern human condition in terms of interlocking forms of power, oppression, and privilege. One problem with relying on this framework is that it can easily become a case of seeing only nails because all one has is a hammer. If one insists on understanding the world predominantly in terms of oppression and social privilege, one risks seeing them in many places where they are not, as well as overemphasizing their relevance in many cases, and, by extension, underemphasizing the importance of other factors.

As with most popular ideas, there is no doubt a significant grain of truth in some of what intersectional theory talks about, such as the fact that discrimination is a very real phenomenon, that privilege is too, and that both of these phenomena can compound. Yet the narrow focus on only social explanations and versions of these phenomena means that intersectional theory misses a lot about the nature of discrimination and privilege. For example, some people are privileged to be born with genes that predispose them to be very happy, while others have genes that predispose them to chronic depression. Two such people may be of the same race, gender, and sexuality, and they may be equally able-bodied. Yet they will most likely have very different opportunities and quality of life. A similar thing can be said about genetic differences that predispose individuals to have a higher or lower IQ, as well as about genetic differences that make people more or less physically attractive.

Intersectional theory seems to have very little to say about such cases, even as these genetic factors seem able to impact opportunities and quality of life to a similar degree as discrimination and social exclusion. Indeed, it seems that intersectional theory actively ignores, or at the very least underplays, the relevance of such factors — what may be called biological privileges in general — perhaps because they go against the tacit assumption that inequity and other bad things must be attributable to an oppressive agent or social system in some way, as opposed to just being the default outcome one should expect to find in an apathetic universe.

In general, it seems that intersectional theory significantly underestimates the importance of biology, which is, of course, by no means a mistake that is unique to intersectionality in particular. And it is indeed understandable how such an underestimation can emerge. For the truth is that many of the most relevant human traits, including those of personality and intelligence, are strongly influenced by both genetic and environmental factors. Indeed, around 40-60 percent of the variance of such traits tends to be explained by genetics, and, consequently, the amount of variance explained by the environment lies roughly in this range as well. This means that, with respect to these traits, it is both true to say that cultural factors are extremely significant, and to say that biological factors are extremely significant. And the mistake that many seem to make, including many proponents of intersectionality, is to believe that one of these truths rules out the other.
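To see how both factors can be significant at once, consider this toy simulation (purely hypothetical numbers, assuming independent genetic and environmental contributions of equal spread, which is of course a simplification of real behavioral genetics):

```python
import random

random.seed(0)

n = 100_000
# A hypothetical trait built from an independent genetic component and
# an environmental component of similar spread.
genes = [random.gauss(0, 1.0) for _ in range(n)]
environment = [random.gauss(0, 1.0) for _ in range(n)]
trait = [g + e for g, e in zip(genes, environment)]

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

# Because the components are independent, their variances add up, so each
# component explains roughly half of the trait's total variance.
share_genes = variance(genes) / variance(trait)
share_env = variance(environment) / variance(trait)
print(round(share_genes, 2), round(share_env, 2))  # both close to 0.5
```

In this sketch it is simultaneously true that genes explain about half the variance and that the environment explains about half the variance; neither truth rules out the other.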

Another critique one can direct at intersectional theory is that it often makes asymmetrical claims about how one group, "the privileged", is unable to understand the experiences of another group, "the unprivileged", whatever form the privilege and lack thereof may take. Yet it is rarely conceded that this argument can also, with roughly as much plausibility, be made the other way around: the (allegedly) unprivileged might not fully understand the experience of the (allegedly) privileged, and they may, in effect, overstate the differences in their experience, and overstate how easy the (allegedly) privileged in fact have it. A commitment to intellectual openness and honesty would at least require us not to dismiss this possibility out of hand.

A similar critique that intersectional theorists ought to contend with is that some of the people who, according to intersectional theory, are discriminated against and oppressed themselves argue that they are not, and some indeed further argue that many of the solutions and practical steps supported by intersectional theorists are often harmful rather than beneficial. Such voices must, at least, be counted as weak anomalies relative to the theory, and considered worthy of serious engagement.

More generally, a case can be made that intersectional theory greatly overemphasizes group membership and identities in its analyses of and attempts to address societal problems. As Brian Tomasik notes:

[…] I suspect it’s tempting for our tribalistic primate brains to overemphasize identity membership and us-vs.-them thinking when examining social ills, rather than just focusing on helping people in general with whatever problems they have. For example, I suspect that one of the best ways to help racial minorities in the USA is to reduce poverty (such as through, say, universal health insurance), rather than exploring ever more intricate nuances of social-justice theory.

A regrettable complication that likely bolsters the focus of intersectionalists is that many people seem to flatly deny that there are any grains of truth to any of the claims intersectional theory makes. Some claim, for instance, that there is no such thing as being transgendered, and that there barely is such a thing as racial or sex discrimination in the Western world today. Rather than serving as a meaningful critique of the overreaches of intersectionality, such unnuanced and ill-informed statements seem likely to only help convince intersectionalists that they are uniquely right while others are dangerously wrong, as well as to suggest to them that more radical tactics may be needed, since current tactics clearly do not work to make other people see basic reality for what it is.

This speaks to the more general point that if we make measured views a rarity, and convince ourselves that all one can do is join either team A or team B — e.g. “camp discrimination exists” or “camp discrimination does not exist” — then we only push people toward division. We risk finding ourselves in a run-away spiral where people try to eagerly signal that they do not belong to the other team, which may in turn push us toward ever more extreme views. The alternative option to this tribal game is to simply aspire toward, and express, measured and nuanced views. That might just be the best remedy against such polarization and toward reasonable consensus. Whether our tribal brains indeed want such a consensus is, of course, a separate question.

A final critique I would direct at mainstream intersectional theory is that, despite its strong focus on unjustified discrimination, it nonetheless generally fails to acknowledge and examine what is, I have argued, the greatest, most pervasive, and most harmful form of discrimination that exists today, namely: speciesism, the unjustified discrimination against individuals based on their species membership. The so-called argument from species overlap is rarely examined, nor are the implications that follow, including when it comes to what equality in fact entails. This renders mainstream versions of intersectionality, as a theory of discrimination against vulnerable individuals, a complete failure.

Political Correctness

Another controversial issue closely related to intersectionality is that of political correctness. What do we mean by political correctness? The answer is actually not straightforward, since the term has a rather complex history throughout which it has had many different meanings. Yet one sense of the term that was at least prominent at one point refers simply to conduct and speech that embodies fairness and common decency toward others, especially in a way that avoids offending particular groups of people. In this sense of the term, political correctness is about not referring to people with ethnic slurs, such as “nigger” and “paki”, or homophobic slurs, such as “faggot” and “dyke”. A more recent sense of the term, in contrast, refers to instances where such a commitment to not offend people has been taken too far (in the eyes of those who use the term), which is arguably the sense in which it is most commonly used today.

This then leads us to what seems the quintessential point of contention when it comes to political correctness, namely: what is too far? What does the optimum level of decency entail? And the only reasonable answer, I believe, will have to be a nuanced one found between the two extremes of “nothing is too offensive” and “everything is too offensive”.

Some seem to approach this subject with the rather unnuanced attitude that feelings of being offended do not matter in any way whatsoever. Yet this view seems difficult to maintain, at least once one is oneself called a pejorative name in earnest. For most people, such name-calling is likely to hurt — indeed, it can easily hurt quite a lot. And significant amounts of hurt and unpleasantness do, I submit, matter. A universe with fewer and less intense feelings of offense is, other things being equal, better than a universe with more and more intense feelings of offense.

Yet the words “other things being equal” should not be missed here. For the truth is that there can be, indeed there clearly is, a tension between 1) the risk of offending people and 2) talking freely and honestly about the realities of life. And it is not clear what the optimal balance is.

Yet what is quite clear, I would argue, is that if we cannot talk in an unrestricted way about what matters most in life, then we have gone too far. In particular, if we cannot draw distinctions between different kinds of discrimination and forms of suffering, and if we are not allowed to weigh these ills against each other to assess which are most urgent, then we have gone too far. For if we deny ourselves a clear sense of proportion with respect to the problems of the world, we end up undermining our ability to sensibly prioritize our limited resources in a world that urgently demands reasonable prioritization. And this is, I submit, much too high a price to pay to avoid the risk of offending people.

Relationship Styles and Promiscuity

Another subject that a lot of people seem to express quite strong and unnuanced positions on is that of sexual promiscuity and relationship styles. For example, some claim that strict monogamy is the only healthy and viable choice for everybody, while others seem to make more or less the same claim about polyamory: that most people would be happier if they were in loving, sexual relationships with more than one person, and that only our modern culture prevents us from realizing this. Similar opinions can be found on the subject of casual sex. Some say it is not a big deal, while others say it is — for everyone.

An essential thing to acknowledge on this subject, it seems, is the reality of individual differences. Most of these strong opinions seem to arise from the fallacious assumption that other people are significantly like ourselves — i.e. the typical mind fallacy. The truth is that some may well thrive best in monogamous relationships, while others may thrive best in polyamorous relationships; some may well thrive having casual sex, some may not. And in the absence of systematic studies, it is difficult to say how people are distributed in these respects — in terms of what circumstances people thrive best in — as well as how much this distribution can be influenced by culture.

None of this is to say that there is no such thing as human nature when it comes to sexuality, but merely that it should be considered an open question just what this nature is exactly, and how much plasticity and individual variation it entails. And we should all admit this much.

Politics and Making the World a Better Place

The subjects of politics and “how to make the world a better place” more generally are both subjects on which people tend to have strong convictions, limited nuance, and powerful incentives to signal group loyalty. Indeed, they are about as good examples as any of subjects where it is important to be charitable and actively seek out nuance, as well as to acknowledge one’s own biased nature.

A significant step we can take toward thinking more clearly about these matters is to adopt the aforementioned virtue of thinking in terms of continuous credences. Just as the expression of a “merely” high credence in the existence of the Christian God is more conducive to open-minded conversation, so is having a “merely” high credence in any given political ideology, principle, or policy likely more conducive to honest and constructive conversations and greater mutual updating.
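The habit of holding continuous credences can be made concrete with a toy calculation. The sketch below is my own illustration, not drawn from the text, and every number in it is invented purely for the example; it simply shows how a “merely” high credence moves up and down under Bayes’ rule as evidence arrives, rather than snapping to certainty.

```python
# A minimal, illustrative sketch of continuous credences (all numbers
# here are invented for the example, not taken from the essay).

def update_credence(prior, p_evidence_if_true, p_evidence_if_false):
    """Apply Bayes' rule: return the posterior credence in a claim
    after observing one piece of evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start with a "merely" high credence of 0.7 that some policy is beneficial.
credence = 0.7

# Evidence twice as likely if the claim is true nudges the credence up...
credence = update_credence(credence, 0.6, 0.3)   # ~0.82

# ...while contrary evidence nudges it back down, without ever reaching
# exactly 0 or 1.
credence = update_credence(credence, 0.2, 0.5)   # ~0.65
print(round(credence, 2))
```

The point is not these particular numbers, of course, but the shape of the process: updating moves a credence along a continuum, instead of flipping it between camp A and camp B.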

If nothing else, the fact that the world is so complex implies that we will at least have considerable uncertainty about what the consequences of our actions will be. In many cases, we simply cannot know with great certainty which policy or candidate is going to be best (relative to any set of plausible values) all things considered. This suggests that our strong convictions about how a given political candidate or policy is all bad, and about how immeasurably greater the alternatives would be, are likely often overstated. More generally, it implies that our estimates of which actions are best to take, in the realm of politics in particular as well as with respect to improving the world in general, should probably be more measured and humble than they tend to be.

For example: what is your credence that Donald Trump was a better choice (with respect to your core values) than Hillary Clinton for the US presidency in 2016? I suspect most people’s credence on this question is either much too low or much too high relative to what can be justified. For even if one thinks his influence is clearly positive or clearly negative in the short term, this still leaves open the question of what the long-term effects will be. If the short-term effects are negative, for instance, it does not seem entirely implausible that there will be a counter-reaction in the future whose effects will end up being better in the long term, or vice versa. This consideration alone should dampen one’s credence somewhat — away from the extremes and closer toward the middle. A similar argument could be made about grave atrocities and instances of extreme suffering occurring today and in the near future: although it seems unlikely, we cannot exclude that these may in fact lead to a future with fewer atrocities and less suffering in the long term. (Note, however, that none of this implies that one should not fight hard for what one believes to be the best thing; even if one has only, say, a 60 percent credence in some action being better than another, it can still make perfect sense to push very hard for this seemingly better option.)

Or, to take another concrete example: would granting everyone a universal basic income be better (relative to your values) than not doing so? Again, being absolutely certain in either a positive or a negative answer to this question is hardly defensible. It seems more reasonable to maintain a credence that lies somewhere in-between. (And as for what one’s underlying values are, I would argue that this is one of the very first things we need to reflect upon if we are to make a reasonable effort toward making the world a better place.)

A similar point can be made about existing laws and institutions. When we are young and radical, we have a tendency to find existing laws and social structures to be obviously stupid compared to the brilliant alternatives we ourselves envision. Yet, in reality, our knowledge of the roles played by these existing systems, as well as the consequences of our proposed alternatives, will tend to be quite limited in most cases. And it seems wise to admit this much, and to adjust our credences and plans of action accordingly.

A related pitfall worth avoiding is that of believing a single political candidate or policy to have purely good or purely bad effects; such an outcome seems extraordinarily unlikely. In the words of economist Thomas Sowell, there are no perfect solutions in the real world, only trade-offs. Similarly, it is also worth steering clear of the tendency to look to a single intellectual for the answers to all important questions. For the truth is that we all have blindspots and false beliefs, and virtually everyone is going to be ignorant of things that others would consider common knowledge. Indeed, no single person can read and reflect widely and deeply enough to be an expert on everything of importance. Expertise requires specialization, which means that we must look to different experts if we are to find expert views on a wide range of topics. In other words, the quest for a more complete and nuanced outlook requires us to engage with many different thinkers from very different disciplines.

The preceding notes about ways in which we could be more nuanced on various concrete topics are, of course, merely scratching the surface. Yet they hopefully do serve to establish the core point that nuance is essential if we are to gain a balanced understanding of virtually any complicated issue.

Can We Have Too Much Nuance?

In a piece that argues for the virtues of being nuanced, it seems worth asking whether I am being too one-sided. Might I not be overstating the case in its favor, and should I not be a bit more nuanced about the utility of nuance itself? Indeed, might we not be able to have too much nuance in some cases?

I would be the first to admit that we probably can have too much nuance in many cases. I will grant that in situations that call for quick action, and where there is not much time to build a nuanced perspective, it may well often be better to act on one’s limited understanding than to pursue a more nuanced, yet harder-won, picture. There are many situations like this, no doubt. Yet at the level of our public conversations, this is rarely the case. We usually do have time to build a more nuanced picture, and we are rarely required to act promptly. Indeed, we are rarely required to act at all. And, unthinkable as it may seem, it could just be that expressions of agnosticism, and perhaps no public expressions at all on a given hot topic, would tend to serve everyone better than expressions of poorly considered views.

One could perhaps attempt to make a case against nuance with reference to examples where near-equal weight is granted to all considerations and perspectives — reasonable and less reasonable ones alike. This, one may argue, is a bad thing, and surely demonstrates that there is such a thing as too much nuance. Yet while I would agree that weighing arguments so blindly and undiscerningly is unreasonable, I would not consider this an example of too much nuance as such. For being nuanced does not mean giving equal weight to all arguments a posteriori, after all the relevant arguments have been presented. Instead, what it requires is that we at least consider these relevant arguments, and that we strive to be minimally prejudiced toward them a priori. In other words, the quest for appropriately nuanced perspectives demands us to grant equality of opportunity to all arguments, not equality of outcome.

Another objection one may be tempted to raise against being nuanced and charitable is that it implies that we should be submissive and over-accommodating. This does not follow, however. For to say that we should be charitable is not to say that we cannot be firm in our convictions when such firmness is justified, much less that we should ever tolerate disrespect or unfair treatment; we should not. We have no obligation to tolerate bullies and intimidators, and if someone repeatedly fails to act in a respectful, good-faith manner, we have every right, and arguably even good reason, to remove ourselves from them. After all, the maxim “assume the other person is acting in good faith” does not entail that we should not update this assumption as soon as we encounter evidence that contradicts it. And to assert one’s boundaries and self-respect in light of such updating is perfectly consistent with a commitment to being charitable.

A more plausible critique of being nuanced is that it might in some cases be strategically unwise, and that one should instead advocate for one’s views in an unnuanced, polemic manner in order to better achieve one’s objectives. I think this is a decent point. Yet there are also good reasons to think that this will rarely be the optimal strategy when engaging in public conversations. First of all, we should acknowledge that, even if we were to grant that this style of communication is superior in a given situation, it still seems advantageous to possess a nuanced understanding of the counter-arguments. For, if nothing else, such an understanding would seem to make one better able to rebut these arguments, regardless of whether one then does so in a nuanced way or not.

And beyond this reason to acquire a nuanced understanding, there are also very good reasons to express such an understanding, as well as to treat the counter-arguments in as fair and measured a way as one can. One reason is the possibility that we might ourselves be wrong, which means that, if we want an honest conversation through which we can make our beliefs converge toward what is most reasonable, then we ourselves also have an interest in seeing the best and most unbiased arguments for and against different views. And hence we ourselves have an interest in moderating our own bravado and confirmation bias which actively keep us from evaluating our pre-existing beliefs as impartially as we should, as well as an interest in trying to express our own views in a measured and nuanced fashion.

Beyond that, there are also reasons to believe that people will be more receptive to one’s arguments if one communicates them in a way that demonstrates a sophisticated understanding of relevant counter-perspectives, and which lays out opposing views as strongly as possible. This will likely lead people to conclude that one’s perspective is at least built in the context of a sophisticated understanding, and it might thus plausibly be read as an honest signal that this perspective may be worth listening to.

Finally, one may object that some subjects just do not call for any nuance whatsoever. For example, should we be nuanced about the Holocaust? This is a reasonable point. Yet even here, I would argue that nuance is still important, in various ways. For one, if we do not have a sufficiently nuanced understanding of the Holocaust, we risk failing to learn from it. For example, simply to believe that the Germans were evil would appear to be the dangerous thing, as opposed to realizing that what happened was the result of primitive tendencies that we all share, as well as the result of a set of ideas that had a strong appeal to the German people for various reasons — reasons that we should seek to understand.

This is all descriptive, however, and so none of it implies taking a particularly nuanced stance on the ethical status of the Holocaust. Yet even in this respect, a fearless search for nuance and perspective can still be of great importance. In terms of the moral status of historical events, for instance, we should have enough perspective to realize that the Holocaust, although it was the greatest mass killing of humans in history, was by no means the only one; and hence that its ethical status is arguably not qualitatively unique compared to other similar events of the past. Beyond that, we should also admit that the Holocaust is not, sadly, the greatest atrocity imaginable, neither in terms of the number of victims it had, nor in terms of the horrors imposed on its victims. Greater atrocities than the Holocaust are imaginable. And we ought to both seriously contemplate whether such atrocities might indeed be actual, as well as to realize that there is a risk that atrocities that are much greater still may emerge in the future.


Almost everywhere one finds people discussing contentious issues, nuance and self-scrutiny seem to be in short supply. And yet the most essential point of this essay is not really one about looking outward and pointing fingers at others. Rather, the point is, first and foremost, that we all need to look into the mirror and ask ourselves some uncomfortable questions. Self-scrutiny can, after all, only be performed by ourselves.

“How might I be obstructing my own quest for truth?”

“How might my own impulse to signal group loyalty bias my views?”

“What beliefs of mine might mostly serve social rather than epistemic functions?”

Indeed, we all need to take a hard look in the mirror and let ourselves know that we are sure to be biased and wrong in many ways. And more than just realizing that we are wrong and biased, we also need to realize that we are limited creatures. Creatures who view the world from a limited vantage point from which we cannot fully integrate and comprehend all perspectives and modes of consciousness — least of all those we have never been close to experiencing.

We need to remind ourselves, continually and insistently, that we should be charitable and measured, and that we should seek out the grains of truth that may exist in different views so as to gain a more nuanced understanding that better reflects the true complexity of the world. Not least ought we remind ourselves that our brains evolved to express overconfident and unnuanced views for social reasons — especially in ways that favor our in-group and oppose our out-group. And we need to do a great deal of work to control for this. We should seek to scrutinize our in-group narrative, and be especially charitable to the out-group narrative.

None of us will ever be perfect in these regards, of course. Yet we can at least all strive to do better.

The (Non-)Problem of Induction

David Hume claimed that it is:

[…] impossible for us to satisfy ourselves by our reason, why we should extend that experience beyond those particular instances, which have fallen under our observation. We suppose, but are never able to prove, that there must be a resemblance betwixt those objects, of which we have had experience, and those which lie beyond the reach of our discovery.

And this then gives rise to the problem of induction: how can we defend assuming the so-called uniformity of nature that we take to exist when we generalize our limited experience to that which lies “beyond the reach of our discovery”? For instance, how can we justify our belief that the world of tomorrow will, at least in many ways, resemble the world of yesterday? Indeed, how can we justify believing that there will be a tomorrow at all?

A thing worth highlighting in response to this problem is that, even if we were to assume that we have no justification for believing in such uniformity of nature, this would not imply, as may perhaps seem natural to suppose, that we thereby have justification for believing the opposite: that there is no uniformity of nature. After all, to say that the patterns we have observed so far do not predict anything about states and events elsewhere would also amount to a claim about that which lies “beyond the reach of our discovery”, and so this claim seems to face the same problem.

The claims 1) “there is a certain uniformity of nature” and 2) “there is no uniformity of nature” are both hypotheses about the world. And if we look at the limited part of the world about which we do have some knowledge, it is clear that 1) is true about it: patterns at one point in (known parts of) time and space do indeed predict a lot about patterns observed elsewhere.

Does this then mean that the same will hold true of the part of the world that lies beyond the reach of our discovery? One can reasonably argue that we do not have complete certainty that it will (indeed, one can reasonably argue that we should not have complete certainty about any claim our fallible mind happens to entertain). Yet if we reason as scientists — probabilistically, endeavoring to build the picture of the world that seems most plausible in light of all the available evidence — then it does indeed seem justifiable to say that hypothesis 1) seems much more likely to be true of that which lies “beyond the reach of our discovery” than does hypothesis 2) [not least because to say that hypothesis 2) holds true would amount to assuming an extraordinary uniqueness of the observed compared to the unobserved, whereas believing hypothesis 1) merely amounts to not assuming such an extraordinary uniqueness].
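This probabilistic weighing of the two hypotheses can likewise be illustrated with a toy calculation; the sketch is my own, and the likelihoods in it are invented purely for illustration. If each observed instance of pattern-following is very likely under hypothesis 1 and very unlikely under hypothesis 2, then even a prior heavily favoring hypothesis 2 is quickly overwhelmed:

```python
# Toy comparison of the two hypotheses about uniformity (the probabilities
# below are invented for illustration, not derived from the essay).

def posterior_uniformity(prior, n_observations,
                         p_fit_if_uniform=0.99, p_fit_if_not=0.01):
    """Posterior credence in hypothesis 1 ("there is a certain uniformity
    of nature") after n observations that fit previously seen patterns."""
    like_uniform = p_fit_if_uniform ** n_observations
    like_no_uniform = p_fit_if_not ** n_observations
    weighted = like_uniform * prior
    return weighted / (weighted + like_no_uniform * (1 - prior))

# Even granting hypothesis 2 a 99.9 percent prior, a handful of
# pattern-conforming observations leaves hypothesis 1 overwhelmingly
# more plausible.
print(posterior_uniformity(prior=0.001, n_observations=5))
```

Again, the specific numbers carry no weight; the sketch only makes vivid the asymmetry noted above, namely that hypothesis 2 assigns near-zero probability to the very regularities we have in fact observed.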

And if we think in this way — in terms of competing hypotheses — then Hume’s problem of induction suddenly seems rather vacuous. “You cannot prove that any given hypothesis of this kind is correct.” This seems true (although the fact that we have not found such a proof yet does not imply that one cannot be found), but also quite irrelevant, since a deductive proof is not required in order for us to draw reasonable inferences. To say that we have no purely deductive argument for a given conclusion is not the same as saying that we have no justification for believing it (and if one thinks that it is, then one is also committed to the belief that we have no justification for believing, based on previous experience, that the problem of induction also exists in this very moment; more on this below).

Applying Hume’s Claim to Itself

According to Hume’s quote above, the belief that we can make generalizations based on particular instances can never be “satisfied by our reason”. The problem, however, is that, according to our modern understanding of the world in physical terms, all we ever can generalize from, including when we make deductive inferences, is particular instances — particular spatiotemporally located states and processes found in our brains (equivalently, one could also say that all we can ever generalize from, as knowing subjects, are particular states of our own minds).

Thus, Hume’s statement that we can never prove such generalizations must also apply to itself, as it is itself a general claim based on a particular instance of reasoning taking place in Hume’s head in a particular place and time (indeed, Hume’s claim would appear to pertain to all generalizations).

So what justification could Hume possibly provide for this general claim of his? According to the claim itself, no proof can be given for it. Indeed, if Hume could provide a proof for his claim that it is impossible to find a proof for the validity of generalizations based on particular instances, then he would have falsified his own claim, as such a proof is the very thing that the claim holds not to exist. And such an alleged proof would thereby also undermine itself, as what it supposedly shows is its own non-existence.

This demonstrates that Hume’s claim is unprovable. That is, based on this particular instance of reasoning, we can draw the general conclusion that we will never be able to provide a proof for Hume’s claim. And thereby we have in fact proven Hume’s claim wrong, as we have thus provided a proof for a general claim that also pertains to that which lies beyond the reach of our discovery. Nowhere, neither in the realm of the discovered nor the undiscovered, can a proof for Hume’s claim be found.

So we clearly can prove some general claims about that which lies beyond the reach of our experience based on particular instances (of processes in our brains, say), and hence the claim that we cannot is simply wrong.


Yet one may object that this conclusion does not contradict what Hume in fact meant when he claimed that we cannot prove the validity of generalizations based on particular instances, since what he meant was rather that we cannot prove the validity of inductive generalizations such as “we have observed X so far, hence X will also be the case in the next instance/in general” — i.e. generalizations whose generality seems impossible to prove.

The problem, however, is that we can also turn this claim on itself, and indeed turn the problem of induction altogether on itself, as we did in a parenthetical statement above: the mere fact that we have not been able to prove the validity of any inductive claims of this sort so far does not imply that such a proof can never be found. In particular, the claim that we cannot prove the validity of any such inductive claim that seems impossible to prove is itself an inductive claim whose generality seems impossible to prove (i.e. it seems to rest on the argument: “we have not been able to prove the validity of any inductive claim of this nature so far, and hence we cannot[/we will never be able to] prove the validity of such a claim”).

And if we accept that this claim, the very claim that gives rise to the problem of induction, is itself a plausible claim that we have good reason to accept in general (or at least just good reason to believe that it will apply in the next moment), then we indeed do believe that we can have good reason to draw (at least some plausible) non-deductive generalizations based on particular instances, which is the very thing Hume’s argument is often believed to cast doubt upon. In other words, in order to even believe that there is a problem of induction in the first place, one must already assume that which this problem is supposed to question and be a problem for.

Indeed, one can make an argument along these lines that it is in fact impossible to give a coherent argument against (the overwhelming plausibility of at least some degree of) the uniformity of nature. For in order to even state an argument or doubt against it, one is bound to rely thoroughly on the very thing one is trying to question. For instance, that words will still mean the same in the next moment as they did in the previous one; that the argument one thought of in the previous moment still applies in the next one; that the problem one was trying to address in the previous moment still exists in the next; etc.

Thus, it actually seems impossible to reasonably, indeed even coherently, doubt that the world has at least some degree of uniformity, which itself seems to constitute a good argument and reason for believing in such uniformity. After all, that something cannot reasonably be doubted, or indeed doubted at all, usually seems a more than satisfying standard for believing it.

So to reiterate: If one thinks we have good reason to take the problem of induction seriously, or indeed just to believe that this problem still exists in this moment (since it has in previous ones), then one also thinks that we do have good reason to make (at least some plausible) non-deductive generalizations about that which lies “beyond the reach of our discovery” based on particular instances. In other words, if one takes the problem of induction seriously, then one does not take the problem of induction seriously at all.


How to then draw the most plausible inferences about that which “lies beyond the reach of our discovery” is, of course, far from trivial. Yet we should be clear that this is a separate matter entirely from whether we can draw such plausible inferences at all. And as I have attempted to argue here, we have absolutely no reason to think that we cannot, and good reason to think that we can.

Induction Is All We Got

In this piece I shall defend what may appear an unusual thesis, namely that all reasoning is ultimately based on induction, and hence that induction is the only way in which we ever know anything. By induction, I here mean what seems right in light of the doubtable data/experience we have accumulated so far. In everything from logic and mathematics to philosophy and psychology, this is invariably how we evaluate what is true. Or so I shall argue.

How can we be sure that the patterns we have reliably observed in the world so far will also exist in other times or places? How can we justify the assumed uniformity of the world that induction seems to rest upon? How can we trust induction when it cannot be deductively justified? This is the problem of induction in a nutshell.

What is interesting, however, and seemingly universally missed, is that exactly the same problem is staring us in the face when it comes to deduction. Logical deductions are also part of the world, and hence to assume that they will be valid in all times and in all realms is also to assume that the world is uniform in certain ways. It is the exact same assumption, so why is it considered problematic in the case of induction but not in the case of deduction? What is the source of this discrimination?

The answer, I think, is that it just seems true that deduction is universal, and that the opposite claim — that logic is not universal — seems to make no sense. I certainly share this impression, but this does not render deduction wholly undoubtable. We may reasonably have confidence in the statement that logical deductions are universal, but we should be clear that the basis of this belief is itself merely that it seems reasonable to suppose this given that our minds apparently cannot make sense of anything else. More than that, we should also be clear that we then in fact do accept the uniformity of the world (or perhaps assign a high probability to this claim being true), and that we do it on the basis that it just seems reasonable.

Another aspect of the problem of induction is that induction merely is assumed to be valid, and that attempts at justifying it always seem circular. Yet again, how does deduction compare? How do we justify deduction? With deductive arguments? That would be circular as well. With brute assumptions? If so, why is it more problematic to assume the validity of induction?

There really is no fundamental distinction. We accept both induction and deduction because they seem right. Deductions seem obviously reasonable and valid while inductive inferences seem fairly reasonable and probably valid. The only difference, it seems, is the degree of obviousness, a difference I shall try to explain below.

Beliefs: All in Memory

One way to realize the conclusion sketched out above is by recalling the fact that all our beliefs reside in memory. And we know that 1) our memory consists of information we have gathered over time, and 2) our memories can be unreliable. There is nothing logically problematic about this; indeed, this is common knowledge. Yet it implies something rather significant, namely that all our beliefs, including those about logic, are doubtable, and that all our beliefs are a matter of what seems right in light of the doubtable data/experience we have accumulated so far.

This applies to all knowledge, whether inductively or deductively inferred (as we shall see, the latter is a subset of the former). Mathematical proofs, for instance, are often claimed to be certain knowledge, yet our knowledge of mathematical proofs is also contained in memory. And since all mathematical proofs we know of are stored in memory, and since memory is fallible, it follows that our belief in any mathematical proof we hold to be valid is, in fact, fallible.

The idea that mathematical knowledge is certain and rests only on deduction is indeed ridiculous. Take for instance the proof of Fermat’s Last Theorem: only a small fraction of professional mathematicians fully understand this proof, yet in my experience, virtually all mathematicians will say that we know that Fermat’s Last Theorem is true. And this is probably a highly reasonable belief, but let us be clear about how we know it: by trusting the expertise of other mathematicians. And such trust is transparently based on induction, not on deduction. More than that, we know, inductively, that this inductively based trust is fallible.

A famous example would be Alfred Kempe’s proof of the four-color theorem, presented in 1879, which was widely accepted until it was shown to be incorrect in 1890. Another example is Gauss’ proof of the fundamental theorem of algebra, a proof Gauss himself obviously held to be valid, as did many other mathematicians, yet it was not completed until more than a hundred years after Gauss first published it.

So our mathematical knowledge clearly relies strongly on induction, in that we trust others. Indeed, I would argue that, in practice, most of the mathematical knowledge any mathematician possesses rests on such trust in others rather than on their own deductions. Yet to think that we rely on induction merely when it comes to trusting others in the pursuit of what we call deductive knowledge is to miss the point. For the point is that this applies to all mathematical knowledge, including when we have made all the deductions ourselves. There is no fundamental distinction between the case where others have made the deductions and the case where we have made them ourselves. In both cases, we trust conclusions made by fallible minds, stored in a fallible memory.

This of course isn’t to say that such trust is unreasonable, yet the nature of this trust should not be missed: it rests on induction. There is no deductive argument that proves our memory to be reliable. Rather, we merely assume the reliability of memory, and 1) this is an assumption that we cannot not make, 2) it is an assumption that all deduction, indeed all knowledge in general, rests upon, and 3), to repeat the point made above, this assumption rests on induction.

Let me explain and justify all these claims in turn. To start with 3), to assume that our memories in this present moment are valid rests on the assumption that the information we have stored in memory earlier still applies. This projected extension of the limited information we know is the core of induction. As for 2), it is trivial that all knowledge, including that derived from deduction, rests on the reliability of memory, since that is where all our knowledge is stored. So to say that we know anything about anything is to assume the validity of our memory — or at least the validity of some aspects of it; more on this below. Lastly, 1), the assumption that we can trust our memory is an assumption we cannot not make because our memory is the position from which we see the world. To even doubt this assumption requires trusting it, since one must then at least trust that one doubts.

“Yet we know our memory to be profoundly unreliable, don’t we?”

Yes, but it is not entirely so, and that is the point. For in order to even discover that our memory is not (entirely) reliable, we must assume that at least some aspects of our memory are — at the very least those aspects of it that hint that our memory is not entirely reliable. In other words, the discovery of the imperfect reliability of memory rests on its partial reliability.

So believing that we cannot trust any aspect of our own memory is nothing less than logically impossible, since such a belief — indeed any belief — itself resides in memory, and thus rests on its (at least partial) reliability. And given this status of logical impossibility, the belief that we cannot trust any aspect of our memory must be considered false with at least the same certainty that we place in other logical conclusions. Indeed, if possible, it should be granted even higher status, since all other beliefs, including purely logical ones, rest upon its falsity, i.e. upon the premise that we can trust (at least some aspects of) our memory. That’s right: all deductive knowledge rests on the reliability of memory, and this reliability rests on the validity of induction [again, this was 3) above]. Conclusion: deductive knowledge rests on the validity of induction.

Indeed, the reason we trust deduction is ultimately inductive. For deductions are also, I would argue, experiments that we run in our heads, albeit experiments that reliably produce the same result. We therefore inductively conclude that they will keep on doing the same. What we usually consider matters of induction — for instance, we have observed a thousand white swans; should we expect the next swan to be white given all that we know about the world, including the fact that there are other birds who are not white? — is just different in that we are in a realm where our information seems a lot more incomplete. It is ultimately of the same form.

This also explains the difference in the status of certainty we ascribe to deduction and induction mentioned above: deduction seems obviously reasonable and valid because the experiment goes right every time, as far as we can tell, while (what we usually call) induction seems fairly reasonable and probably valid because it works well most of the time.

So the reason, I believe, that Hume found deduction more valid than induction, and found induction so much more problematic, was, ironically, because induction recommends the former more strongly. Hume’s objection to induction is really an adventure in self-contradiction — in many ways. For instance, the great man claimed, based on his own brain’s reasoning, that a universal rule cannot be derived from particular instances, yet what is this if not itself a universal rule derived from particular instances (of reasoning in his brain)? What is this if not a glaring self-contradiction?

Try as you might, in the realm of belief, there simply is no denying the validity of induction. Again, in order to even express doubts about the validity of induction, one must inescapably rest on what one is trying to doubt, as one then inductively assumes that doubt is a meaningful concept in this moment (it has been so far), that the others whom one expresses one’s doubts to will understand a word of what one says (they have so far), that there still is a problem of induction (it seems there has been so far), etc. Indeed, all beliefs rest on induction, as they rest on the assumption that the justification we have acquired for them in the past still applies in the present, including belief in notions of past, present, and future in the first place, not to mention (tacit) belief in there being such a thing as logic, truth, and falsehood — the ideas that constitute the entire framework in which discussions about induction occur.

“So what justifies induction, then?”

Nothing. In order to even enter the realm of trying to justify something, we have already accepted induction. In asking for a justification for induction, we ask from a position of unacknowledged acceptance of it. Indeed, what justifies the belief that there is a need to justify induction — a belief that itself rests on induction? Nothing. If we believe anything at all, we are already way past the point of accepting induction, knowingly or not. So to the extent we admit of having any beliefs at all, we admit of the validity of induction. We are fundamentally confused about where in our hierarchy of beliefs induction enters the picture. The answer is: underneath it all.

Knowing Good from Bad Induction

To say that reliance on induction is inevitable is obviously not to say that all inductive inferences are valid. So how do we know valid inductive inferences from invalid ones? Via induction, of course.

In a nutshell, we (ideally) assess the truth of a statement in light of all the information we have in our memory — the totality of what we know. This is all we have, and hence all we can ever evaluate truth claims against. The more the doubtable data points we have accumulated point our beliefs in a certain direction, the stronger those beliefs are, or at least should be.

For example, the claim that the sun will rise tomorrow is a claim that we believe because it fits with, indeed is predicted by, everything we know, from the totality of humanity’s knowledge of physics and astronomy to our everyday experience.

In the same way, we can deem inductive inferences false. For instance, the claim that the sun will always keep rising because it has done so thus far is obviously not true, and the way we know this is again via induction: we know of underlying physical principles that “govern” the physical macro patterns that are the dynamics of stars and planets, and these principles, along with astronomical observations of stars elsewhere, imply that the lifetime of our solar system is indeed finite. That is what all the data points to.

The commonly cited examples of “hard problems” for our (inevitably) inductive reasoning are all problems that arise from paying attention to too narrow a channel of information. For instance, when we say that every swan we have ever seen is white, and therefore all swans must be white, this is simply a bad inference that fails to keep other relevant facts in view, such as the size of our sample, the size of the Earth, and the fact that there are other birds that have a different color, a fact that is relevant when we keep in mind the additional fact that there is a high degree of similarity in patterns across species.

“But what if we did not know about these additional facts? Then the inference seems reasonable.”

First, it should be noted that if we were in that position, we would be ignorant to a degree that is hard for us to imagine as creatures who know a lot. Second, if we were in such a position of knowing virtually nothing, we should indeed be very careful about drawing general conclusions about the world with confidence. If you have seen a thousand swans, and they have all been white, it seems reasonable to expect that the next one you see will be white as well, but this by no means implies that all swans are white.
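The gap between expecting the next swan to be white and concluding that all swans are white can even be made quantitative. Below is a minimal sketch using Laplace’s rule of succession, assuming a uniform prior over the unknown proportion of white swans (the function names are mine, purely for illustration):

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Posterior probability that the next observation is a success,
    given `successes` out of `trials` so far, under a uniform prior
    over the unknown success rate (Laplace's rule of succession)."""
    return Fraction(successes + 1, trials + 2)

def prob_all_next(successes: int, trials: int, m: int) -> Fraction:
    """Posterior probability that ALL of the next m observations are
    successes, under the same uniform prior."""
    p = Fraction(1)
    for k in range(m):
        p *= Fraction(successes + 1 + k, trials + 2 + k)
    return p

# After 1,000 swans, all white: near-certainty about the *next* swan...
print(float(rule_of_succession(1000, 1000)))      # ~0.999
# ...yet "the next 100,000 swans are all white" is very unlikely on the
# same evidence, and "all swans are white" less likely still.
print(float(prob_all_next(1000, 1000, 100_000)))  # ~0.01
```

On this toy model, a long run of white swans strongly supports the next-instance prediction while leaving the universal claim highly doubtful, which is just the point made above.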

“But couldn’t our inductive reasoning be wrong, even when we know a lot and we consider the totality of what we know?”

This is possible, yet, as we know inductively, e.g. from statistics, the more we know, the less likely such mistakes are. It is also worth noting how we know of the possibility of the fallibility of inductive inferences in the first place, namely via induction. We know that apparently solid patterns can break because we have witnessed it before. Nations that seemed strong suddenly fell, people who were right about many things were suddenly wrong, proofs that seemed valid were shown not to be, etc. We have observed this meta pattern of patterns sometimes breaking when we don’t expect it, which has taught us, inductively, to be more open-minded about the possibility of the breaking of even apparently solid patterns. It is always induction that teaches us epistemic modesty.

So it is due to inductive reasoning, not in spite of it, that we seem to have some reason to be agnostic concerning the generality of patterns we consider general, such as whether the cosmos looks the same everywhere across time and space — a question that is currently debated among physicists and cosmologists. What we can say here seems much like what we could say as the ignorant swan observers we imagined ourselves to be above: it seems reasonable that the time and space in the proximity of that which we have observed to unfold in certain law-like ways will also unfold in such ways, but we cannot confidently claim that this applies to all time and space.

The Source of the Problem: A Narrow and Confused View of Knowledge

As mentioned above, a narrow focus on certain data and beliefs about the world, as opposed to a focus on the totality of what we know, is the source of many problems in epistemology, including Goodman’s new riddle of induction and the traditional problem of induction itself. In the case of Goodman’s new riddle of induction, the problem is, in a nutshell, that we have no reason to believe that properties such as grue and bleen exist in light of all that we know about physics (roughly, an object is “grue” if it is green when observed before some future time t and blue thereafter), as their existence would essentially require a change in the laws of physics that we have no reason to believe possible. So it is not the case that these two hypothetical properties constitute a deep problem for induction; the suggestion that things could be grue or bleen merely constitutes an extremely unlikely hypothesis about the world.

As for the problem of induction itself, a narrow focus is also to blame. Hume made the following claim: “That the sun will not rise tomorrow is no less intelligible a proposition, and implies no more contradiction, than the affirmation, that it will rise.” Yet that this proposition “implies no more contradiction” is simply wrong, since it contradicts pretty much everything we know in fields such as astronomy and physics. And if you can contradict all this, why not also contradict history and claim that there never was a guy named David Hume, and that nobody has ever raised any so-called problem of induction? After all, this is certainly “no less intelligible” or plausible than the claim that the sun will not rise tomorrow. Or to take a more traditional inductive problem: why believe that there is any problem of induction in this moment or the next one just because it seems that there has been in the past? Indeed, why not contradict logical conclusions themselves?

This is surely what Hume means: the claim that the sun will not rise tomorrow seems to imply no logical contradiction, yet this dichotomy between logical and physical knowledge is, I would argue, ultimately misguided. First, in ontological terms, there is no evidence for the existence of some separate logico-mathematical world apart from the physical one — mathematical truths are found in and by the human mind, and given that the human mind is physical, it follows that mathematical truths are found in and by the physical. Second, as mentioned above, in epistemological terms, both what we consider mathematical and physical forms of knowledge ultimately share the same inductive basis — they are stored in our memory based on what we have experienced — which is yet another reason not to strongly privilege one over another, as Hume does. In sum, there is no justification for Hume’s narrow focus on, and privileging of, deductive reasoning and knowledge — his belief that only (what we categorize as) logical truths are valid. Again, deductively based beliefs, like all other beliefs, also rest on induction in the first place.

How We Know Things: It Just Seems That Way

How do we know that we are conscious, or that two plus two equals four? The answer, I would argue, is simply that it appears clear from our experience that this is the case. We ultimately have no deeper justification than this.

And this answer actually does not change when we ask more complicated questions, such as how we know that the Earth is round, or what the name of the current president of the United States is. We know because of experiences that have shaped, and in significant ways are now part of, our present experience from which it just seems obvious what the answer is. We may be able to express a long chain of reasons that compel us to hold the belief we hold, yet at the bottom of this elaborate chain, all we ultimately have is a set of conscious impressions of belief. Or doubt, for that matter, if we don’t happen to know the answer, but the basic mechanics are the same: we weigh our experience and read off from it what our state of belief — or doubt — is; itself a fact about the world.

Every chain of explanations must end somewhere, and, when it comes to our knowledge, the rock bottom of this chain is found in our direct conscious sensations. Ultimately, we do not have a deeper justification for what we know than this: it seems that way from our conscious impressions. This form of foundationalism is, I submit, the solution to the so-called Münchhausen trilemma concerning how we justify what we know.

This is not to say that we cannot question and correct our impressions. We clearly can, as the correction of illusions and biases exemplify, yet our knowledge of such corrections is itself a matter of conscious impressions, for instance impressions that inform us about statistics, which help us correct wrong ones. The ultimate justification for our beliefs is still our experience. And this is indeed how we improve our knowledge of the world: new impressions help update and correct old ones, which in turn makes us form better ones, i.e. impressions that represent the world more accurately.

That our knowledge at bottom rests on experience is also not to say that our knowledge rests on a basis of mere assumptions. A good analogy, I believe, is our knowledge of fundamental physical constants, which are also in some sense primitive, in that they are measured rather than derived from something else. We have no deeper justification for believing what the values of these constants are than our measurements, yet this is clearly distinct from merely assuming these values. Similarly, I would argue that we observe — “measure”, if you will — the fact that we are conscious and that two plus two is four; we do not merely assume this (there is clearly a difference: to arbitrarily assume your friend is in the same room as you is quite distinct from seeing that your friend is in the same room as you). And as in the case of the measurement of fundamental physical constants, direct measurements in consciousness can of course be erroneous, yet when we consistently measure the same result time and time again by running the same experiment, we do seem reasonably justified — inductively, as always — in believing the validity of the measurement.

That our conscious impressions are what our beliefs ultimately rest upon may seem somewhat weak and unsatisfying, yet only if we fail to keep in mind that conscious impressions are in fact all we ever deal in when it comes to our knowledge. This includes the sense that conscious impressions constitute a poor foundation for knowledge: this sense is itself just another appearance in consciousness, resting on the exact foundation it purportedly doubts. And if a statement like “I believe this because it seems that way in light of what I experience” sounds like a weak foundation for knowledge, this, I believe, is mainly because we usually only use this kind of language when it comes to matters we are uncertain about, such as immediate unexamined impressions. In truth, however, this “it is what seems true in light of my experience” is in fact what we always do, regardless of our degree of certainty. One’s knowledge of textbook information is also “just” another conscious impression.

Phenomenological Positivism: Knowledge Built from a Phenomenological Palette

What we do when we model the world is to represent its features with the different colors of the palette of consciousness. Indeed, this is all we ever can do: consciousness is all we ever know, and hence its colors are all we ever can model and represent the world with at the level of our knowledge.

One can fairly consider this account of knowledge a positivist one, although one of a distinctly phenomenological and commonsensical sort. For given that consciousness is all we ever know, it is obvious that all facts we know are known via a composition of the various states of consciousness available to us, including the set of facts about the “external world” that can be detected and represented with our conscious minds (and things that fall outside of what we can detect with our conscious minds are obviously the things we cannot know).

So although science is often considered beyond unification, and although universal features shared by all sciences seem to have been deemed non-existent by common consensus, it remains trivially true, to me at least, that all forms of knowledge, whether we deem them “scientific” or not, are known in consciousness, and hence that all our knowledge is at least united by this common feature. In a nutshell, our knowledge of the world is a matter of phenomenological models that appear consistent with phenomenologically observed data. And, again, this “appearing consistent with” — or “seeming right” in light of — all the data is, as a matter of justification, ultimately all we have. This, I submit, applies not only to science in its usual narrow conception, but to reason in general. For instance, this is also how we (ideally) assess the plausibility of different views in, say, ethics and epistemology: by weighing the data, including arguments and counter-arguments, and assessing what seems reasonable in light of it all (and here it is worth being mindful of the fact that genes seem to play a significant role in what “seems reasonable”, also in the realm of ethics and politics, and hence to be intensely skeptical of the “immediate seemings” of one’s crude intuitions, and to probe them deeper).

In this way, this account of knowledge and reason actually breaks down the usual empiricism-rationalism dichotomy: all processes of thought and reasoning are also phenomenally observed sensations, and hence not something different from “observations.” They are indeed themselves impressions — more doubtable data — that influence our view and assessment of the world. Rationalism, as in logical reasoning, is just another mode of empiricism and experiment, one that has strengths and weaknesses like all other “experimental devices”.

It is worth noting that this account of our knowledge, and reason more generally, does not amount to mere Bayesianism in any usual sense. For while Bayesian updating surely shares this general feature of being a matter of updating and estimating degrees of certainty based on all available evidence, and while much of our own updating is overtly Bayesian — for instance, many of us have made updates in our views based on formal Bayesian calculations — there is much more to our knowledge and our updating of our beliefs than mere formal calculations with numerical probabilities. Not all available evidence is represented, or even representable, as numerical probabilities; for a person who does not know what it is like to experience, say, sounds and sights, no amount of formal Bayesian calculations is going to shed light on the matter. One must experience these things to know what they are like. Bayesian updating is merely the formal special case of the more general inductive method of estimating what seems right in light of the doubtable data/experience we have accumulated so far.
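For concreteness, the formal special case mentioned here, a single Bayesian update on one piece of evidence, can be sketched as follows (the numbers are purely illustrative and not drawn from the text):

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H | E) via Bayes' rule, computed from the prior P(H)
    and the likelihoods P(E | H) and P(E | not H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A hypothesis at 50% prior, updated on evidence that is nine times
# more likely if the hypothesis is true than if it is false:
posterior = bayes_update(0.5, 0.9, 0.1)
print(posterior)  # 0.9
```

Note that uninformative evidence (equal likelihoods) leaves the prior unchanged, as one would expect. The broader claim in the passage is that most of our everyday updating has this general shape without the explicit numbers ever being representable.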

Do We Have Faith in Induction/Science?

A notion one often hears from religious scholars is that faith in religious claims is no less reasonable than belief in the facts we know from the sciences, since the latter ultimately rest on faith as well: they rest on faith in reason. Yet is this true? In a nutshell, no.

Science is the process of learning about the world by observing it. Therefore, one could argue that science rests on the assumption that we can learn about the world by observing it, which is in fact functionally equivalent to the assumption that induction is valid, since learning about the world by observing it requires that patterns that existed in the past still exist today and in the future — the core of induction.

Yet one need not even make this assumption explicitly, since the assumption that we can learn about the world by observing it is one that we cannot not make. In order to even express the belief that one cannot learn from experience of the world, one has already learned from such experience, namely the experience of one’s own belief. (This inevitability makes it just like the assumption that at least some aspects of memory can be trusted, which is in fact also an equivalent proposition: that we can learn about the world by observing it requires that at least some aspects of our memory are reliable, and for our memory to contain reliable information about the world, it must be possible to learn about the world by observing it.)

Thus, we all implicitly “assume” that we can learn about the world by observing it, whether we are religious or not, and hence making this inescapable “assumption” cannot meaningfully be called a leap of faith. Rather, it is an inescapable fact (one that all other facts rest upon), as there is no intelligible alternative; indeed, the very possibility of intelligibility of any kind rests on learning from observation in the first place, as claims cannot be deemed (un)intelligible if they cannot be learned at all. This makes it wholly unlike actual leaps of faith, i.e. believing in things, such as supernatural events, without supporting evidence. The latter is by no means inescapable.

Indeed, claims about some things being a matter of faith only make sense in a context where we have already made “the leap of faith” of accepting that we can learn about the world by observing it, since whether a claim rests on faith is a matter of whether there exists evidence to support it. And all evaluations of evidence must take place in a realm where we have already assumed the relevance of evidence for propositions about the world — i.e. already made the inevitable “assumption” whose status was in question. In other words, in order to assess whether or not something is a matter of faith, we must “assume” the relevance of evidence in the first place; we must accept that we should go with what seems right in light of the doubtable data/experience we have accumulated so far.

One may object that science rests on much more specific assumptions than merely the possibility of learning about the world by observing it, yet, ideally, this should not be the case. For while it is true that specific methodologies have emerged in the sciences over time, the process of science most generally — that is, learning about the world by observing it — is not committed to any specific methodology in principle, which makes all specific methodologies open to revision. If certain methods are shown to be seriously flawed, as has happened before, these methods should be discarded or updated. And this is indeed how the methods we see employed in the sciences today have developed. Placebo-controlled studies and double-blind experiments were not assumed by faith to be the way to “do science” from the outset. Rather, these and other sensible methods of discovery were themselves discovered over years of trial and error.

Thus, what works best, both when it comes to theories and methods, is itself to be settled with observation and examination, not faith. Based on the fundamental principle of learning from observation, science continually refines its own method. In this way, the process of observing and learning about the world is a self-correcting and self-optimizing one.

Doubting the Apparently Undoubtable

As noted earlier, inductive reasoning has shown us that we have good reason to maintain humility about our beliefs. We know that our memory is fallible. As mentioned above, even mathematical proofs held to be valid by many have turned out to be wrong, and this risk of fallibility not only pertains to the logical deductions made by others, but also to those made by ourselves — the appearance that a logical deduction is valid can turn out to be wrong upon closer examination. It has happened before.

So it seems that we should maintain at least some degree of doubt even when it comes to logical deductions that we seem to have reason to be completely certain of, which is not to say that it is reasonable to have more than a negligible degree of such doubt in most cases.

Yet the above-mentioned doubts merely amount to epistemological doubts, doubts about whether our faculties of reasoning accurately track the deeper patterns of the world. We could also have doubts of a deeper ontological nature, namely about the stability of those patterns themselves. For instance, will the laws of physics as we know them apply tomorrow? What about logico-mathematical truths?

Do such questions even make sense? After all, don’t questions concerning what happens tomorrow, questions that rest on the concept of time, already presuppose some basic laws of physics, or at least some elements of the physical framework as we know it? And doesn’t the meaningfulness of doubts concerning whether our logical framework will apply at all tomorrow itself rest on the validity of that very framework, e.g. on things being either the case or not the case? After all, all talk of whether something applies or not — is true or not — already takes place in the realm of, and therefore presupposes the sensibility of, logical thought. So what does it even mean to say that this framework might no longer apply when the very coherence of “applying” rests on this framework? It seems self-refuting.

It does. Yet even so, we do seem to have reason to maintain at least some degree of humility about these propositions, one reason being the aforementioned “epistemological doubt” — we know our memory is not entirely reliable, and hence we should admit of the possibility that deductions of the sort made above have a small risk of being wrong. Indeed, this argument for the sensibility of (at least a small amount of) doubt seems to pertain to all arguments, including itself (and also the most undoubtable of ethical positions we may hold).

Second, certain drastic changes, such as changes in certain otherwise lawful physical patterns, do not seem inconceivable; indeed, some cosmological theories predict such changes. Therefore, the claim that at least some apparently solid facts about the world may suddenly change cannot be ruled out deductively, it seems. Might the very fabric of existence suddenly change in radically unexpected ways, thereby perhaps altering physical and mathematical truths as we know them? (Again, on a physicalist view of the world, physics and mathematics cannot be separated, which means that what we may call the uniformity of mathematics depends on at least some degree of uniformity of [what we consider] physics). It seems extremely unlikely, but we cannot exclude it with total certainty.

Lastly, it also seems conceivable that we could have new experiences — on a sufficiently exotic drug, for instance — that would suddenly make the so far inconceivable seem conceivable, and thereby make apparently valid deductions and brute facts appear invalid and untrue. Again, the only justification we have for believing what we believe is, ultimately, that “it just seems true.” And while it may be inconceivable to imagine, say, that mathematical truths could suddenly change, it does not, strangely enough, seem inconceivable that such an apparently inconceivable claim could seem conceivable in a radically different state of mind. And if it can seem right in another state of mind, how can we maintain absolute certainty that that state of mind is more wrong than our own present one is? It seems we can’t.

In sum, it seems that even when it comes to the most outrageous of claims, claims we cannot even make any sense of, some small degree of uncertainty about their status still seems in place, although the appropriate degree may be very small indeed. Everything can reasonably be doubted to some degree. Or so it seems.

[A small side note: In terms of practical implications, this small window of doubt might help one soften up painful certainties, such as certainty in fatalism. For while it might be tempting to some to think about the world as being an unalterable multi-dimensional structure that we cannot change in any strong sense, one must admit that this view could in fact be wrong, and hence that trying to change the world for the better indeed might have some chance of making a difference even in a very strong sense. Either way, it seems like one does not lose anything by trying one’s best.]

Inconsistent Skepticism

Our conscious experience seems to represent a world “out there” that is independent of our own minds. But how do we know this representation is at all accurate? How do we know the truth is not rather some well-known skeptical conjecture — for instance, that our experience is all a dream or a computer simulation?

I think there is a lot to be said against skepticism of this sort, the most important point being that it is inconsistent. Knowledge of dreams and simulations is itself found in our experience, and hence to consistently doubt the validity of our experience requires us to doubt the validity — i.e. the meaningfulness and sensibility — of these notions themselves. Yet when we entertain skepticism of this sort, these notions are somehow exempt from skepticism. They stand beyond scrutiny, while virtually all other appearances we know of, and all other beliefs we hold, do not.

What can justify such inconsistent skepticism? Nothing, as far as I can see, especially given that claims of the sort that all we experience could be a dream or a computer simulation seem extremely dubious to say the least. Take the claim that our entire experience is a dream. Does anything we know of actually suggest this in the slightest? Not to my knowledge. The state of our consciousness in our dreams is radically different from our waking state. Indeed, within a dream it is even possible to realize that one is dreaming, and to explore one’s consciousness in that state, as many of us have tried; something similar never happens in our waking state. The only thing that remotely hints that our experience could be a dream is an argument from analogy: Given that our experiences in dreams can seem to convincingly represent the world, yet still turn out to be mere dreams, could our waking state that seems to convincingly represent the world not be a mere dream too?

If dreams were anything like our waking state, this would indeed seem reasonable. Yet the truth is that they are not.

This fact — that the appearances differ — may seem to say precious little, yet only if we miss the significance of differences in appearances. By analogy, imagine that you are on holiday in Istanbul. You remember planning the journey, traveling there, and being there for the past five days, and presently you are looking at the Sultan Ahmed Mosque while feeling the unbearable summer heat. Now, how do you know that you are not, in fact, in Oslo? Well, just about every single appearance in your consciousness suggests that you are not, and hence you are not in much doubt. And reasonably so.

Yet is this really analogous to the difference in appearance between our dreaming and waking state? Not quite. I would argue that this analogy in fact fails to do justice to the actual difference between our waking and dreaming state, a difference that is far greater than the difference between a waking experience of Istanbul and one of Oslo. Hence, I would argue that we have no more reason to suspect that our present experience is a dream than we have reason to suspect that we, say, live in a completely different city than we thought. Yes, the world, including the basis of our experience, may well turn out to be very different from what we expect in many ways. Yet the specific claim that our experience of the world is a dream — something that takes place in the brain of a sleeping person — is, I would argue, extraordinarily implausible in light of all that we know, especially the enormous difference between the character of our waking and dreaming state.

Even stronger skepticism seems justified in the case of the claim that all we experience is a computer simulation, one reason being that we simply have no evidence that computer simulations can mediate conscious minds like our own in the first place — at least no more evidence than we have for believing that, say, tomatoes can (indeed, tomatoes are in many ways far more similar to human brains in physical terms than computers are). Another good reason to be intensely skeptical is that so-called ancestor simulations are in fact impossible.

A similar degree of skepticism seems apt in the case of the claim that all we experience is the result of a brain in a vat. According to what we know from fields such as physics, chemistry and biology, there is, as Daniel Dennett shows in Consciousness Explained, no way to produce an experience like ours by stimulating a brain in a vat. And if we dismiss such knowledge, we might as well dismiss our belief in the existence of brains in the first place — itself a belief about physics and biology that we do not seem justified in granting a more privileged status than we do other solid facts found in the canons of physics and biology.

And since we are dealing with various skeptical hypotheses, it seems worth pointing out that skepticism about the existence of other minds is on no firmer ground, as it has the exact same epistemic status as doubting the existence of brains does. The existence of brains is only known through our own conscious experience, an experience that, according to what is known in that experience itself, is mediated by a physical brain. Based on this, we draw an inferential arrow that connects our experience to physical brains. We go from experience to physical brain. Therefore, drawing an arrow from brain to experience — whether one’s own or that of others — which is really just to draw the exact same arrow in the opposite direction, is no more problematic. In conclusion, doubting the existence of other minds is really no more reasonable than doubting the existence of one’s own brain.

One may argue that there is a difference when we are talking about brains different from our own, yet one could say the same about one’s future or past brain, which is also different from one’s present one. If one believes that one’s own future brain will be conscious — a brain that is similar to, yet still different from, one’s present one — then how can one maintain that the brains of other beings, which are also similar to, yet different from, one’s present brain, are not conscious as well? Similarly, if one believes that one’s ever-changing brain has mediated conscious states in the past, why should the different brain states of others not mediate consciousness as well? To believe they do not is simply inconsistent.

The problem with skeptical conjectures such as the dream and the simulation hypothesis is, again, that they hold virtually all the appearances we know from our experience to be false, while exempting one particular appearance from such doubt: the apparent possibility that the basis of our experience is something radically different from what we thought, yet still something we know of from our experience, such as a dream or a simulation (and this despite an absence of good reasons for believing in such possibilities in the first place). In other words, these conjectures rest on arbitrarily constrained skepticism.

More than that, these skeptical hypotheses also seem to undermine themselves. For if we accept the premise that our experience indeed is a simulation or a dream, what reason do we have for believing that the worldview we are able to draw from it, including any conclusion about dreams and simulations, has any validity beyond our own simulation or dream? If we are living in a dream or a simulation, it seems that what we think we can say with any certainty about the world, including about dreams and simulations, is likely to be wrong to an unimaginable degree, since it is all based on pure dream or simulation itself. Thus, accepting any of these conjectures seems to force us to doubt them strongly, even to make it difficult to make sense of them. And being self-undermining is not a virtue in a conjecture.

Again, what we do when we assess the truth of a proposition is, ideally, to judge its plausibility in light of the totality of what we know. And this is exactly what we fail to do when we deem skeptical conjectures of this sort likely. We go with peculiar arguments, propositions, and concepts, and then doubt everything else, ignoring that the meaning, even the coherence, of these arguments and concepts rests, in subtle and not so subtle ways, on all this other knowledge that they supposedly imply we should doubt. In this way, these conjectures inadvertently destroy their own foundations.

Keeping the totality of our knowledge in view and applying our skepticism consistently leads us, I maintain, to a relatively common-sense view of the world, at least when it comes to the basic nature of the basis of our experience. What we know about the world suggests that our experience is mediated by a biological brain just as strongly as our experience suggests that the Earth is round; nothing really suggests otherwise. In my view, we have no good reason to believe that what we experience is, or even could be, a dream or a simulation, while a very great deal — including consistent thinking based on what we know — strongly suggests it is not.


This post was originally published at my old blog:
