Research vs. non-research work to improve the world: In defense of more research and reflection

When trying to improve the world, we can either pursue direct interventions, such as directly helping beings in need and doing activism on their behalf, or we can pursue research on how we can best improve the world, as well as on what improving the world even means in the first place.

Of course, the distinction between direct work and research is not a sharp one. We can, after all, learn a lot about the “how” question by pursuing direct interventions, testing out what works and what does not. Conversely, research publications can effectively function as activism, and may thereby help bring about certain outcomes quite directly, even when such publications do not deliberately try to do either.

But despite these complications, we can still meaningfully distinguish more or less research-oriented efforts to improve the world. My aim here is to defend more research-oriented efforts, and to highlight certain factors that may lead us to underinvest in research and reflection. (Note that I here use the term “research” to cover more than just original research, as it also covers efforts to learn about existing research.)


Contents

  1. Some examples
    1. I. Cause Prioritization
    2. II. Effective Interventions
    3. III. Core Values
  2. The steelman case for “doing”
    1. We can learn a lot by acting
    2. Direct action can motivate people to keep working to improve the world
    3. There are obvious problems in the world that are clearly worth addressing
    4. Certain biases plausibly prevent us from pursuing direct action
  3. The case for (more) research
    1. We can learn a lot by acting — but we are arguably most limited by research insights
      1. Objections: What about “long reflection” and the division of labor?
    2. Direct action can motivate people — but so can (the importance of) research
    3. There are obvious problems in the world that are clearly worth addressing — but research is needed to best prioritize and address them
    4. Certain biases plausibly prevent us from pursuing direct action — but there are also biases pushing us toward too much or premature action
  4. The Big Neglected Question
  5. Conclusion
  6. Acknowledgments

Some examples

Perhaps the best way to give a sense of what I am talking about is by providing a few examples.

I. Cause Prioritization

Say our aim is to reduce suffering. Which concrete aims should we then pursue? Maybe our first inclination is to work to reduce human poverty. But when confronted with the horrors of factory farming, and the much larger number of non-human animals compared to humans, we may conclude that factory farming seems the more pressing issue. However, having turned our gaze to non-human animals, we may soon realize that the scale of factory farming is small compared to the scale of wild-animal suffering, which might in turn be small compared to the potentially astronomical scale of future moral catastrophes.

With so many possible causes one could pursue, it is likely suboptimal to settle on the first one that comes to mind, or to settle on any one of them without having made a significant effort to consider where one can make the greatest difference.

II. Effective Interventions

Next, say we have settled on a specific cause, such as ending factory farming. Given this aim, there is a vast range of direct interventions one could pursue, including various forms of activism, lobbying to influence legislation, or working to develop novel foods that can outcompete animal products. Yet it is likely suboptimal to pursue any of these particular interventions without first trying to figure out which of them have the best expected impact. After all, different interventions may differ greatly in terms of their cost-effectiveness, which suggests that it is reasonable to make significant investments into figuring out which interventions are best, rather than to rush into action mode (although the drive to do the latter is understandable and intuitive, given the urgency of the problem).

III. Core Values

Most fundamentally, there is the question of what matters and what is most worth prioritizing at the level of core values. Our values ultimately determine our priorities, which renders clarification of our values a uniquely important and foundational step in any systematic endeavor to improve the world.

For example, is our aim to maximize a net sum of “happiness minus suffering”, or is our aim chiefly to minimize extreme suffering? While there is significant common ground between these respective aims, there are also significant divergences between them, which can matter greatly for our priorities. The first view implies that it would be a net benefit to create a future that contains vast amounts of extreme suffering, provided that this future also contains enough happiness to outweigh that suffering, while the second view would recommend the path of least extreme suffering.

In the absence of serious reflection on our values, there is a high risk that our efforts to improve the world will not only be suboptimal, but even positively harmful relative to the aims that we would endorse most strongly upon reflection. Yet efforts to clarify values are nonetheless extremely neglected — and often completely absent — in endeavors to improve the world.

The steelman case for “doing”

Before making a case for a greater focus on research, it is worth outlining some of the strongest reasons in favor of direct action (e.g. directly helping other beings and doing activism on their behalf).

We can learn a lot by acting

  • The pursuit of direct interventions is a great way to learn important lessons that may be difficult to learn by doing pure research or reflection.
  • In particular, direct action may give us practical insights that are often more in touch with reality than are the purely theoretical notions that we might come up with in intellectual isolation. And practical insights and skills often cannot be compensated for by purely intellectual insights.
  • Direct action often has clearer feedback loops, and may therefore provide a good opportunity to both develop and display useful skills.

Direct action can motivate people to keep working to improve the world

  • Research and reflection can be difficult, and it is often hard to tell whether one has made significant progress. In contrast, direct action may offer a clearer indication that one is really doing something to improve the world, and it can be easier to see when one is making progress (e.g. whether people altered their behavior in response to a given intervention, or whether a certain piece of legislation changed or not).

There are obvious problems in the world that are clearly worth addressing

  • For example, we do not need to do more research to know that factory farming is bad, and it seems reasonable to think that evidence-based interventions that significantly reduce the number of beings who suffer on factory farms will be net beneficial.
  • Likewise, it is probably beneficial to build a healthy movement of people who aim to help others in effective ways, and who reflect on and discuss what “helping others” ideally entails.

Certain biases plausibly prevent us from pursuing direct action

  • It seems likely that we have a passivity bias of sorts. After all, it is often convenient to stay in one’s intellectual armchair rather than to get one’s hands dirty with direct work that may fall outside of one’s comfort zone, such as doing street advocacy or running a political campaign.
  • There might also be an omission bias at work, whereby we judge an omission to do direct work that prevents harm less harshly than an equivalent commission of harm.

The case for (more) research

I endorse all the arguments outlined above in favor of “doing”. In particular, I think they are good arguments in favor of maintaining a strong element of direct action in our efforts to improve the world. Yet they are less compelling when it comes to establishing the stronger claim that we should focus more on direct action (on the current margin), or that direct action should represent the majority of our altruistic efforts at this point in time. I do not think any of those claims follow from the arguments above.

In general, it seems to me that altruistic endeavors tend to focus far too strongly on direct action while focusing far too little on research. This is hardly a controversial claim, at least not among aspiring effective altruists, who often point out that research on cause prioritization and on the cost-effectiveness of different interventions is important and neglected. Yet it seems to me that even effective altruists tend to underinvest in research, and to jump the gun when it comes to cause selection and direct action, and especially when it comes to the values they choose to steer by.

A helpful starting point might be to sketch out some responses to the arguments outlined in the previous section, to note why those arguments need not undermine a case for more research.

We can learn a lot by acting — but we are arguably most limited by research insights

The fact that we can learn a lot by acting, and that practical insights and skills often cannot be substituted by pure conceptual knowledge, does not rule out that our potential for beneficial impact might generally be most bottlenecked by conceptual insights.

In particular, clarifying our core values and exploring the best causes and interventions arguably represent the most foundational steps in our endeavors to improve the world, suggesting that they should — at least at the earliest stages of our altruistic endeavors — be given primary importance relative to direct action (even as direct action and the development of practical skills also deserve significant priority, perhaps even more than 20 percent of the collective resources we spend at this point in time).

The case for prioritizing direct action would be more compelling if we had a lot of research that delivered clear recommendations for direct action. But I think there is generally a glaring shortage of such research. Moreover, research on cause prioritization often reveals plausible ways in which direct altruistic actions that seem good at first sight may actually be harmful. Such potential downsides of seemingly good actions constitute a strong and neglected reason to prioritize research more — not to get perpetually stuck in research, but to at least map out the main considerations for and against various actions.

To be more specific, it seems to me that the expected value of our actions can change a lot depending on how deep our network of crucial considerations goes, so much so that adding an extra layer of crucial considerations can flip the expected value of our actions. Inconvenient as it may be, this means that our views on what constitutes the best direct actions have a high risk of being unreliable as long as we have not explored crucial considerations in depth. (Such a risk always exists, of course, yet it seems that it can at least be markedly reduced, and that our estimates can become significantly better informed even with relatively modest research efforts.)

At the level of an individual altruist’s career, it seems warranted to spend at least one year reading about and reflecting on fundamental values, one year learning about the most important cause areas, and one year learning about optimal interventions within those cause areas (ideally in that order, although one may fruitfully explore them in parallel to some extent; and such a full year’s worth of full-time exploration could, of course, be conducted over several years). In an altruistic career spanning 40 years, this would still amount to less than ten percent of one’s work time focused on such basic exploration, and less than three percent focused on exploring values in particular.

A similar argument can be made at a collective level: if we are aiming to have a beneficial influence on the long-term future — say, the next million years — it seems warranted to spend at least a few years focused primarily on what a beneficial influence would entail (i.e. clarifying our views on normative ethics), as well as researching how we can best influence the long-term future before we proceed to spend most of our resources on direct action. And it may be even better to try to encourage more people to pursue such research, ideally creating an entire research project in which a large number of people collaborate to address these questions.

Thus, even if it is ideal to mostly focus on direct action over the entire span of humanity’s future, it seems plausible that we should focus most strongly on advancing research at this point, where relatively little research has been done, and where the explore-exploit tradeoff is likely to favor exploration quite strongly.

Objections: What about “long reflection” and the division of labor?

An objection to this line of reasoning is that heavy investment into reflection is premature, and that our main priority at this point should instead be to secure a condition of “long reflection” — a long period of time in which humanity focuses on reflection rather than action.

Yet this argument is problematic for a number of reasons. First, there are strong reasons to doubt that a condition of long reflection is feasible or even desirable, given that it would seem to require strong limits to voluntary actions that diverge from the ideal of reflection.

To think that we can choose to create a condition of long reflection may be an instance of the illusion of control. Human civilization is likely to develop according to its immediate interests, and seems unlikely to ever be steered via a common process of reflection. And even if we were to secure a condition of long reflection, there is no guarantee that humanity would ultimately be able to reach a sufficient level of agreement regarding the right path forward — after all, it is conceivable that a long reflection could go awfully wrong, and that bad values could win out due to poor execution or malevolent agents hijacking the process.

The limited feasibility of a long reflection suggests that there is no substitute for reflecting now. Failing to clarify and act on our values from this point onward carries a serious risk of pursuing a suboptimal path that we may not be able to reverse later. The resources we spend pursuing a long reflection (which is unlikely to ever occur) are resources not spent on addressing issues that might be more important and more time-sensitive, such as steering away from worst-case outcomes.

Another objection might be that there is a division-of-labor case for having only some people focus on research, while others, perhaps even most, focus comparatively little on it. Yet while it seems trivially true that some people should focus more on research than others, this is not necessarily much of a reason against devoting more of our collective attention toward research (on the current margin), nor a reason against each altruist making a significant effort to read up on existing research.

After all, even if only a limited number of altruists should focus primarily on research, it still seems necessary that those who aim to put cutting-edge research into practice also spend time reading that research, which requires a considerable time investment. Indeed, even when one chooses to mostly defer to the judgments of other people, one will still need to make an effort to evaluate which people are most worth deferring to on different issues, followed by an effort to adequately understand what those people’s views and findings entail.

This point also applies to research on values in particular. That is, even if one prioritizes direct action over research on fundamental values, it still seems necessary to spend a significant amount of time reading up on other people’s work on fundamental values if one is to be able to make at least a somewhat qualified judgment regarding which values one will attempt to steer by.

The division of altruistic labor is thus consistent with the recommendation that every dedicated altruist should spend at least a full year reading about and reflecting on fundamental values (just as the division of “ordinary” labor is consistent with everyone spending a certain amount of time on basic education). And one can further argue that the division of altruistic labor, and specialized work on fundamental values in particular, is only fully utilized if most people spend a decent amount of time reading up on and making use of the insights provided by others.

Direct action can motivate people — but so can (the importance of) research

Research work is often challenging, and it can be hard to stay motivated to pursue it. Yet it is probably a mistake to view our motivation to do research as something fixed. There are likely many ways to increase our motivation to pursue research, not least by strongly internalizing the (highly counterintuitive) importance of research.

Moreover, the motivating force provided by direct action might be largely maintained as long as one includes a strong component of direct action in one’s altruistic work (by devoting, say, 25 percent of one’s resources toward direct action).

In any case, reduced individual motivation to pursue research seems unlikely to be a strong reason against giving greater priority to research at the level of collective resources and priorities (even if it might play a significant role in many individual cases). This is partly because the average motivation to pursue these respective endeavors seems unlikely to differ greatly — after all, many people will be more motivated to pursue research than direct action — and partly because urgent necessities are worth prioritizing and paying for even if they happen to be less than highly motivating.

By analogy, the cleaning of public toilets is also worth prioritizing and paying for, even if it may not be the most motivating pursuit for those who do it, and the same point arguably applies even more strongly in the case of the most important tasks necessary for achieving altruistic aims such as reducing extreme suffering. Moreover, the fact that altruistic research may be unusually taxing on our motivation (e.g. due to a feeling of “analysis paralysis”) is actually a reason to think that such taxing research is generally neglected and hence worth pursuing on the margin.

Finally, to the extent one finds direct action more motivating than research, this might constitute a bias in one’s prioritization efforts, even if it represents a relevant data point about one’s personal fit and comparative advantage. And the same point applies in the opposite direction: to the extent that one finds research more motivating, this might make one more biased against the importance of direct action. While personal motivation is an important factor to consider, it is still worth being mindful of the tendency to overprioritize that which we consider fun and inspiring at the expense of that which is most important in impartial terms.

There are obvious problems in the world that are clearly worth addressing — but research is needed to best prioritize and address them

Knowing that there are serious problems in the world, as well as interventions that reduce those problems, does not in itself inform us about which problems are most pressing or which interventions are most effective at addressing them. Both of these aspects — roughly, cause prioritization and estimating the effectiveness of interventions — seem best advanced by research.

A similar point applies to our core values: we cannot meaningfully pursue cause prioritization and evaluations of interventions without first having a reasonably clear view of what matters, and what would constitute a better or worse world. And clarifying our values is arguably also best done through further research rather than through direct action (even as the latter may be helpful as well).

Certain biases plausibly prevent us from pursuing direct action — but there are also biases pushing us toward too much or premature action

The putative “passivity bias” outlined above has a counterpart in the “action bias”, also known as “bias for action” — a tendency toward action even when action makes no difference or is positively harmful. A potential reason behind the action bias relates to signaling: actively doing something provides a clear signal that we are at least making an effort, and hence that we care (even if the effect might ultimately be harmful). By comparison, doing nothing might be interpreted as a sign that we do not care.

There might also be individual psychological benefits explaining the action bias, such as the satisfaction of feeling that one is “really doing something”, as well as a greater feeling of being in control. In contrast, pursuing research on difficult questions can feel unsatisfying, since progress may be relatively slow, and one may not intuitively feel like one is “really doing something”, even if learning additional research insights is in fact the best thing one can do.

Political philosopher Michael Huemer similarly argues that there is a harmful tendency toward too much action in politics. Since most people are uninformed about politics, Huemer argues that most people ought to be passive in politics, as there is otherwise a high risk that they will make things worse through ignorant choices.

Whatever one thinks of the merits of Huemer’s argument in the political context, I think one should not be too quick to dismiss a similar argument when it comes to improving the long-term future — especially considering that action bias seems to be greater when we face increased uncertainty. At the very least, it seems worth endorsing a modified version of the argument that says that we should not be eager to act before we have considered our options carefully.

Furthermore, the fact that we evolved in a condition that was highly action-oriented rather than reflection-oriented, and in which action generally had far more value for our genetic fitness than did systematic research (indeed, the latter was hardly even possible), likewise suggests that we may be inclined to underemphasize research relative to how important it is for optimal impact from an impartial perspective.

This also seems true when it comes to our altruistic drives and behaviors in particular, where we have strong inclinations toward pursuing publicly visible actions that make us appear good and helpful (Hanson, 2015; Simler & Hanson, 2018, ch. 12). In contrast, we seem to have much less of an inclination toward reflecting on our values. Indeed, it seems plausible that we generally have an inclination against questioning our instinctive aims and drives — including our drive to signal altruistic intentions with highly visible actions — as well as an inclination against questioning the values held by our peers. After all, such questioning would likely have been evolutionarily costly in the past, and may still feel socially costly today.

Moreover, it is very unnatural for us to be as agnostic and open-minded as we should ideally be in the face of the massive uncertainty associated with endeavors that seek to have the best impact for all sentient beings (see also Vinding, 2020, 9.1-9.2). This suggests that we may be too confident — and too quick to conclude — that some particular direct action happens to be the optimal path for helping others.

Lastly, while some kind of omission bias plausibly causes us to discount the value of making an active effort to help others, it is not clear whether this bias counts more strongly against direct action than against research efforts aimed at helping others, since omission bias likely works against both types of action (relative to doing nothing). In fact, the omission bias might count more strongly against research, since a failure to do important research may feel like less of a harmful inaction than does a failure to pursue direct actions, whose connection to addressing urgent needs is usually much clearer.

The Big Neglected Question

There is one question that I consider particularly neglected among aspiring altruists — as though it occupies a uniquely impenetrable blindspot. I am tempted to call it “The Big Neglected Question”.

The question, in short, is whether anything can ethically outweigh or compensate for extreme suffering. Our answer to this question has profound implications for our priorities. And yet astonishingly few people seem to seriously ponder it, even among dedicated altruists. In my view, reflecting on this question is among the first, most critical steps in any systematic endeavor to improve the world. (I suspect that a key reason this question tends to be shunned is that it seems too dark, and because people may intuitively feel that it fundamentally questions all positive and meaning-giving aspects of life — although it arguably does not, as even a negative answer to the question above is compatible with personal fulfillment and positive roles and lives.)

More generally, as hinted earlier, it seems to me that reflection on fundamental values is extremely neglected among altruists. Ozzie Gooen argues that many large-scale altruistic projects are pursued without any serious exploration as to whether the projects in question are even a good way to achieve the ultimate (stated) aims of these projects, despite this seeming like a critical first question to ponder.

I would make a similar argument, only one level further down: just as it is worth exploring whether a given project is among the best ways to achieve a given aim before one pursues that project, so it is worth exploring which aims are most worth striving for in the first place. This, it seems to me, is even more neglected than is exploring whether our pet projects represent the best way to achieve our (provisional) aims. There is often a disproportionate focus on impact, and comparatively little focus on what the most plausible aim of that impact would be.

Conclusion

In closing, I should again stress that my argument is not that we should only do research and never act — that would clearly be a failure mode, and one that we must also be keen to steer clear of. But my point is that there are good reasons to think that it would be helpful to devote more attention to research in our efforts to improve the world, both on moral and empirical issues — especially at this early point in time.


Acknowledgments

For helpful comments, I thank Teo Ajantaival, Tobias Baumann, and Winston Oswald-Drummond.

Suffering-Focused Ethics: Defense and Implications

The reduction of suffering deserves special priority. Many ethical views support this claim, yet so far these have not been presented in a single place. Suffering-Focused Ethics provides the most comprehensive presentation of suffering-focused arguments and views to date, including a moral realist case for minimizing extreme suffering. The book then explores the all-important issue of how we can best reduce suffering in practice, and outlines a coherent and pragmatic path forward.



“An inspiring book on the world’s most important issue. Magnus Vinding makes a compelling case for suffering-focused ethics. Highly recommended.”
— David Pearce, author of The Hedonistic Imperative and Can Biotechnology Abolish Suffering?

“We live in a haze, oblivious to the tremendous moral reality around us. I know of no philosopher who makes the case more resoundingly than Magnus Vinding. In radiantly clear and honest prose, he demonstrates the overwhelming ethical priority of preventing suffering. Among the book’s many powerful arguments, I would call attention to its examination of the overlapping biases that perpetuate moral unawareness. Suffering-Focused Ethics will change its readers, opening new moral and intellectual vistas. This could be the most important book you will ever read.”
— Jamie Mayerfeld, professor of political science at the University of Washington, author of Suffering and Moral Responsibility and The Promise of Human Rights

“In this important undertaking, Magnus Vinding methodically and convincingly argues for the overwhelming ethical importance of preventing and reducing suffering, especially of the most intense kind, and also shows the compatibility of this view with various mainstream ethical philosophies that don’t uniquely focus on suffering. His careful analytical style and comprehensive review of existing arguments make this book valuable reading for anyone who cares about what matters, or who wishes to better understand the strong rational underpinning of suffering-focused ethics.”
— Jonathan Leighton, founder of the Organisation for the Prevention of Intense Suffering, author of The Battle for Compassion: Ethics in an Apathetic Universe

“Magnus Vinding breaks the taboo: Today, the problem of suffering is the elephant in the room, because it is at the same time the most relevant and the most neglected topic at the logical interface between applied ethics, cognitive science, and the current philosophy of mind and consciousness. Nobody wants to go there. It is not good for your academic career. Only few of us have the intellectual honesty, the mental stamina, the philosophical sincerity, and the ethical earnestness to gaze into the abyss. After all, it might also gaze back into us. Magnus Vinding has what it takes. If you are looking for an entry point into the ethical landscape, if you are ready to face the philosophical relevance of extreme suffering, then this book is for you. It gives you all the information and the conceptual tools you need to develop your own approach. But are you ready?”
— Thomas Metzinger, professor of philosophy at the Johannes Gutenberg University of Mainz, author of Being No One and The Ego Tunnel

On Insects and Lexicality

“Their experiences may be more simple than ours, but are they less intense? Perhaps a caterpillar’s primitive pain when squashed is greater than our more sophisticated sufferings.”

— Richard Ryder, Painism: A Modern Morality, p. 64.

Many people, myself included, find it plausible that suffering of a certain intensity, such as torture, carries greater moral significance than any amount of mild suffering. One may be tempted to think that views of this kind imply we should primarily prioritize the beings most likely to experience these “lexically worse” states of suffering (LWS) — presumably beings with large brains.* By extension, one may think such views will generally imply little priority to beings with small, less complex brains, such as insects. (This is probably also a view we would intuitively like to embrace, given the inconvenience of the alternative.)

Yet while perhaps intuitive, I do not think this conclusion follows. The main argument against it, in my view, is that we should maintain a non-trivial probability that beings with small brains, such as insects, indeed can experience LWS (regardless of how we define these states). After all, on what grounds can we confidently maintain they cannot?

And if we then assume an expected value framework, and multiply the large number of insects by a non-trivial probability of them being able to experience LWS, we find that, in terms of presently existing beings, the largest amount of LWS in expectation may well be found in small beings such as insects.
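The structure of this expected-value argument can be made explicit with a toy calculation. All figures below (population sizes and probabilities) are hypothetical placeholders of my own choosing, not estimates from the text; the point is only that a vast population multiplied by a small but non-trivial probability can dominate in expectation.

```python
# Toy illustration of the expected-value argument above.
# All numbers are hypothetical placeholders, not empirical estimates.

n_humans = 8e9            # rough order of magnitude of the human population
n_insects = 1e19          # a rough order-of-magnitude guess for insects

p_lws_human = 0.9         # assumed probability that a human can experience LWS
p_lws_insect = 1e-4       # an assumed "non-trivial" probability for insects

# Expected number of beings capable of experiencing LWS, per group
expected_humans = n_humans * p_lws_human      # 7.2e9
expected_insects = n_insects * p_lws_insect   # 1e15

# Even with a tiny per-individual probability, the sheer number of insects
# dominates in expectation.
print(expected_insects > expected_humans)  # prints True
```

On these placeholder numbers, the expected count of insects capable of LWS exceeds the human figure by several orders of magnitude, which is the sense in which "the largest amount of LWS in expectation may well be found in small beings such as insects".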


* It should be noted in this context, though, that many humans ostensibly cannot feel (at least physical) pain, whereas many beings with smaller brains show every sign of having this capacity. This suggests that brain size is a poor proxy for the ability to experience pain, let alone the ability to experience LWS, and that genetic variation in certain pain-modulating genes may well be a more important factor.


More literature

On insects:

The Importance of Insect Suffering
Reducing Suffering Amongst Invertebrates Such As Insects
Do Bugs Feel Pain?
How to Avoid Hurting Insects
The Moral Importance of Invertebrates Such as Insects

On Lexicality:

Value Lexicality
Clarifying lexical thresholds
Many-valued logic as a reply to sequence arguments in value theory
Lexicality between mild discomfort and unbearable suffering: A variety of possible views
Lexical priority to extreme suffering — in practice

Physics Is Also Qualia

In this post, I seek to clarify what I consider to be some common confusions about consciousness and “physics” stemming from a failure to distinguish clearly between ontological and epistemological senses of “physics”.

Clarifying Terms

Two senses of the word “physics” are worth distinguishing. There is physics in an ontological sense: roughly speaking, the spatio-temporal(-seeming) world that in many ways conforms well to our best physical theories. And then there is physics in an epistemological sense: a certain class of models we have of this world, the science of physics.

“Physics” in this latter, epistemological sense can be further divided into 1) the physical models we have in our minds, versus 2) the models we have external to our minds, such as in our physics textbooks and computer simulations. Yet it is worth noting that, to the extent we ourselves have any knowledge of the models in our books and simulations, we only have this knowledge by representing it in our minds. Thus, ultimately, all the knowledge of physical models we have, as subjects, is knowledge of the first kind: as appearances in our minds.*

In light of these very different senses of the term “physics”, it is clear that the claim that “physics is also qualia” can be understood in two very different ways: 1) in the sense that the physical world, in the ontological sense, is qualia, or “phenomenal”, and 2) that our models of physics are qualia, i.e. that our models of physics are certain patterns of consciousness. The first of these two claims is surely the most controversial one, and I shall not defend it here; I explore it here and here.

Instead, I shall here focus on the latter claim. My aim is not really to defend it, as I already did that briefly above: all the knowledge of physics we have, as subjects, ultimately appears as experiential patterns in our minds. (Although talk of the phenomenology of, say, operations in Hilbert spaces is admittedly rare.) I take this to be obvious, and hit an impasse with anyone who disagrees. My aim here is rather to clarify some confusions that arise due to a lack of clarity about this point, and due to conflations of the two senses of “physics” described above.

The Problem of Reduction: Epistemological or Ontological?

I find it worth quoting the following excerpt from a Big Think interview with Sam Harris. Not because there is anything atypical about what Harris says, but rather because I think he here clearly illustrates the prevailing lack of clarity about the distinction between epistemology and ontology in relation to “the physical”.

If there’s an experiential internal qualitative dimension to any physical system then that is consciousness. And we can’t reduce the experiential side to talk of information processing and neurotransmitters and states of the brain […]. Someone like Francis Crick said famously you’re nothing but a pack of neurons. And that misses the fact that half of the reality we’re talking about is the qualitative experiential side. So when you’re trying to study human consciousness, for instance, by looking at states of the brain, all you can do is correlate experiential changes with changes in brain states. But no matter how tight these correlations become that never gives you license to throw out the first person experiential side. That would be analogous to saying that if you just flipped a coin long enough you would realize it had only one side. And now it’s true you can be committed to talking about just one side. You can say that heads being up is just a case of tails being down. But that doesn’t actually reduce one side of reality to the other.

Especially worth dwelling on here is the statement “half of the reality we’re talking about is the qualitative experiential side.” Yet is this “half of reality” an “ontological half” or an “epistemological half”? That is, is there a half of reality out there that is part phenomenal, and part “non-phenomenal” — perhaps “inertly physical”? Or are we rather talking about two different phenomenal descriptions of the same thing, respectively 1) physico-mathematical models of the mind-brain (and these models, again, are also qualia, i.e. patterns of consciousness), and 2) all other phenomenal descriptions, i.e. those drawing on the countless other experiential modalities we can currently conceive of — emotions, sounds, colors, etc. — as well as those we can’t? I suggest we are really talking about two different descriptions of the same thing.

A similar question can be raised in relation to Harris’ claim that we cannot “reduce one side of reality to the other.” Is the reduction in question, or rather failure of reduction, an ontological or an epistemological one? If it is ontological, then it is unclear what this means. Is it that one side of reality cannot “be” the other? This does not appear to be Harris’ view, even if he does tacitly buy into ontologically distinct sides (as opposed to descriptions) of reality in the first place.

Yet if the failure of reduction is epistemological, then there is in fact little unusual about it, as failures of epistemological reduction, or reductions from one model to another, are found everywhere in science. In the abstract sciences, for example, one axiomatic system does not necessarily reduce to another; indeed, we can readily create different axiomatic systems that not only fail to reduce to each other but actively contradict each other. Hence we cannot derive all of mathematics, broadly construed, from a single axiomatic system.

Similarly, in the empirical sciences, economics does not “reduce to” quantum physics. One may object that economics does reduce to quantum physics in principle, yet it should then be noted that 1) the term “in principle” does an enormous amount of work here, arguably about as much as it would have to do in the claim that “quantum physics can explain consciousness in principle” — after all, physics and economics invoke very different models and experiential modalities (economic theories are often qualitative in nature, and some prominent economists have even argued they are primarily so). And 2) a serious case can be made against the claim that even all the basic laws found in chemistry, the closest neighbor of physics, can be derived from fundamental physical theories, even in principle (see e.g. Berofsky, 2012, chap. 8). This case does not rest on there being something mysterious going on between our transition from theories of physics to theories of chemistry, nor that new fundamental forces are implicated, but merely that our models in these respective fields contain elements not reducible, even in principle, to our models in other areas.

Thus, at the level of our minds, we can clearly construct many different mental models which we cannot reduce to each other, even in principle. Yet this merely says something about our models and epistemology. It hardly comprises a deep metaphysical mystery.

Denying the Reality of Consciousness

The fact that the world conforms, at least roughly, to description in “physical” terms seems to have led some people to deny that consciousness in general exists. Yet this, I submit, is a fallacy: the fact that we can model the world in one set of terms which describe certain of its properties does not imply that we cannot describe it in another set of terms that describe other properties truly there as well, even if we cannot derive one from the other.

By analogy, consider again physics and economics: we can take the exact same object of study — say, a human society — and describe aspects of it in physical terms (with models of thermodynamics, classical mechanics, electrodynamics, etc.), yet we cannot from any such description or set of descriptions meaningfully derive a description of the economics of this society. It would clearly be a fallacy to suggest that this implies facts of economics cannot exist.

Again, I think the confusion derives from conflating epistemology with ontology: “physics”, in the epistemological sense of “descriptions of the world in physico-mathematical terms”, appears to encompass “everything out there”, and hence, the reasoning goes, nothing else can exist out there. Of course, in one sense, this is true: if a description in physico-mathematical terms exhaustively describes everything out there, then there is indeed nothing more to be said about it — in physico-mathematical terms. Yet this says nothing about the properties of what is out there in other terms, as illustrated by the economics example above. (Another reason some people seem to deny the reality of consciousness, distinct from conflation of the epistemological and the ontological, is “denial due to fuzziness”, which I have addressed here.)

This relates, I think, to the fundamental Kantian insight on epistemology: we never experience the world “out there” directly, only our own models of it. And the fact that our physical model of the world — including, say, a physical model of the mind-brain of one’s best friend — does not entail other phenomenal modalities, such as emotions, by no means implies that the real, ontological object out there which our physical model reflects, such as our friend’s actual mind-brain, does not instantiate these things. That would be to confuse the map with the territory. (Our emotional model of our best friend does, of course, entail emotions, and it would be just as much of a fallacy to say that, since such emotional models say nothing about brains in physical terms, descriptions of the latter kind have no validity.)

Denials of this sort can have serious ethical consequences, not least since the most relevant aspects of consciousness, including suffering, fall outside descriptions of the world in purely physical terms. Thus, if we insist that only such physico-mathematical descriptions truly describe the world, we seem forced to conclude that suffering, along with everything else that plausibly has moral significance, does not truly exist. Which, in turn, can keep us from working toward a sophisticated understanding of these things, and from creating a better world accordingly.

 


* And for this reason, the answer to the question “how do you know you are conscious?” will ultimately be the same as the answer to the question “how do you know physics (i.e. physical models) exist?” — we experience these facts directly.

Narrative Self-Deception: The Ultimate Elephant in the Brain?

“the elephant in the brain, n. An important but unacknowledged feature of how our minds work; an introspective taboo.”

The Elephant in the Brain is an informative and well-written book, co-authored by Kevin Simler and Robin Hanson. It explains why much of our behavior is driven by unflattering, hidden motives, as well as why our minds are built to be unaware of these motives. In short: because a mind that is ignorant about what drives it and how it works is often more capable of achieving the aims it was built to achieve.

Beyond that, the book also seeks to apply this knowledge to shed light on many of our social institutions, showing that they are often not mostly about what we think they are. Rather than being about high-minded ideals and the other pretty things we like to say they are about, our institutions often serve much less pretty, more status-driven purposes, such as showing off in various ways, as well as helping us get by in a tough world (for instance, the authors argue that religion in large part serves to bind communities together, and in this way can help bring about better life outcomes for believers).

All in all, I think The Elephant in the Brain provides a strong case for supplementing one’s mental toolkit with a new, important tool, namely to continuously ask: how might my mind skillfully be avoiding confrontation with ugly truths about myself that I would prefer not to face? And how might such unflattering truths explain aspects of our public institutions and public life in general?

This is an important lesson, I think, and it makes the book more than worth reading. At the same time, I cannot help but feel that the book ultimately falls short when it comes to putting this tool to proper use. For the main critique that came to my mind while reading the book was that it seemed to ignore the biggest elephant in the brain by far — the elephant I suspect we would all prefer to ignore the most — and hence it failed, in my view, to take a truly deep and courageous look at the human condition. In fact, the book even seemed to be a mouthpiece for this great elephant.

The great elephant I have in mind here is a tacitly embraced sentiment that goes something like: life is great, and we are accomplishing something worthwhile. As the authors write: “[…] life, for most of us, is pretty good.” (p. 11). And they end the book on a similar note:

In the end, our motives were less important than what we managed to achieve by them. We may be competitive social animals, self-interested and self-deceived, but we cooperated our way to the god-damned moon.

This seems to implicitly assume that what humans have managed to achieve, such as cooperating (i.e. two superpowers with nuclear weapons pointed at each other competing) their way to the moon, has been worthwhile all things considered. Might this, however, be a flippant elephant talking — rather than, say, a conclusion derived via a serious, scholarly analysis of our condition?

As a meta-observation, I would note that the fact that people often get offended and become defensive when one even just questions the value of our condition — and sometimes also accuse the one raising the question of having a mental illness — suggests that we may indeed be disturbing a great elephant here: something we would strongly prefer not to think too deeply about. (For the record, with respect to mental health, I think one can be among the happiest, most mentally healthy people on the planet and still think that a sober examination of the value of our condition yields a negative answer, although it may require some disciplined resistance against the pulls of a strong elephant.)

It is important to note here that one should not confuse the cynicism required for honest exploration of the human condition with misanthropy, as Simler and Hanson themselves are careful to point out:

The line between cynicism and misanthropy—between thinking ill of human motives and thinking ill of humans—is often blurry. So we want readers to understand that although we may often be skeptical of human motives, we love human beings. (Indeed, many of our best friends are human!) […] All in all, we doubt an honest exploration will detract much from our affection for [humans]. (p. 13)

Similarly, an honest and hard-nosed effort to assess the value of human life and the human endeavor need not lead us to have any less affection and compassion for humans. Indeed, it might lead us to have much more of both in many ways.

Is Life “Pretty Good”?

With respect to Simler and Hanson’s claim that “[…] life, for most of us, is pretty good”, it can be disputed that this is indeed the case. According to the 2017 World Happiness Report, a significant plurality of people rated their life satisfaction at five on a scale from zero to ten, which arguably does not translate to being “pretty good”. Moreover, one can argue that the scale employed in this report is biased, in that it does not allow for a negative evaluation of life. And one may further argue that if this scale instead ranged from minus five to plus five (i.e. if one transposed this zero-to-ten scale so as to make it symmetrical around zero), a plurality might rate their lives at zero. That is, after all, where the plurality would lie if one were to make this transposition on the existing data measured along the zero-to-ten scale (although it seems likely that people would have rated their life satisfaction differently if the scale had been constructed in this symmetrical way).
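The transposition described above is just a uniform shift: subtracting five maps the zero-to-ten scale onto a symmetric minus-five-to-plus-five scale centered on zero. A minimal sketch, using made-up response counts rather than the report’s actual data:

```python
# Transposing a 0-10 life-satisfaction scale onto a symmetric -5..+5 scale.
# The response counts below are invented for illustration only; the one
# deliberate feature is a plurality at the midpoint score of 5.
responses = {0: 2, 1: 3, 2: 5, 3: 8, 4: 10, 5: 14, 6: 12, 7: 11, 8: 9, 9: 4, 10: 2}

# Shift every score down by five; counts are unchanged.
transposed = {score - 5: count for score, count in responses.items()}

# The plurality at 5 on the original scale becomes a plurality at 0,
# i.e. an exactly neutral rating, on the symmetric scale.
mode = max(transposed, key=transposed.get)
print(mode)  # 0
```

As the essay notes, this only relabels the existing answers; whether respondents would actually answer a symmetric scale this way is a separate empirical question.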

But even if we were to concede that most people say that their lives are pretty good, one can still reasonably question whether most people’s lives indeed are pretty good, and not least reasonably question whether such reports imply that the human condition is worthwhile in a broader sense.

Narrative Self-Deception: Is Life As Good As We Think?

Just as it is possible for us to be wrong about our own motives, as Simler and Hanson convincingly argue, could it be that we can also be wrong about how good our lives are? And, furthermore, could it be that we not only can be wrong but that most of us in fact are wrong about it most of the time? This is indeed what some philosophers argue, seemingly supported by psychological evidence.

One philosopher who has argued along these lines is Thomas Metzinger. In his essay “Suffering”, Metzinger reports on a pilot study he conducted in which students were asked at random times via their cell phones whether they would relive the experience they had just before their phone vibrated. The results were that, on average, students reported that their experience was not worth reliving 72 percent of the time. Metzinger uses this data, which he admits is not statistically significant, as a starting point for a discussion of how our coarser narrative about the quality of our lives might be out of touch with the reality of our felt, moment-to-moment experience:

If, on the finest introspective level of phenomenological granularity that is functionally available to it, a self-conscious system would discover too many negatively valenced moments, then this discovery might paralyse it and prevent it from procreating. If the human organism would not repeat most individual conscious moments if it had any choice, then the logic of psychological evolution mandates concealment of the fact from the self-modelling system caught on the hedonic treadmill. It would be an advantage if insights into the deep structure of its own mind – insights of the type just sketched – were not reflected in its conscious self-model too strongly, and if it suffered from a robust version of optimism bias. Perhaps it is exactly the main function of the human self-model’s higher levels to drive the organism continuously forward, to generate a functionally adequate form of self-deception glossing over everyday life’s ugly details by developing a grandiose and unrealistically optimistic inner story – a “narrative self-model” with which we can identify? (pp. 6-7)

Metzinger continues to conjecture that we might be subject to what he calls “narrative self-deception” — a self-distracting strategy that keeps us from getting a realistic view of the quality and prospects of our lives:

[…] a strategy of flexible, dynamic self­-representation across a hierarchy of timescales could have a causal effect in continuously remotivating the self-­conscious organism, systematically distracting it from the potential insight that the life of an anti-­entropic system is one big uphill battle, a strenuous affair with minimal prospect of enduring success. Let us call this speculative hypothesis “narrative self­-deception”. (p. 7)

If this holds true, such self-deception would seem to more than satisfy the definition of an elephant in the brain in Simler and Hanson’s sense: “an important but unacknowledged feature of how our minds work; an introspective taboo.”

To paraphrase Metzinger: the mere fact that we find life to be “pretty good” when we evaluate it all from the vantage point of a single moment does not mean that we in fact find most of our experiences “pretty good”, or indeed even worth (re)living most of the time, moment-to-moment. Our single-moment evaluations of the quality of the whole thing may well tend to be gross, self-deceived overestimates.

Another philosopher who makes a similar case is David Benatar, who in his book Better Never to Have Been argues that we tend to overestimate the quality of our lives due to well-documented psychological biases:

The first, most general and most influential of these psychological phenomena is what some have called the Pollyanna Principle, a tendency towards optimism. This manifests in many ways. First, there is an inclination to recall positive rather than negative experiences. For example, when asked to recall events from throughout their lives, subjects in a number of studies listed a much greater number of positive than negative experiences. This selective recall distorts our judgement of how well our lives have gone so far. It is not only assessments of our past that are biased, but also our projections or expectations about the future. We tend to have an exaggerated view of how good things will be. The Pollyannaism typical of recall and projection is also characteristic of subjective judgements about current and overall well-being. Many studies have consistently shown that self-assessments of well-being are markedly skewed toward the positive end of the spectrum. […] Indeed, most people believe that they are better off than most others or than the average person. (pp. 64-66)

Is “Pretty Good” Good Enough?

Beyond doubting whether most people would indeed say that their lives are “pretty good”, and beyond doubting that a single moment’s assessment of one’s quality of life actually reflects this quality particularly well, one can also question whether a life that is rated as “pretty good”, even in the vast majority of moments, is indeed good enough.

This is, for example, not necessarily the case on the so-called tranquilist view of value, according to which our experiences are valuable to the extent they are absent of suffering, and hence that happiness and pleasure are valuable to the extent they chase suffering away.

Similar to Metzinger’s point about narrative self-deception, one can argue that, if the tranquilist view holds true of how we feel the value of our experience moment-to-moment (upon closer, introspective inspection), we should probably expect to be quite blind to this fact. And it is interesting to note in this context that many of the traditions that have placed the greatest emphasis on paying attention to the nature of subjective experience moment-to-moment, such as Buddhism, have converged toward a view very similar to tranquilism.

Can the Good Lives Outweigh the Bad?

One can also question the value of our condition on a more collective level, by focusing not only on a single (self-reportedly) “pretty good” life but on all individual lives. In particular, we can question whether the good lives of some, indeed even a large majority, can justify the miserable lives of others.

A story that gives many people pause on this question is Ursula K. Le Guin’s The Ones Who Walk Away from Omelas. The story is about a near-paradisiacal city in which everyone lives deeply meaningful and fulfilling lives — that is, everyone except a single child who is locked in a basement room, forced to live a life of squalor:

The child used to scream for help at night, and cry a good deal, but now it only makes a kind of whining, “eh-haa, eh-haa,” and it speaks less and less often. It is so thin there are no calves to its legs; its belly protrudes; it lives on a half-bowl of corn meal and grease a day. It is naked. Its buttocks and thighs are a mass of festered sores, as it sits in its own excrement continually.

The story’s premise is that this child must exist in this condition for the happy people of Omelas to enjoy their wonderful lives, which then raises the question of whether these wonderful lives can in any sense outweigh and justify the miserable life of this single child. Some citizens of Omelas seem to decide that this is not the case: the ones who walk away from Omelas. And many people in the real world seem to agree with this decision.

Sadly, our world is much worse than the city of Omelas on every measure. For example, in the World Happiness Report cited above, around 200 million people reported their quality of life to be in the absolute worst category. If the story of Omelas gives us pause, we should also think twice before claiming that the “pretty good” lives of some people can outweigh the self-reportedly very bad lives of these hundreds of millions of people, many of whom end up committing suicide (and again, it should be remembered that a great plurality of humanity rated their life satisfaction to be exactly in the middle of the scale, while a significant majority rated it in the middle or lower).

Rating of general life satisfaction aside, one can also reasonably question whether anything can outweigh the many instances of extreme suffering that occur every single day, something that can indeed befall anyone, regardless of one’s past self-reported life satisfaction.

Beyond that, one can also question whether the “pretty good” lives of some humans can in any sense outweigh and justify the enormous amount of suffering humanity imposes on non-human animals, including the torturous suffering we subject more than a trillion fish to each year, as well as the suffering we impose upon the tens of billions of chickens and turkeys who live out their lives under the horrific conditions of factory farming, many of whom end their lives by being boiled alive. Indeed, there is no justification for not taking humanity’s impact on non-human animals — the vast majority of sentient beings on the planet — into consideration as well when assessing the value of our condition.

 

My main purpose in this essay has not been to draw any conclusions about the value of our condition. Rather, my aim has merely been to argue that we likely have an enormous elephant in our brain that causes us to evaluate our lives, individually as well as collectively, in overoptimistic terms (though some of us perhaps do not), and to ignore the many considerations that might suggest a negative conclusion. An elephant that leads us to eagerly assume that “it’s all pretty good and worthwhile”, and to flinch away from serious, sober-minded engagement with questions concerning the value of our condition, including whether it would be better if there had been no sentient beings at all.

Why I Used to Consider the Absence of Sentience Tragic

Whether one considers the absence of sentience bad or neutral — or indeed as good as can be — can matter a lot for one’s ethical and altruistic priorities. Specifically, it can have significant implications for whether one should push for smaller or larger future populations.

I used to be a classical utilitarian. Which is to say, I used to agree with the statement “we ought to maximize the net amount of happiness minus suffering in the world”. And given this view, I found it a direct, yet counter-intuitive implication that the absence of sentience is tragic, and something we ought to minimize by bringing about a maximally large, maximally happy population. My aim in this essay is to briefly present what I consider the main reason why I used to believe this, and also to explain why I no longer hold this view. I am not claiming the reasons I had for believing this are shared by other classical utilitarians, yet I suspect they could be, at least by some.

The Reason: Striving for Consistency

My view that the absence of sentience is tragic and something we ought to prevent mostly derived, I believe, from a wish to be consistent. Given the ostensibly reasonable assumption that death is bad, it would seem to follow, I reasoned, that since death merely amounts to a discontinuation of life — or, seen in a larger perspective, a reduction of the net amount of sentience — the reduction of sentience caused by not giving birth to a new (happy) life should be considered just as bad as the end of a (happy) life. This was counter-intuitive, of course, yet I did not, and still do not, consider immediate intuitions to be the highest arbiters of moral wisdom, and so it did not seem that weird to accept this conclusion. The alternative, if I were to be consistent, would be to bring my view of death in line with my intuition that the absence of sentience is not bad. Yet this was too implausible, since death surely is bad.

This, I believe, was the reasoning behind my considering it a moral obligation to produce a large, happy population. Failing to do so would, in a sense, be the moral equivalent of committing genocide. My view is quite different now, however.

My Current View of My Past View

I now view this past reasoning of mine as akin to a deceptive trick, like a math riddle where one has to find where the error was made in a series of seemingly valid deductions. You accept that death is tragic. Death means less sentient life than continued life, other things being equal. But a failure to bring a new individual into the world also means less sentient life, other things being equal. So why would you not consider a failure to bring an individual into the world tragic as well?

My current response to this line of reasoning is that death indeed is bad, yet that it is not intrinsically so. What is bad about death, I would argue, is the suffering it causes; not the discontinuation of sentience per se (after all, a discontinuation of sentience occurs every night we go to sleep, which we rarely consider bad, much less tragic). This view is perfectly consistent with the view that it is not tragic to fail to create a new individual.

As I have argued elsewhere, it is somewhat to be expected that we humans consider the death of a close relative or group member to be tragic and highly worth avoiding, given that such a death would tend, evolutionarily speaking, to have been costly to our own biological success in the past. In other words, our view that death is tragic may in large part stem from a penalizing mechanism instilled in us by evolution to prevent us from losing fellow assets who served our hidden biological imperative — assets who had invested a lot into us and whom we had invested a lot into in return. And I believe that my considering the absence of sentience tragic was, crudely speaking, a matter of extending this penalizing mechanism so that it pertained to all insentient parts of the universe. An extension I now consider misguided. I now see nothing tragic whatsoever about the fact that there is no sentient life on Mars.

Other Reasons

There may, of course, be other reasons why a classical utilitarian, including my past self, would consider the absence of sentience tragic. For instance, it seems reasonable to suspect that we, or at least many of us, have an inbuilt drive to maximize the number of our own descendants, or to maximize the future success of our own tribe (the latter goal would probably have aligned pretty well with the former throughout our evolutionary history). It is not clear what counts as “our own tribe” in modern times, yet it seems that many people, including many classical utilitarians, now view humanity as their notional tribe.

A way to control for such a hidden drive, then, would be to ask whether we would accept it if the universe were filled up with happy beings who do not belong to our own tribe. For example, would we accept it if our future light cone were filled up by happy aliens who, in their quest to maximize net happiness, replaced human civilization with happier beings? (I.e. a utilitronium shockwave of sorts.) An impartial classical utilitarian would happily accept this. The question is whether a human classical utilitarian would, too.
