Essays on UFOs and Related Conjectures: Reported Evidence, Theoretical Considerations, and Potential Importance

Essays on UFOs and Related Conjectures invites readers to reflect on their beliefs and intuitions concerning extraterrestrial intelligence. The essays in this collection explore the extraterrestrial UFO hypothesis, optimized futures, and possible motives for a hypothetical extraterrestrial presence around Earth. Some of the essays also delve into the potential moral implications of such a presence. Overall, this collection makes a case for taking the extraterrestrial hypothesis seriously and for further exploring the evidence, theoretical considerations, and moral implications that may relate to this hypothesis.

The essays found in this collection are the following:

The book is available as a free PDF (1st edition; 2nd edition with a new chapter). It is also available for free on Amazon, Smashwords, Apple Books, Barnes & Noble, and elsewhere.


Thoughts on AI pause

Whether to push for an AI pause is a hotly debated question. This post contains some of my thoughts on the issue of AI pause and the discourse that surrounds it.


Contents

  1. The motivation for an AI pause
  2. My thoughts on AI pause, in brief
  3. My thoughts on AI pause discourse
  4. Massive moral urgency: Yes, in both categories of worst-case risks

The motivation for an AI pause

Generally speaking, it seems that the primary motivation behind pushing for an AI pause is that work on AI safety is far from where it needs to be for humanity to maintain control of future AI progress. Therefore, a pause is needed so that work on AI safety — and other related work, such as AI governance — can catch up with the pace of progress in AI capabilities.

My thoughts on AI pause, in brief

Whether it is worth pushing for an AI pause obviously depends on various factors. For one, it depends on the opportunity cost: what could we be doing otherwise? After all, even if one thinks that an AI pause is desirable, one might still have reservations about its tractability compared to other aims. And even if one thinks that an AI pause is both desirable and tractable, there might still be other aims and activities that are even more beneficial (in expectation), such as working on worst-case AI safety (Gloor, 2016; Yudkowsky, 2017; Baumann, 2018), or increasing the priority that people devote to reducing risks of astronomical suffering (s-risks) (Althaus & Gloor, 2016; Baumann, 2017, 2022; DiGiovanni, 2021).

Furthermore, there is the question of whether an AI pause would even be beneficial in the first place. This is a complicated question, and I will not explore it in detail here. (For a critical take, see “AI Pause Will Likely Backfire” by Nora Belrose.) Suffice it to say that, in my view, it seems highly uncertain whether any realistic AI pause would be beneficial overall — not just from a suffering-focused perspective, but from the perspective of virtually all impartial value systems. It seems to me that most advocates for AI pause are quite overconfident on this issue.

But to clarify, I am by no means opposed to advocating for an AI pause. It strikes me as something that one can reasonably conclude is helpful and worth doing (depending on one’s values and empirical judgement calls). But my current assessment is just that it is unlikely to be among the best ways to reduce future suffering, mainly because I view the alternative activities outlined above as being more promising, and because I suspect that most realistic AI pauses are unlikely to be clearly beneficial overall.

My thoughts on AI pause discourse

A related critical observation about much of the discourse around AI pause is that it tends toward a simplistic “doom vs. non-doom” dichotomy. That is, the picture that is conveyed seems to be that either humanity loses control of AI and goes extinct, which is bad; or humanity maintains control, which is good. And your probability of the former is your “p(doom)”.

Of course, one may argue that for strategic and communication purposes, it makes sense to simplify things and speak in such dichotomous terms. Yet the problem, in my view, is that this kind of picture is not accurate even to a first approximation. From an altruistic perspective, it is not remotely the case that “loss of control to AI” = “bad”, while “humans maintaining control” = “good”.

For example, if we are concerned with the reduction of s-risks (which is important by the lights of virtually all impartial value systems), we must compare the relative risks of “loss of control to AI” with the risks of “humans maintaining control” — however we define these rough categories. And sadly, it is not the case that “humans maintaining control” is associated with a negligible or trivial risk of worst-case outcomes. Indeed, it is not clear whether “humans maintaining control” is generally associated with better or worse prospects than “loss of control to AI” when it comes to s-risks.

In general, the question of whether a “human-controlled future” is better or worse with respect to reducing future suffering is a difficult one that has been discussed and debated at some length, and no clear consensus has emerged. As a case in point, Brian Tomasik places a 52 percent subjective probability on the claim that “Human-controlled AGI in expectation would result in less suffering than uncontrolled”.

This near-50/50 view stands in stark contrast to what often seems to be assumed as a core premise in much of the discourse surrounding AI pause, namely that a human-controlled future would obviously be far better (in expectation).

(Some reasons why one might be pessimistic regarding human-controlled futures can be found in the literature on human moral failings; see e.g. Cooper, 2018; Huemer, 2019; Kidd, 2020; Svoboda, 2022. Other reasons include basic competitive aims and dynamics that are likely to be found in a wide range of futures, including human-controlled ones; see e.g. Tomasik, 2013; Knutsson, 2022, sec. 3. See also Vinding, 2022.)

Massive moral urgency: Yes, in both categories of worst-case risks

There is a key point on which I agree strongly with advocates for an AI pause: there is a massive moral urgency in ensuring that we do not end up with horrific AI-controlled outcomes. Too few people appreciate this insight, and even fewer seem to be deeply moved by it.

At the same time, I think there is a similarly massive urgency in ensuring that we do not end up with horrific human-controlled outcomes. And humanity’s current trajectory is unfortunately not all that reassuring with respect to either of these broad classes of risks. (To be clear, this is not to say that an s-risk outcome is the most likely outcome in either of these two classes of future scenarios, but merely that the current trajectory looks highly suboptimal and concerning with respect to both of them.)

The upshot for me is that there is a roughly equal moral urgency in avoiding each of these categories of worst-case risks, and as hinted earlier, it seems doubtful to me that pushing for an AI pause is the best way to reduce these risks overall.

What might we infer about optimized futures?

It is plausible to assume that technology will keep on advancing along various dimensions until it hits fundamental physical limits. We may refer to futures that involve such maxed-out technological development as “optimized futures”.

My aim in this post is to explore what we might be able to infer about optimized futures. Most of all, my aim is to advance this as an important question that is worth exploring further.


Contents

  1. Optimized futures: End-state technologies in key domains
  2. Why optimized futures are plausible
  3. Why optimized futures are worth exploring
  4. What can we say about optimized futures?
    1. Humanity may be close to (at least some) end-state technologies
    2. Optimized civilizations may be highly interested in near-optimized civilizations
    3. Strong technological convergence across civilizations?
    4. If technology stabilizes at an optimum, what might change?
    5. Information that says something about other optimized civilizations as an extremely coveted resource?
  5. Practical implications?
    1. Prioritizing values and institutions rather than pushing for technological progress?
    2. More research
  6. Conclusion
  7. Acknowledgments

Optimized futures: End-state technologies in key domains

The defining feature of optimized futures is that they entail end-state technologies that cannot be further improved in various key domains. Some examples of these domains include computing power, data storage, speed of travel, maneuverability, materials technology, precision manufacturing, and so on.

Of course, there may be significant tradeoffs between optimization across these respective domains. Likewise, there could be forms of “ultimate optimization” that are only feasible at an impractical cost — say, at extreme energy levels. Yet these complications are not crucial in this context. What I mean by “optimized futures” are futures that involve practically optimal technologies within key domains (such as those listed above).

Why optimized futures are plausible

There are both theoretical and empirical reasons to think that optimized futures are plausible (by which I here mean that they are at least somewhat probable — perhaps more than 10 percent likely).

Theoretically, if the future contains advanced goal-driven agents, we should generally expect those agents to want to achieve their goals in the most efficient ways possible. This in turn predicts continual progress toward ever more efficient technologies, at least as long as such progress is cost-effective.

Empirically, we have an extensive record of goal-oriented agents trying to improve their technology so as to better achieve their aims. Humanity has gone from having virtually no technology to creating a modern society surrounded by advanced technologies of various kinds. And even in our modern age of advanced technology, we still observe persistent incentives and trends toward further improvements in many domains of technology — toward better computers, robots, energy technology, and so on.

It is worth noting that the technological progress we have observed throughout human history has generally not been the product of some overarching collective plan that was deliberately aimed at technological progress. Instead, technological progress has in some sense been more robust than that, since even in the absence of any overarching plan, progress has happened as the result of ordinary demands and desires — for faster computers, faster and safer transportation, cheaper energy, etc.

This robustness is a further reason to think that optimized futures are plausible: even without any overarching plan aimed toward such a future, and even without any individual human necessarily wanting continued technological development leading to an optimized future, we might still be pulled in that direction all the same. And, of course, this point about plausibility applies to more than just humans: it applies to any set of agents who will be — or have been — structuring themselves in a sufficiently similar way so as to allow their everyday demands to push them toward continued technological development.

An objection against the plausibility of optimized futures is that there might be a lot of hidden potential for progress far beyond what our current understanding of physics seems to allow. However, such hidden potential would presumably be discovered eventually, and it seems probable that such hidden potential would likewise be exhausted at some point, even if it may happen later and at more extreme limits than we currently envision. That is, the broad claim that there will ultimately be some fundamental limits to technological development is not predicated on the more narrow claim that our current understanding of those limits is necessarily correct; the broader claim is robust to quite substantial extensions of currently envisioned limits. Indeed, the claim that there will be no fundamental limits to future technological development overall seems a stronger and less empirically grounded claim than does the claim that there will be such limits (cf. Lloyd, 2000; Krauss & Starkman, 2004).

Why optimized futures are worth exploring

The plausibility of optimized futures is one reason to explore them further, and arguably a sufficient reason in itself. Another reason is the scope of such futures: the futures that contain the largest numbers of sentient beings will most likely be optimized futures, suggesting that we have good reason to pay disproportionate attention to such futures, beyond what their degree of plausibility alone might warrant.

Optimized futures are also worth exploring given that they seem to be a likely point of convergence for many different kinds of technological civilizations. For example, an optimized future seems a plausible outcome of both human-controlled and AI-controlled Earth-originating civilizations, and it likewise seems a plausible outcome of advanced alien civilizations. Thus, a better understanding of optimized futures can potentially apply robustly to many different kinds of future scenarios.

An additional reason it is worth exploring optimized futures is that they overall seem quite neglected, especially given how plausible and consequential such futures appear to be. While some efforts have been made to clarify the physical limits of technology (see e.g. Sandberg, 1999; Lloyd, 2000; Krauss & Starkman, 2004), almost no work has been done on the likely trajectories and motives of civilizations with optimized technology, at least to my knowledge.

Lastly, the assumption of optimized technology is a rather strong constraint that might enable us to say quite a lot about futures that conform to that assumption, suggesting that this could be a fruitful perspective to adopt in our attempts to think about and predict the future.

What can we say about optimized futures?

The question of what we can say about optimized futures is a big one that deserves elaborate analysis. In this section, I will merely raise some preliminary points and speculative reflections.

Humanity may be close to (at least some) end-state technologies

One point that is worth highlighting is that a continuation of current rates of progress seems to imply that humanity could develop end-state technologies in information processing power within a few hundred years, perhaps 250 years at most (if current growth rates persist and assuming that our current understanding of the relevant physics is largely correct).

So at least in this important respect, and under the assumption of continued steady growth, humanity is surprisingly close to reaching an optimized future (cf. Lloyd, 2000).
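
To make the “perhaps 250 years” figure more concrete, here is a rough back-of-the-envelope sketch (not a calculation from Lloyd or from this post): it simply asks how many doublings separate an assumed present-day level of computing power from an assumed physical limit, and multiplies by an assumed doubling time. All of the specific numbers below (the roughly 5e50 operations per second limit that Lloyd estimates for a 1 kg computer, the roughly 1e15 operations per second starting point, and the 1.5 to 2 year doubling time) are illustrative assumptions.

```python
import math

# Illustrative assumptions only (not figures taken from the post):
# - Lloyd (2000) estimates roughly 5e50 bit operations per second as the
#   physical limit for a 1 kg "ultimate laptop".
# - A present-day machine is taken to be on the order of 1e15 operations per second.
# - Computing power is assumed to keep doubling every 1.5 to 2 years.
LIMIT_OPS_PER_SEC = 5e50
CURRENT_OPS_PER_SEC = 1e15

def years_to_limit(doubling_time_years: float) -> float:
    """Years of sustained exponential growth needed to reach the assumed limit."""
    doublings_needed = math.log2(LIMIT_OPS_PER_SEC / CURRENT_OPS_PER_SEC)
    return doublings_needed * doubling_time_years

for doubling_time in (1.5, 2.0):
    print(f"Doubling every {doubling_time} years: ~{years_to_limit(doubling_time):.0f} years")
# With these inputs, the answer lands at roughly 180-240 years, which is why
# continued steady growth would place end-state computing within a few centuries.
```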

Optimized civilizations may be highly interested in near-optimized civilizations

Such potential closeness to an optimized future could have significant implications in various ways. For example, if, hypothetically, there exists an older civilization that has already reached a state of optimized technology, any younger civilization that begins to approach optimized technologies within the same cosmic region would likely be of great interest to that older civilization.

One reason it might be of interest is that the optimized technologies of the younger civilization could potentially become competitive with the optimized technologies of the older civilization, and hence the older civilization may see a looming threat in the younger civilization’s advance toward such technologies. After all, since optimized technologies would represent a kind of upper bound of technological development, it is plausible that different instances of such technologies could be competitive with each other regardless of their origins.

Another reason the younger civilization might be of interest is that its trajectory could provide valuable information regarding the likely trajectories and goals of distant optimized civilizations that the older civilization may encounter in the future. (More on this point here.)

Taken together, these considerations suggest that if a given civilization is approaching optimized technology, and if there is an older civilization with optimized technology in its vicinity, this older civilization should take an increasing interest in this younger civilization so as to learn about it before the older civilization might have to permanently halt the development of the younger one.

Strong technological convergence across civilizations?

Another implication of optimized futures is that the technology of advanced civilizations across the universe might be remarkably convergent. Indeed, there are already many examples of convergent evolution in biology on Earth (e.g. eyes and large brains evolving several times independently). Likewise, many cases of convergence are found in cultural evolution, both in early history (e.g. the independent emergence of farming, cities, and writing across the globe) and in recent history (e.g. independent discoveries in science and mathematics).

Yet the degree of convergence could well be even more pronounced in the case of the end-state technologies of advanced civilizations. After all, this is a case where highly advanced agents are bumping up against the same fundamental constraints, and the optimal engineering solutions in the face of these constraints will likely converge toward the same relatively narrow space of optimal designs — or at least toward the same narrow frontier of optimal designs given potential tradeoffs between different abilities.

In other words, the technologies of advanced civilizations might be far more similar and more firmly dictated by fundamental physical limits than we intuitively expect, especially given that we in our current world are used to seeing continually changing and improving technologies.

If technology stabilizes at an optimum, what might change?

The plausible convergence and stabilization of technological hardware also raises the interesting question of what, if anything, might change and vary in optimized futures.

This question can be understood in at least two distinct ways: what might change or vary across different optimized civilizations, and what might change over time within such civilizations? And note that prevalent change of the one kind need not imply prevalent change of the other kind. For example, it is conceivable that there might be great variation across civilizations, yet virtually no change in goals and values over time within civilizations (cf. “lock-in scenarios”).

Conversely, it is conceivable that goals and values change greatly over time within all optimized civilizations, yet such change could in principle still be convergent across civilizations, such that optimized civilizations tend to undergo roughly the same pattern of changes over time (though such convergence admittedly seems unlikely conditional on there being great changes over time in all optimized civilizations).

If we assume that technological hardware becomes roughly fixed, what might still change and vary — both over time and across different civilizations — includes the following (I am not claiming that this is an exhaustive list):

  • Space expansion: Civilizations might expand into space so as to acquire more resources; and civilizations may differ greatly in terms of how much space they manage to acquire.
  • More or different information: Knowledge may improve or differ over time and space; even if fundamental physics gets solved fairly quickly, there could still be knowledge to gain about, for example, how other civilizations tend to develop.
    • There would presumably also be optimization for information that is useful and actionable. After all, even a technologically optimized probe would still have limited memory, and hence there would be a need to fill this memory with the most relevant information given its tasks and storage capacity.
  • Different algorithms: The way in which information is structured, distributed, and processed might evolve and vary over time and across civilizations (though it is also conceivable that algorithms will ultimately converge toward a relatively narrow space of optima).
  • Different goals and values: As mentioned above, goals and values might change and vary, such as due to internal or external competition, or (perhaps less likely) through processes of reflection.

In other words, even if everyone has — or is — practically the same “iPhone End-State”, what is running on these iPhone End-States, and how many of them there are, may still vary greatly, both across civilizations and over time. And these distinct dimensions of variation could well become the main focus of optimized civilizations, plausibly becoming the main dimensions on which civilizations seek to develop and compete.

Note also that there may be conflicts between improvements along these respective dimensions. For example, perhaps the most aggressive forms of space expansion could undermine the goal of gaining useful information about how other civilizations tend to develop, and hence advanced civilizations might avoid or delay aggressive expansion if the information in question would be sufficiently valuable (cf. the “info gain motive”). Or perhaps aggressive expansion would pose serious risks at the level of a civilization’s internal coordination and control, thereby risking a drift in goals and values.

In general, it seems worth trying to understand what might be the most coveted resources and the most prioritized domains of development for civilizations with optimized technology. 

Information that says something about other optimized civilizations as an extremely coveted resource?

As hinted above, one of the key objectives of a civilization with optimized technology might be to learn, directly or indirectly, about other civilizations that it could encounter in the future. After all, if a civilization manages to both gain control of optimized technology and avoid destructive internal conflicts, the greatest threat to its apex status over time will likely be other civilizations with optimized technology. More generally, the main determinant of an optimized civilization’s success in achieving its goals — whether it can maintain an unrivaled apex status or not — could well be its ability to predict and interact gainfully with other optimized civilizations.

Thus, the most precious resource for any civilization with optimized technology might be information that can prepare this civilization for better exchanges with other optimized agents, whether those exchanges end up being cooperative, competitive, or outright aggressive. In particular, since the technology of optimized civilizations is likely to be highly convergent, the most interesting features to understand about other civilizations might be what kinds of institutions, values, decision procedures, and so on they end up adopting — the kinds of features that seem more contingent.

But again, I should stress that I mention these possibilities as speculative conjectures that seem worth exploring, not as confident predictions.

Practical implications?

In this section, I will briefly speculate on the implications of the prospect of optimized futures. Specifically, what might this prospect imply in terms of how we can best influence the future?

Prioritizing values and institutions rather than pushing for technological progress?

One implication is that there may be limited long-term payoffs in pushing for better technology per se, and that it might make more sense to prioritize the improvement of other factors, such as values and institutions. That is, if the future is in any case likely to be headed toward some technological optimum, and if the values and institutions (etc.) that will run this optimal technology are more contingent and “up for grabs”, then it arguably makes sense to prioritize those more contingent aspects.

To be clear, this is not to say that values and institutions will not also be subject to significant optimization pressures that push them in certain directions, but these pressures will plausibly still be weaker by comparison. After all, a wide range of values will imply a convergent incentive to create optimized technology, yet optimized technology seems compatible with a wide range of values and institutions. And it is not clear that there is a similarly strong pull toward some “optimized” set of values or institutions given optimized technology.

This perspective is arguably also supported by recent history. For example, we have seen technology improve greatly, with computing power heading in a clear upward direction over the past decades. Yet if we look at our values and institutions, it is much less clear whether they have moved in any particular direction over time, let alone an upward direction. Our values and institutions seem to have faced much less of a directional pressure compared to our technology.

More research

Perhaps one of the best things we can do to make better decisions with respect to optimized futures is to do research on such futures. The following are some broad questions that might be worth exploring:

  • What are the likely features and trajectories of optimized futures?
    • Are optimized futures likely to involve conflicts between different optimized civilizations?
    • Other things being equal, is a smaller or a larger number of optimized civilizations generally better for reducing risks of large-scale conflicts?
    • More broadly, is a smaller or larger number of optimized civilizations better for reducing future suffering?
  • What might the likely features and trajectories of optimized futures imply in terms of how we can best influence the future?
  • Are there some values or cooperation mechanisms that would be particularly beneficial to instill in optimized technology?
    • If so, what might they be, and how can we best work to ensure their (eventual) implementation?

Conclusion

The future might in some ways be more predictable than we imagine. I am not claiming to have drawn any clear or significant conclusions about how optimized futures are likely to unfold; I have mostly aired various conjectures. But I do think the question is valuable, and that it may provide a helpful lens for exploring how we can best impact the future.

Acknowledgments

Thanks to Tobias Baumann for helpful comments.

Reasons to doubt that suffering is ontologically prevalent

It is sometimes claimed that we cannot know whether suffering is ontologically prevalent — for example, we cannot rule out that suffering might exist in microorganisms such as bacteria, or even in the simplest physical processes. Relatedly, it has been argued that we cannot trust common-sense views and intuitions regarding the physical basis of suffering.

I agree with the spirit of these arguments, in that I think it is true that we cannot definitively rule out that suffering might exist in bacteria or fundamental physics, and I agree that we have good reasons to doubt common-sense intuitions about the nature of suffering. Nevertheless, I think discussions of expansive views of the ontological prevalence of suffering often present a somewhat unbalanced and, in my view, overly agnostic view of the physical basis of suffering. (By “expansive views”, I do not refer to views that hold that, say, insects are sentient, but rather to views that hold that suffering exists in considerably simpler systems, such as in bacteria or fundamental physics.)

While we cannot definitively rule out that suffering might be ontologically prevalent, I do think that we have strong reasons to doubt it, as well as to doubt the practical importance of this possibility. My goal in this post is to present some of these reasons.


Contents

  1. Counterexamples: People who do not experience pain or suffering
  2. Our emerging understanding of pain and suffering
  3. Practical relevance

Counterexamples: People who do not experience pain or suffering

One argument against the notion that suffering is ontologically prevalent is that we seem to have counterexamples in people who do not experience pain or suffering. For example, various genetic conditions seemingly lead to a complete absence of pain and/or suffering. This, I submit, has significant implications for our views of the ontological prevalence (or non-prevalence) of suffering.

After all, the brains of these individuals include countless subatomic particles, basic biological processes, diverse instances of information processing, and so on, suggesting that none of these are in themselves sufficient to generate pain or suffering.

One might object that the brains of such people could be experiencing suffering — perhaps even intense suffering — that these people are just not able to consciously access. Yet even if we were to grant this claim, it does not change the basic argument that generic processes at the level of subatomic particles, basic biology, etc. do not seem sufficient to create suffering. For the processes that these people do consciously access presumably still entail at least some (indeed probably countless) subatomic particles, basic biological processes, electrochemical signals, different types of biological cells, diverse instances of information processing, and so on. This gives us reason to doubt all views that see suffering as an inherent or generic feature of processes at any of these (quite many) respective levels.

Of course, this argument is not limited to people who are congenitally unable to experience suffering; it applies to anyone who is just momentarily free from noticeable — let alone significant — pain or suffering. Any experiential moment that is free from significant suffering is meaningful evidence against highly expansive views of the ontological prevalence of significant suffering.

Our emerging understanding of pain and suffering

Another argument against expansive views of the prevalence of suffering is that our modern understanding of the biology of suffering gives us reason to doubt such views. That is, we have gained an increasingly refined understanding of the evolutionary, genetic, and neurobiological bases of pain and suffering, and the picture that emerges is that suffering is a complex phenomenon associated with specific genes and neural structures (as exemplified by the above-mentioned genetic conditions that knock out pain and/or suffering).

To be sure, the fact that suffering is associated with specific genes and neural structures in animals does not imply that suffering cannot be created in other ways in other systems. It does, however, suggest that suffering is unlikely to be found in simple systems that do not have remote analogues of these specific structures (since we otherwise should expect suffering to be associated with a much wider range of structures and processes, not such an intricate and narrowly delineated set).

By analogy, consider the experience of wanting to go to a Taylor Swift concert so as to share the event with your Instagram followers. Do we have reason to believe that fundamental particles such as electrons, or microorganisms such as bacteria, might have such experiences? To go a step further, do we have reason to be agnostic as to whether electrons or bacteria might have such experiences?

These questions may seem too silly to merit contemplation. After all, we know that having a conscious desire to go to a concert for the purpose of online sharing requires rather advanced cognitive abilities that, at least in our case, are associated with extremely complex structures in the brain — not to mention that it requires an understanding of a larger cultural context that is far removed from the everyday concerns of electrons and bacteria. But the question is why we would see the case of suffering as being so different.

Of course, one might object that this is a bad analogy, since the experience described above is far more narrowly specified than is suffering as a general class of experience. I would agree that the experience described above is far more specific and unusual, but I still think the basic point of the analogy holds, in that our understanding is that suffering likewise rests on rather complex and specific structures (when it occurs in animal brains) — we might just not intuitively appreciate how complex and distinctive these structures are in the case of suffering, as opposed to in the Swift experience.

It seems inconsistent to allow ourselves to apply our deeper understanding of the Swift experience to strongly downgrade our credence in electron- or bacteria-level Swift experiences, while not allowing our deeper understanding of pain and suffering to strongly downgrade our credence in electron- or bacteria-level pain and suffering, even if the latter downgrade should be comparatively weaker (given the lower level of specificity of this broader class of experiences).

Practical relevance

It is worth stressing that, in the context of our priorities, the question is not whether we can rule out suffering in simple systems like electrons or bacteria. Rather, the question is whether the all-things-considered probability and weight of such hypothetical suffering is sufficiently large for it to merit any meaningful priority relative to other forms of suffering.

For example, one may hold a lexical view according to which no amount of putative “micro-discomfort” that we might ascribe to electrons or bacteria can ever be collectively worse than a single instance of extreme suffering. Likewise, even if one does not hold a strictly lexical view in theory, one might still hold that the probability of suffering in simple systems is so low that, relative to the expected prevalence of other kinds of suffering, it is so strongly dominated as to merit practically no priority by comparison (cf. “Lexical priority to extreme suffering — in practice”).

After all, the risk of suffering in simple systems would not only have to be held up against the suffering of all currently existing animals on Earth, but also against the risk of worst-case outcomes that involve astronomical numbers of overtly tormented beings. In this broader perspective, it seems reasonable to believe that the risk of suffering in simple systems is massively dwarfed by the risk of such astronomical worst-case outcomes, partly because the latter risk seems considerably less speculative, and because it seems far more likely to involve the worst instances of suffering.

Relatedly, just as we should be open to considering the possibility of suffering in simple systems such as bacteria, it seems that we should also be open to the possibility that spending a lot of time contemplating this issue — and not least trying to raise concern for it — might be an enormous opportunity cost that will overall increase extreme suffering in the future (e.g. because it distracts people from more important issues, or because it pushes people toward dismissing suffering reducers as absurd or crazy).

To be clear, I am not saying that contemplating this issue in fact is such an opportunity cost. My point is simply that it is important not to treat highly speculative possibilities in a manner that is too one-sided, such that we make one speculative possibility disproportionately salient (e.g. there might be a lot of suffering in microorganisms or in fundamental physics), while neglecting to consider other speculative possibilities that may in some sense “balance out” the former (e.g. that prioritizing the risk of suffering in simple systems significantly increases extreme suffering).

In more general terms, it can be misleading to consider Pascalian wagers if we do not also consider their respective “counter-Pascalian” wagers. For example, what if believing in God actually overall increases the probability of you experiencing eternal suffering, such as by marginally increasing the probability that future people will create infinite universes that contain infinitely many versions of you that get tortured for life?

In this way, our view of Pascal’s wager may change drastically when we go beyond its original one-sided framing and consider a broader range of possibilities, and the same applies to Pascalian wagers relating to the purported suffering of simple entities like bacteria or electrons. When we consider a broader range of speculative hypotheses, it is hardly clear whether we should overall give more or less consideration to such simple entities than we currently do, at least when compared to how much consideration and priority we give to other forms of suffering.

Distrusting salience: Keeping unseen urgencies in mind

The psychological appeal of salient events and risks can be a major hurdle to optimal altruistic priorities and impact. My aim in this post is to outline a few reasons to approach our intuitive fascination with salient events and risks with a fair bit of skepticism, and to actively focus on that which is important yet unseen, hiding in the shadows of the salient.


Contents

  1. General reasons for caution: Availability bias and related biases
  2. The news: A common driver of salience-related distortions
  3. The narrow urgency delusion
  4. Massive problems that always face us: Ongoing moral disasters and future risks
  5. Salience-driven distortions in efforts to reduce s-risks
  6. Reducing salience-driven distortions

The human mind is subject to various biases that involve an overemphasis on the salient, i.e. that which readily stands out and captures our attention.

In general terms, there is the availability bias, also known as the availability heuristic, namely the common tendency to base our beliefs and judgments on information that we can readily recall. For example, we tend to overestimate the frequency of events when examples of these events easily come to mind.

Closely related is what is known as the salience bias, which is the tendency to focus on and overweight salient features and events when making decisions. For instance, when deciding to buy a given product, the salience bias may lead us to give undue importance to a particularly salient feature of that product — e.g. some fancy packaging — while neglecting less salient yet perhaps more relevant features.

A similar bias is the recency bias: our tendency to give disproportionate weight to recent events in our belief-formation and decision-making. This bias is in some sense predicted by the availability bias, since recent events tend to be more readily available to our memory. Indeed, the availability bias and the recency bias are sometimes considered equivalent, even though it seems more accurate to view the recency bias as a consequence or a subset of the availability bias; after all, readily remembered information does not always pertain to recent events.

Finally, there is the phenomenon of belief digitization, which is the tendency to give undue weight to (what we consider) the single most plausible hypothesis in our inferences and decisions, even when other hypotheses also deserve significant weight. For example, if we are considering hypotheses A, B, and C, and we assign them the probabilities 50 percent, 30 percent, and 20 percent, respectively, belief digitization will push us toward simply accepting A as though it were true. In other words, belief digitization pushes us toward altogether discarding B and C, even though B and C collectively have the same probability as A. (See also related studies on Salience Theory and on the overestimation of salient causes and hypotheses in predictive reasoning.)
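
As a toy illustration of why this matters for decisions, consider the 50/30/20 case above together with an invented payoff structure (the payoffs are purely illustrative, not taken from any study): an option that yields +1 if A is true but -2 if B or C is true. Weighting all three hypotheses gives the opposite verdict from simply acting as though A were true.

```python
# Toy sketch: probabilities match the 50/30/20 example above; the payoffs
# (+1 if A is true, -2 if B or C is true) are invented purely for illustration.
probs = {"A": 0.5, "B": 0.3, "C": 0.2}
value_if_true = {"A": 1.0, "B": -2.0, "C": -2.0}

# Proper evaluation: weight every hypothesis by its probability.
expected_value = sum(probs[h] * value_if_true[h] for h in probs)

# "Digitized" evaluation: act as though the single most probable hypothesis is true.
top_hypothesis = max(probs, key=probs.get)
digitized_value = value_if_true[top_hypothesis]

print(f"Expected value over all hypotheses: {expected_value:+.2f}")           # -0.50
print(f"Value if we simply assume {top_hypothesis}: {digitized_value:+.2f}")  # +1.00
# Discarding B and C flips the sign: a net-negative option looks positive.
```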

All of the biases mentioned above can be considered different instances of a broader cluster of availability/salience biases, and they each give us reason to be cautious of the influence that salient information has on our beliefs and our priorities.

One way in which our attention can become preoccupied with salient (though not necessarily crucial) information is through the news. Much has been written against spending a lot of time on the news, and the reasons against it are probably even stronger for those who are trying to spend their time and resources in ways that help sentient beings most effectively.

For even if we grant that there is substantial value in following the news, it seems plausible that the opportunity costs are generally too high, in terms of what one could instead spend one’s limited time learning about or advocating for. Moreover, there is a real risk that a preoccupation with the news has outright harmful effects overall, such as by gradually pulling one’s focus away from the most important problems and toward less important and less neglected problems. After all, the prevailing news criteria or news values decidedly do not reflect the problems that are most important from an impartial perspective concerned with the suffering of all sentient beings.

I believe the same issue exists in academia: a certain topic becomes fashionable, there are calls for abstracts, and there is a strong pull to write and talk about that topic. And while it may indeed be important to talk and write about those topics for the purpose of getting ahead — or not falling behind — in academia, it seems more doubtful whether such topical talk is at all well-adapted for the purpose of making a difference in the world. In other words, the “news values” of academia are not necessarily much better than the news values of mainstream journalism.

The narrow urgency delusion

A salience-related pitfall that we can easily succumb to when following the news is what we may call the “narrow urgency delusion”. This is when the news covers some specific tragedy and we come to feel, at a visceral level, that this tragedy is the most urgent problem that is currently taking place. Such a perception is, in a very important sense, an illusion.

The reality is that tragedy on an unfathomable scale is always occurring, and the tragedies conveyed by the news are sadly but a tiny fraction of the horrors that are constantly taking place around us. Yet the tragedies that are always occurring, such as children who suffer and die from undernutrition and chickens who are boiled alive, are so common and so underreported that they all too readily fade from our moral perception. To our intuitions, these horrors seemingly register as mere baseline horror — as unsalient abstractions that carry little felt urgency — even though the horrors in question are every bit as urgent as the narrow sliver of salient horrors conveyed in the news (Vinding, 2020, sec. 7.6).

We should thus be clear that the delusion involved in the narrow urgency delusion is not the “urgency” part — there is indeed unspeakable horror and urgency involved in the tragedies reported by the news. The delusion rather lies in the “narrow” part; we find ourselves in a condition that contains extensive horror and torment, all of which merits compassion and concern.

So it is not that the salient victims are less important than what we intuitively feel, but rather that the countless victims whom we effectively overlook are far more important than what we (do not) feel.

Massive problems that always face us: Ongoing moral disasters and future risks

The following are some of the urgent problems that always face us, yet which are often less salient to us than the individual tragedies that are reported in the news:

These common and ever-present problems are, by definition, not news, which hints at the inherent ineffectiveness of news when it comes to giving us a clear picture of the reality we inhabit and the problems that confront us.

As the final entry on the list above suggests, the problems that face us are not limited to ongoing moral disasters. We also face risks of future atrocities, potentially involving horrors on an unprecedented scale. Such risks will plausibly tend to feel even less salient and less urgent than do the ongoing moral disasters we are facing, even though our influence on these future risks — and future suffering in general — could well be more consequential given the vast scope of the long-term future.

So while salience-driven biases may blind us to ongoing large-scale atrocities, they probably blind us even more to future suffering and risks of future atrocities.

Salience-driven distortions in efforts to reduce s-risks

There are many salience-related hurdles that may prevent us from giving significant priority to the reduction of future suffering. Yet even if we do grant a strong priority to the reduction of future suffering, including s-risks in particular, there are reasons to think that salience-driven distortions still pose a serious challenge in our prioritization efforts.

Our general availability bias gives us some reason to believe that we will overemphasize salient ideas and hypotheses in efforts to reduce future suffering. Yet perhaps more compelling are the studies on how we tend to greatly overestimate salient hypotheses when we engage in predictive and multi-stage reasoning in particular. (Multi-stage reasoning is when we make inferences in successive steps, such that the output of one step provides the input for the next one.)

After all, when we are trying to predict the main sources of future suffering, including specific scenarios in which s-risks materialize, we are very much engaging in predictive and multi-stage reasoning. Therefore, we should arguably expect our reasoning about future causes of suffering to be too narrow by default, with a tendency to give too much weight to a relatively small set of salient risks at the expense of a broader class of less salient (yet still significant) risks that we are prone to dismiss in our multi-stage inferences and predictions.

This effect can be further reinforced through other mechanisms. For example, if we have described and explored — or even just imagined — a certain class of risks in greater detail than other risks, then this alone may lead us to regard those more elaborately described risks as being more likely than less elaborately explored scenarios. Moreover, if we find ourselves in a group of people who focus disproportionately on a certain class of future scenarios, this may further increase the salience and perceived likelihood of these scenarios, compared to alternative scenarios that may be more salient in other groups and communities.

Reducing salience-driven distortions

The pitfalls mentioned above seem to suggest some concrete ways in which we might reduce salience-driven distortions in efforts to reduce future suffering.

First, they recommend caution about the danger of neglecting less salient hypotheses when engaging in predictive and multi-stage reasoning. Specifically, when thinking about future risks, we should be careful not to simply focus on what appears to be the single greatest risk, and to effectively neglect all others. After all, even if the risk we regard as the single greatest risk indeed is the single greatest risk, that risk might still be fairly modest compared to the totality of future risks, and we might still do better by deliberately working to reduce a relatively broad class of risks.

Second, the tendency to judge scenarios to be more likely when we have thought about them in detail would seem to recommend that we avoid exploring future risks in starkly unbalanced ways. For instance, if we have explored one class of risks in elaborate detail while largely neglecting another, it seems worth trying to outline concrete scenarios that exemplify the more neglected class of risks, so as to correct any potentially unjustified disregard of their importance and likelihood.

Third, the possibility that certain ideas can become highly salient in part for sociological reasons may recommend a strategy of exchanging ideas with, and actively seeking critiques from, people who do not fully share the outlook that has come to prevail in one’s own group.

In general, it seems that we are likely to underestimate our empirical uncertainty (Vinding, 2020, sec. 9.1-9.2). The space of possible future outcomes is vast, and any specific risk that we may envision is but a tiny subset of the risks we are facing. Hence, our most salient ideas regarding future risks should ideally be held up against a big question mark that represents the many (currently) unsalient risks that confront us.

Put briefly, we need to cultivate a firm awareness of the limited reliability of salience, and a corresponding awareness of the immense importance of the unsalient. We need to make an active effort to keep unseen urgencies in mind.

The catastrophic rise of insect farming and its implications for future efforts to reduce suffering

On the 17th of August 2021, the EU authorized the use of insects as feed for farmed animals such as chickens and pigs. This was a disastrous decision for sentient beings, as it may greatly increase the number of beings who will suffer in animal agriculture. Sadly, this was just one in a series of disastrous decisions that the EU has made regarding insect farming in the last couple of years. Most recently, in February 2022, it authorized the farming of house crickets for human consumption, after having made similar decisions for the farming of mealworms and migratory locusts in 2021.

Many such catastrophic decisions probably lie ahead, seeing that the EU is currently reviewing applications for the farming of nine additional kinds of insects. This brief post reviews some reflections and potential lessons in light of these harmful legislative decisions.


Contents

  1. Could we have done better?
  2. How can we do better going forward?
    1. Questioning our emotions
    2. Seeing the connection between current institutional developments and s-risks
    3. Proactively searching for other important policy areas and interventions

Could we have done better?

The most relevant aspect to reflect upon in light of these legislative decisions is their strategic implications. Could we have done better? And what could we do better going forward?

I believe the short answer to the first question is a resounding “yes”. I believe that the animal movement could have made far greater efforts to prevent this development (which is not saying much, since the movement at large does not appear to have made much of an effort to prevent this disaster). I am not saying that such efforts would have prevented this development for sure, but I believe that they would have reduced the probability and expected scale of it considerably, to such an extent that it would have been worthwhile to pursue such preventive efforts.

In concrete terms, these efforts could have included:

  • Wealthy donors such as Open Philanthropy making significant donations toward preventing the expansion of insect farming (e.g. investing in research and campaigning work).
  • Animal advocates exploring and developing the many arguments for preventing such an expansion.
  • Animal advocates disseminating these arguments to the broader public, to fellow advocates, and to influential people and groups (e.g. politicians and policy advisors).

It is important to note that efforts of this kind not only had the potential to change a few significant policy decisions; they could potentially have prevented — or at least reduced — a whole cascade of harmful policy decisions. Of course, having such an impact might still be possible today, even if it is a lot more difficult at this later stage where the momentum for insect farming already appears strong and growing.

As Abraham Rowe put it, “not working on insect farming over the last decade may come to be one of the largest regrets of the EAA [Effective Animal Activist] community in the near future.” 

How can we do better going forward?

When asking how we can do better, I am particularly interested in what lessons we can draw in our efforts to reduce risks of astronomical future suffering (s-risks). After all, the EU’s recent decisions to allow various kinds of insect farming will not only cause enormous amounts of suffering for insects in the near future, but they arguably also increase s-risks to a non-negligible extent, such as by (somewhat) increasing the probability that insects and other small beings will be harmed on an astronomical scale in the future.

So institutional decisions like these seem relevant for our efforts to reduce s-risks, and our failure to prevent these detrimental decisions can plausibly provide relevant lessons for our priorities and strategies going forward. (An implication of these harmful developments that I will not dive into here is that they give us further reason to be pessimistic about the future.)

The following are some of the lessons that tentatively stand out to me.

Questioning our emotions

I suspect that one of the main reasons that insect farming has been neglected by animal advocates is that it fails to appeal to our emotions. A number of factors plausibly contribute to this low level of emotional appeal. For instance, certain biases may prevent proper moral consideration for insects in general, and scope insensitivity may prevent us from appreciating the large number of insects who will suffer due to insect farming. (I strongly doubt that anyone is exempt from these biases, and I suspect that even people who are aware of them might still have neglected insect farming partly as a consequence of these biases.)

Additionally, we may have a bias to focus too narrowly on the suffering that is currently taking place rather than (also) looking ahead to consider how new harmful practices and sources of suffering might emerge, potentially on far greater scales than what we currently see. Reducing the risk of such novel atrocities occurring on a vast scale might not feel as important as reducing existing forms of suffering. Yet the recent rise of insect farming, and the fact that we likely could have done effective work to prevent it, suggest that such feelings are not reliable.

Seeing the connection between current institutional developments and s-risks

When thinking about s-risks, it can be easy to fall victim to excessively abstract thinking and (what I have called) “long-term nebulousness bias” — i.e. a tendency to overlook concrete data and opportunities relevant to long-term influence. In particular, the abstract nature of s-risks may lead us to tacitly believe that good opportunities to influence policy (relative to s-risk reduction) can only be found well into the future, and perhaps even to assume that there is not much of significance that we can do to reduce s-risks at the policy level today.

Yet I think the case of insect farming is a counterexample to such beliefs. To be clear, I am not saying that insect farming is necessarily the most promising policy area that we can focus on with respect to s-risk reduction. But it is plausibly a significant one, and one that those trying to reduce s-risks should arguably have paid more attention to in the past. And it still appears to merit greater focus today.

Proactively searching for other important policy areas and interventions

As hinted above, the catastrophic rise of insect farming is in some sense a proof of concept that there are policy decisions in the making that plausibly have a meaningful impact on s-risks. More than that, the case of insect farming might be an example where policy decisions in our time could be fairly pivotal — whether we see a ban on insect farming versus a rapidly unfolding cascade of policy decisions that swiftly expand insect farming might make a big difference, not least because such a cascade could leave us in a position where there is more institutional, financial, and value-related momentum in favor of insect farming (e.g. if massive industries with lobby influence have already emerged around it, and if most people already consider farmed insects an important part of their diet).

This suggests a critical lesson: those working to reduce s-risks have good reason to search for similar, potentially even more influential policy decisions that are being made today or in the near future. By analogy to how animal advocates likely could have made a significant difference (in expectation) if they had campaigned against the expansion of insect farming over the last decade, we may now do well by looking decades ahead and considering which pivotal policy decisions we might now be in a good position to influence. The need for such a proactive search effort could be the most important takeaway in light of this recent string of disastrous decisions.
