A virtue-based approach to reducing suffering given long-term cluelessness

This post is a follow-up to my previous essay on reducing suffering given long-term cluelessness. Long-term cluelessness is the idea that we have no clue which actions are likely to create better or worse consequences across the long-term future. In my previous post, I argued that even if we grant long-term cluelessness (a premise I remain skeptical of), we can still steer by purely consequentialist views that do not entail cluelessness and that can ground a focus on effective suffering reduction.

In this post, I will outline an alternative approach centered on virtues. I argue that even if we reject or find no guidance in any consequentialist view, we can still plausibly adopt a virtue-based approach to reducing suffering, including effective suffering reduction. Such an approach can help guide us independently of consequentialist uncertainty.


Contents

  1. What would a virtue-based approach entail?
  2. Justifications for a virtue-based approach
  3. A virtue-based approach to effective suffering reduction
  4. Conclusion

What would a virtue-based approach entail?

It can be difficult to say exactly what a virtue-based approach to reducing suffering would entail. Indeed, an absence of clear and simple rules, together with an emphasis on responding wisely to ambiguity based on good practical judgment, is typical of virtue-based approaches in ethics.

That said, in the broadest terms, a virtue-based approach to suffering involves having morally appropriate attitudes, sentiments, thoughts, and behaviors toward suffering. It involves relating to suffering in the way that a morally virtuous person would relate to it.

Perhaps more straightforwardly, we can say what a virtue-based approach would definitely not involve. For example, it would obviously not involve extreme vices like sadism or cruelty, nor would it involve more common yet still serious vices like being indifferent or passive in the face of suffering.

However, a virtue-based approach would not merely involve the morally unambitious aim of avoiding serious vices. It would usually be much more ambitious than that, encouraging us to aim for moral excellence across all aspects of our character — having deep sympathy and compassion, striving to be proactively helpful, having high integrity, and so on.

In this way, a virtue-based approach may invert an intuitive assumption about the implications of cluelessness. That is, rather than seeing cluelessness as a devastating consideration that potentially opens the floodgates to immoral or insensitive behavior, we can instead see it as paving the way for a focus on moral excellence. After all, if no consequentialist reasons count against a strong focus on moral excellence under assumed cluelessness, then arguably the strongest objections against such a focus fall away. As a result, we might no longer have any plausible reason not to pursue moral excellence in our character and conduct. At a minimum, we would no longer have any convenient consequentialist-framed rationalizations for our vices.

Sure, we could retreat to simply being insensitive and disengaged in the face of suffering — or even retreat to much worse vices — but I will argue that those options are less plausible.

Justifications for a virtue-based approach

There are various possible justifications for the approach outlined above. For example, one justification might be that having excellent moral character simply reflects the kind of person we ideally want to be. For some of us, such a personal desire might in itself be a sufficient reason for adopting a virtue-based approach in some form.

Complementary justifications may derive from our moral intuitions. For instance, all else equal, we might find it intuitive that it is morally preferable to embody excellent moral character than to embody serious vices, or that it is more ethical to display basic moral virtues than to lack such virtues (see also Knutsson, 2023, sec. 7.4). (Note that this differs from the justification above in that we need not personally want to be virtuous in order to have the intuition that it is more ethical to be that way.)

We may also find some justification in contractualist considerations or considerations about what kind of society we would like to live in. For example, we may ideally want to live in a society in which people adhere to virtues of compassion and care for suffering, as well as virtues of effectiveness in reducing suffering (more on this in the next section). Under contractualist-style moral frameworks, favoring such a society would in turn give us moral reason to adhere to those virtues ourselves.

A virtue-based approach might likewise find support if we consider specific cases. For example, imagine that you are a powerful war general whose soldiers are committing heinous atrocities that you have the power to stop — with senseless torture occurring on a large scale that you can halt immediately. And imagine that, given your subjective beliefs, your otherwise favored moral views all fail to give any guidance in this situation (e.g. due to uncertainty about long-term consequences). In contrast, ending the torture would obviously be endorsed by any commonsense virtue-based stance, since that is simply what a virtuous, compassionate person would do regardless of long-term uncertainty. If we agree that ending the torture is the morally right response in a case like this, then this arguably lends some support to such a virtue-based stance (as well as to other moral stances that imply the same response).

In general terms, we may endorse a virtue-based approach partly because it provides an additional moral safety net that we can fall back on when other approaches fail. That is, even if we find it most plausible to rely on other views when these provide practical recommendations, we might still find it reasonable to rely on virtue-based approaches in case those other views fall silent. Having virtue ethics as such a supportive layer can help strengthen our foundation and robustness as moral agents.

(One could also attempt to justify a virtue-based approach by appealing to consequentialist reasoning. Indeed, it could be that promoting a non-consequentialist virtue-based stance would ultimately create better consequences than not doing so. For example, the absence of such a virtue-based stance might increase the risk of extremely harmful behavior among moral agents. However, such arguments would involve premises that are not the focus of this post.)

A virtue-based approach to effective suffering reduction

One might wonder whether a virtue-based approach can ground effective suffering reduction of any kind. That is, can a virtue-based approach ground systematic efforts to reduce suffering effectively with our limited resources? In short, yes. If one deems it virtuous to try to reduce suffering in systematic and effective ways (at least in certain decisions or domains), then a virtue-based approach could provide a moral foundation for such efforts.

For instance, if given a choice between saving 10 versus 1,000 chickens from being boiled alive, we may consider it more virtuous — more compassionate and principled — to save the 1,000, even if we had no idea whether that choice ultimately reduces more suffering across all time or across all consequences that we could potentially assess.

To take a more realistic example: in a choice between donating either to a random charity or to a charity with a strong track record of preventing suffering, we might consider it more virtuous to support the latter, even if we do not know the ultimate consequences.

How would such a virtue-based approach be different from a consequentialist approach? Broadly speaking, there can be two kinds of differences. First, a virtue-based approach might differ from a consequentialist one in terms of its practical implications. For instance, in the donation example above, a virtue-based approach might recommend that we donate to the charity with a track record of suffering prevention even if we are unable to say whether it reduces suffering across all time or across all consequences that we could potentially assess.

Second, even if a virtue-based view had all the same practical implications as some consequentialist view, there would still be a difference in the underlying normative grounding or basis of these respective views. The consequentialist view would be grounded purely in the value of consequences, whereas the virtue-based view would not be grounded purely in that (even if the disvalue of suffering may generally be regarded as the most important consideration). Instead, the virtue-based approach would (also) be grounded at least partly in the kind of person it is morally appropriate to be — the kind of person who embodies a principled and judicious compassion, among other virtues (see e.g. the opening summary in Hursthouse & Pettigrove, 2003).

In short, virtue-based views represent a distinctive way in which some version of effective suffering reduction can be grounded.

Conclusion

There are many possible moral foundations for reducing suffering (see e.g. Vinding, 2020, ch. 6; Knutsson & Vinding, 2024, sec. 2). Even if we find one particular foundation to be most plausible by far, we are not forced to rest absolutely everything on such a singular and potentially brittle basis. Instead, we can adopt many complementary foundations and approaches, including an approach centered on excellent moral character that can guide us when other frameworks might fail. I think that is a wiser approach.

Reducing suffering given long-term cluelessness

An objection against trying to reduce suffering is that we cannot predict whether our actions will reduce or increase suffering in the long term. Relatedly, some have argued that we are clueless about the effects that any realistic action would have on total welfare, and this cluelessness, it has been claimed, undermines our reason to help others in effective ways. For example, DiGiovanni (2025) writes: “if my arguments [about cluelessness] hold up, our reason to work on EA causes is undermined.”

There is a grain of truth in these claims: we face enormous uncertainty when trying to reduce suffering on a large scale. Of course, whether we are bound to be completely clueless about the net effects of any action is a much stronger and more controversial claim (and one that I am not convinced of). Yet my goal here is not to discuss the plausibility of this claim. Rather, my goal is to explore the implications if we assume that we are bound to be clueless about whether any given action overall reduces or increases suffering.

In other words, without taking a position on the conditional premise, what would be the practical implications if such cluelessness were unavoidable? Specifically, would this undermine the project of reducing suffering in effective ways? I will argue not. Even if we grant complete cluelessness and thus grant that certain moral views provide no practical recommendations, we can still reasonably give non-zero weight to other moral views that do provide practical recommendations. Indeed, we can find meaningful practical recommendations even if we hold a purely consequentialist view that is exclusively concerned with reducing suffering.


Contents

  1. A potential approach: Giving weight to scope-adjusted views
  2. Asymmetry in practical recommendations
  3. Toy models
  4. Justifications and motivations
    1. Why give weight to multiple views?
    2. Why give weight to a scope-adjusted view?
  5. Arguments I have not made
  6. Conclusion
  7. Acknowledgments

A potential approach: Giving weight to scope-adjusted views

There might be many ways to ground a reasonable focus on effective suffering reduction even if we assume complete cluelessness about long-term consequences. Here, I will merely outline one candidate option, or class of options, that strikes me as fairly reasonable.

As a way to introduce this approach, say that we fully accept consequentialism in some form (notwithstanding various arguments against being a pure consequentialist, e.g. Knutsson, 2023; Vinding, 2023). Yet despite being fully convinced of consequentialism, we are uncertain or divided about which version of consequentialism is most plausible.

In particular, while we give most weight to forms of consequentialism that entail no restrictions or discounts in their scope, we also give some weight to views that entail a more focused scope. (Note that this kind of approach need not be framed in terms of moral uncertainty, which is just one possible way to frame it. An alternative is to think in terms of degrees of acceptance or levels of agreement with these respective views, cf. Knutsson, 2023, sec. 6.6.)

To illustrate with some specific numbers, say that we give 95 percent credence to consequentialism without scope limitations or adjustments of any kind, and 5 percent credence to some form of scope-adjusted consequentialism. The latter view may be construed such that its scope roughly includes those consequences we can realistically estimate and influence without being clueless. This view is similar to what has been called “reasonable consequentialism”, the view that “an action is morally right if and only if it has the best reasonably expected consequences.” It is also similar to versions of consequentialism that are framed in terms of foreseeable or reasonably foreseeable consequences (Sinnott-Armstrong, 2003, sec. 4).

To be clear, the approach I am exploring here is not committed to any particular scope-adjusted view. The deeper point is simply that we can give non-zero weight to one or more scope-adjusted versions of consequentialism, or to scope-adjusted consequentialist components of a broader moral view. Exploring which scope-adjusted view or views might be most plausible is beyond the aims of this essay, and that question arguably warrants deeper exploration.

That being said, I will mostly focus on views centered on (something like) consequences we can realistically assess and be guided by, since something in this ballpark seems like a relatively plausible candidate for scope-adjustment. I acknowledge that there are significant challenges in clarifying the exact nature of this scope, which is likely to remain an open problem subject to continual refinement. After all, the scope of assessable consequences may grow as our knowledge and predictive power grow.

Asymmetry in practical recommendations

The relevance of the approach outlined above becomes apparent when we evaluate the practical recommendations of the clueless versus non-clueless views incorporated in this approach. A completely clueless consequentialist view would give us no recommendations about how to act, whereas a non-clueless scope-adjusted view would give us practical recommendations. (It would do so by construction if its scope includes those consequences we can realistically estimate and influence without being clueless.)

In other words, the resulting matrix of recommendations from those respective views is that the non-clueless view gives us substantive guidance, while the clueless view suggests no alternative and hence has nothing to add to those recommendations. Thus, if we hold something like the 95/5 combined consequentialist view described above — or indeed any non-zero split between these component views — it seems that we have reason to follow the non-clueless view, all things considered.
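To make the structure of this asymmetry explicit, here is a minimal sketch in Python. The option names and scores are made up for illustration, and modeling the clueless view as contributing nothing to any option is a simplification of the argument rather than a formal decision theory; the point is only that any non-zero weight on the non-clueless view settles the choice.

```python
# Hypothetical illustration of the practical asymmetry (names and scores are made up).
# The completely clueless view offers no recommendation, modeled here as contributing
# nothing to any option, so any non-zero weight on the non-clueless, scope-adjusted
# view is enough to determine the overall recommendation.

def overall_recommendation(options, weight_scope_adjusted, scope_adjusted_score):
    # Only the scope-adjusted view's scores can distinguish the options.
    return max(options, key=lambda o: weight_scope_adjusted * scope_adjusted_score[o])

options = ["donate to a random charity", "donate to a charity with a strong track record"]
scores = {options[0]: 0.2, options[1]: 0.9}  # made-up assessable-consequence scores

# The recommendation is the same whether the scope-adjusted view gets 5% or 0.1% weight.
print(overall_recommendation(options, 0.05, scores))
print(overall_recommendation(options, 0.001, scores))
```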

Toy models

To give a sense of what a scope-adjusted view might look like, we can consider a toy model with an exponential discount factor and an (otherwise) expected linear increase in population size.

Under this view, 99 percent of the total expected value we can influence is found within the next 700 years, meaning that almost all the value we can meaningfully influence falls within that horizon.

We can also consider a model with a different discount factor and with cubic growth, reflecting the possibility of space expansion radiating from Earth.

On this model, virtually all the expected value we can meaningfully influence is found within the next 10,000 years. In both of the models above, we end up with a sort of de facto “medium-termism”.
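For concreteness, here is a minimal sketch of the two toy models in Python. The discount rates and growth curves are illustrative assumptions chosen to roughly reproduce the horizons mentioned above (about 1 percent per year with linear growth, and about 0.1 percent per year with cubic growth); they are not intended as the uniquely correct parameter choices.

```python
# Minimal sketch of the two toy models above. The discount rates and growth
# curves are illustrative assumptions, not canonical parameters.
import numpy as np

def value_horizon(growth, discount_rate, share=0.99, max_years=200_000):
    """Year by which `share` of the total discounted expected value has accrued,
    given a population growth curve and an exponential annual discount rate."""
    years = np.arange(1, max_years + 1)
    yearly_value = growth(years) * np.exp(-discount_rate * years)
    cumulative_share = np.cumsum(yearly_value) / yearly_value.sum()
    return int(years[np.searchsorted(cumulative_share, share)])

# Model 1: linear population growth, ~1% annual discount.
print(value_horizon(lambda t: t, discount_rate=0.01))       # roughly 660 years

# Model 2: cubic growth (space expansion), ~0.1% annual discount.
print(value_horizon(lambda t: t**3, discount_rate=0.001))   # roughly 10,000 years
```

As the next paragraph notes, varying these parameters can shift the resulting horizon to much shorter or much longer timescales.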

Of course, one can vary the parameters in numerous ways and combine multiple models in ways that reflect more sophisticated views of, for example, expected future populations and discount factors. Views that involve temporal discounting allow for much greater variation than what is captured by the toy models above, including views that focus on much shorter or much longer timescales. Moreover, views that involve discounting need not be limited to temporal discounting in particular, or even be phrased in terms of temporal discounting at all. Temporal discounting is one way to incorporate discounting or scope-adjustments, but by no means the only one.

Furthermore, if we give some plausibility to views that involve discounting of some kind, we need not be committed to a single view for every single domain. We may hold that the best view, or the view we give the greatest weight, will vary depending on the issue at hand (cf. Dancy, 2001; Knutsson, 2023, sec. 3). A reason for such variability may be that the scope of outcomes we can meaningfully predict often differs significantly across domains. For example, there is a stark difference in the predictability of weather systems versus planetary orbits, and similar differences in predictability might be found across various practical and policy-relevant domains.

Note also that a non-clueless scope-adjusted view need not be rigorously formalized; it could, for example, be phrased in terms of our all things considered assessments, which might be informed by myriad formal models, intuitions, considerations, and so on.

Justifications and motivations

What might justify or motivate the basic approach outlined above? This question can be broken into two sub-questions. First, why give weight to more than just a single moral view? Second, provided we give some weight to more than a single view, why give any weight to a scope-adjusted view concerned with consequences?

Why give weight to multiple views?

Reasons for giving weight to more than a single moral view or theory have been explored elsewhere (see e.g. Dancy, 2001; MacAskill et al., 2020, ch. 1; Knutsson, 2023; Vinding, 2023).

One of the reasons that have been given is that no single moral theory seems able to give satisfying answers to all moral questions (Dancy, 2001; Knutsson, 2023). And even if our preferred moral theory appears to be a plausible candidate for answering all moral questions, it is arguably still appropriate to have less than perfect confidence or acceptance in that theory (MacAskill et al., 2020, ch. 1; Vinding, 2023). Such moderation might be grounded in epistemic modesty and humility, a general skepticism toward fanaticism, and the prudence of diversifying one’s bets. It might also be grounded partly in the observation that other thoughtful people hold different moral views and that there is something to be said in favor of those views.

Likewise, giving exclusive weight to a single moral view might make us practically indifferent or paralyzed, whether it be due to cluelessness or due to underspecification as to what our preferred moral theory implies in some real-world situation. Critically, such practical indifference and paralysis may arise even in the face of the most extreme atrocities. If we find this to be an unreasonable practical implication, we arguably have reason not to give exclusive weight to a moral view that potentially implies such paralysis.

Finally, from a perspective that involves degrees of acceptance or agreement with moral views, a reason for giving weight to multiple views might simply be that those moral views each seem intuitively plausible or that we intuitively agree with them to some extent (cf. Knutsson, 2023, sec. 6.6).

Why give weight to a scope-adjusted view?

What reasons could be given for assigning weight to a scope-adjusted view in particular? One reason may be that it seems reasonable to be concerned with consequences to the extent that we can realistically estimate and be guided by them. That is arguably a sensible and intuitive scope for concern about consequences — or at least it appears sensible to some non-zero degree. If we hold this intuition, even if just to a small degree, it seems reasonable to have a final view in which we give some weight to a view focused on realistically assessable consequences (whatever the scope of those consequences ultimately turns out to be).

Some support may also be found in our moral assessments and stances toward local cases of suffering. For example, if we were confronted with an emergency situation in which some individuals were experiencing intense suffering in our immediate vicinity, and if we were readily able to alleviate this suffering, it would seem morally right to help these beings even if we cannot foresee the long-run consequences. (All theoretical and abstract talk aside, I suspect the vast majority of consequentialists would agree with that position in practice.)

Presumably, at least part of what would make such an intervention morally right is the badness of the suffering that we prevent by intervening. And if we hold that it is morally appropriate to intervene to reduce suffering in cases where we can immediately predict the consequences of doing so — namely that we alleviate the suffering right in front of us — it seems plausible to hold that this stance also generalizes to consequences that are less immediate. In other words, if this stance is sound in cases of immediate suffering prevention — or even if it just has some degree of soundness in such cases — it plausibly also has some degree of soundness when it comes to suffering prevention within a broader range of consequences that we can meaningfully estimate and influence.

This is also in line with the view that we have (at least somewhat) greater moral responsibility toward that which occurs within our local sphere of assessable influence. This view is related to, and may be justified in terms of, the “ought implies can” principle. After all, if we are bound to be clueless and unable to deliberately influence very long-run consequences, then, if we accept some version of the “ought implies can” principle, it seems that we cannot have any moral responsibility or moral duties to deliberately shape those long-run consequences — or at least such moral responsibility is plausibly diminished. In contrast, the “ought implies can” principle is perfectly consistent with moral responsibility within the scope of consequences that we realistically can estimate and deliberately influence in a meaningful way.

Thus, if we give some weight to an “ought implies can” conception of moral responsibility, this would seem to support the idea that we have (at least somewhat) greater moral responsibility toward that which occurs within our sphere of assessable influence. An alternative way to phrase it might be to say that our sphere of assessable influence is a special part of the universe for us, in that we are uniquely positioned to predict and steer events in that part compared to elsewhere, and this arguably gives us a (somewhat) special moral responsibility toward that part of the universe.

Another potential reason to give some weight to views centered on realistically assessable consequences, or more generally to views that entail discounting in some form, is that other sensible people endorse such views based on reasons that seem defensible to some degree. For example, it is common for economists to endorse models that involve temporal discounting, not just in descriptive models but also in prescriptive or normative models (see e.g. Arrow et al., 1996). The justifications for such discounting might be that our level of moral concern should be adjusted for uncertainty about whether there will be any future, uncertainty about our ability to deliberately influence the future, and the possibility that the future will be better able to take care of itself and its problems (relative to earlier problems that we could prioritize instead).

One might object that such reasons for discounting should be incorporated at a purely empirical level, without any discounting at the moral level, and I would largely agree with that sentiment. (Note that when applied at a strictly empirical or practical level, those reasons and adjustments are candidate ways to avoid paralysis without any discounting at the moral level.)

Yet even if we think such considerations should mostly or almost exclusively be applied at the empirical level, it might still be defensible to also invoke them to justify some measure of discounting directly at the level of one’s moral view and moral concerns, or at least as a tiny sub-component within one’s broader moral view. In other words, it might be defensible to allow empirical considerations of the kind listed above to inform and influence our fundamental moral values, at least to a small degree.

To be clear, it is not just some selection of economists who endorse normative discounting or scope-adjustment in some form. As noted above, it is also found among those who endorse “reasonable consequentialism” and consequentialism framed in terms of foreseeable consequences. And similar views can be found among people who seek to reduce suffering.

For example, Brian Tomasik has long endorsed a kind of split between reducing suffering effectively in the near term versus reducing suffering effectively across all time. In particular, regarding altruistic efforts and donations, he writes that “splitting is rational if you have more than one utility function”, and he devotes at least 40 percent of his resources toward short-term efforts to reduce suffering (Tomasik, 2015). Jesse Clifton seems to partially endorse a similar approach focused on reasons that we can realistically weigh up — an approach that in his view “probably implies restricting attention to near-term consequences” (see also Clifton, 2025). The views endorsed by Tomasik and Clifton explicitly give some degree of special weight to near-term or realistically assessable consequences, and these views and the judgments underlying them seem fairly defensible.

Lastly, it is worth emphasizing just how weak a claim we are considering here. In particular, in the framework outlined above, all that is required for the simple practical asymmetry argument to go through is that we give any non-zero weight to a non-clueless view focused on realistically assessable consequences, or some other non-clueless view centered on consequences.

That is, we are not talking about accepting this as the most plausible view, or even as a moderately plausible view. Its role in the practical framework above is more that of a humble tiebreaker — a view that we can consult as an nth-best option if other views fail to give us guidance and if we give this kind of view just the slightest weight. And the totality of reasons listed here arguably justifies granting it at least a tiny degree of plausibility or acceptance.

Arguments I have not made

One could argue that something akin to the approach outlined here would also be optimal for reducing suffering in expectation across all space and time. In particular, one could argue that such an unrestricted moral aim would in practice imply a focus on realistically assessable consequences. I am open to that argument — after all, it is difficult to see what else the recommended focus could be, to the extent there is one.

For similar reasons, one could argue that a practical focus on realistically assessable consequences represents a uniquely safe and reasonable bet from a consequentialist perspective: it is arguably the most plausible candidate for what a consequentialist view would recommend as a practical focus in any case, whether scope-adjusted or not. Thus, from our position of deep uncertainty — including uncertainty about whether we are bound to be clueless — it arguably makes convergent sense to try to estimate the furthest depths of assessable consequences and to seek to act on those estimates, at least to the extent that we are concerned with consequences.

Yet it is worth being clear that the argument I have made here does not rely on any of these claims or arguments. Indeed, it does not rely on any claims about what is optimal for reducing suffering across all space and time.

As suggested above, the conditional claim I have argued for here is ultimately a very weak one about giving minimal weight to what seems like a fairly moderate and in some ways commonsensical moral view or idea (e.g. it seems fairly commonsensical to be concerned with consequences to the extent that we can realistically estimate and be guided by them). The core argument presented in this essay does not require us to accept any controversial empirical positions.

Conclusion

For some of our problems, perhaps the best we can do is to find “second best solutions” — that is, solutions that do not satisfy all our preferred criteria, yet which are nevertheless better than any other realistic solution. This may also be true when it comes to reducing suffering in a potentially infinite universe. We might be in an unpredictable sea of infinite consequences that ripple outward forever (Schwitzgebel, 2024). But even if we are, this need not prevent us from trying to reduce suffering in effective and sensible ways within a realistic scope. After all, compared to simply giving up on trying to reduce suffering, it seems less arbitrary and more plausible to at least try to reduce suffering within the domain of consequences we can realistically assess and be guided by.

Acknowledgments

Thanks to Tobias Baumann, Jesse Clifton, and Simon Knutsson for helpful comments.

Addressing the Free Will Problem by Reconciling Different Perspectives

First written: Sep. 2024. Last update: Dec. 2025.

I believe that many concerns over free will have to do with problems of reconciling different perspectives. Indeed, I have come to see the reconciliation of different perspectives as the main underlying problem in most concerns and discussions about free will, even if it is rarely recognized as such.


Contents

  1. Contrasting Perspectives
  2. Relevance to Free Will
  3. Different yet Compatible Perspectives
  4. The Core Tension: One vs. Multiple Possibilities
  5. Two Ways to Resolve the Core Tension
    1. The Ontological Resolution: Total vs. Relative Perspectives
    2. The Epistemic Resolution: Uncertainty About Possibilities
  6. Conclusion

Contrasting Perspectives

The following are some of the contrasting perspectives, or modes of being, that seem relevant to discussions of free will:

  • passive vs. active
  • descriptive vs. prescriptive
  • receptive (e.g. purely observing) vs. creative
  • concerned with the actual vs. concerned with the possible

A similar contrast is the one between explanatory versus justificatory reasons and perspectives, such as when we descriptively explain versus normatively justify a given course of action (Alvarez, 2016).

Relevance to Free Will

I see at least three ways in which these contrasting perspectives are relevant to the issue of free will.

First, it seems that many people roughly understand free will as the capacity to adopt and act on the latter perspectives listed above — for example, the capacity to adopt a prescriptive stance that is concerned with realizing some future possibilities over others, and the capacity to take action on that basis. At the very least, these capacities seem to be core components of what many understand by free will (see e.g. Monroe & Malle, 2010; Lam, 2021). To be clear, I am not claiming that this is what everyone understands by free will; the term “free will” is obviously quite ambiguous, and there appears to be substantial variation in how people define it.

Second, in terms of what people take to be the underlying substance of the free will problem, it seems that a key issue for many is whether we can legitimately adopt the latter perspectives above. That is, whether we can legitimately adopt perspectives that are active, prescriptive, creative, and concerned with possibilities, as opposed to only (legitimately) having a passive and actualist perspective.

Third, some thinkers who argue against the existence of free will sometimes seem to speak as though we cannot legitimately adopt these more active and possibility-focused perspectives — as though the passive and actualist perspective is the only legitimate one. To be sure, these thinkers might not hold that view, yet many of their statements can nevertheless easily be interpreted that way, especially by those who see possibility-focused perspectives as being core to “free will” as they understand it.

Different yet Compatible Perspectives

The contrasting perspectives outlined above are surely different, yet they are not in conflict in the sense that we must choose only one of them. Granted, we might not be able to embody the opposing extremes of these perspectives simultaneously, but we can still fruitfully shift between them, and each of these perspectives seems to have its valid uses.

It is also worth noting that the ability to adopt and act on these perspectives can vary in degree. For example, we can develop our capacity to adopt more of a prescriptive stance — e.g. to reflect on our values and to consider the best path going forward. Similarly, we can increase our ability to act from such a values-based stance, thereby increasing our moral agency. Hence, these perspectives and capacities are not simply there or not in some binary sense, and they are not fixed. We can actively cultivate them, and we arguably have good reason to do so.

These points notwithstanding, some may object that there is a fundamental tension to be found in the contrasting perspectives outlined above, and that there are some contrasting perspectives that we cannot legitimately and consistently hold. I will explore this core tension below.

The Core Tension: One vs. Multiple Possibilities

While there are some tensions to be found between each of the general perspectives listed earlier, these tensions are not necessarily so strong and explicit. Where there is a strong tension is when it comes to the following more specific perspectives, or assumptions: (1) assuming that determinism is true, in the sense that there is only one physically possible outcome from any total state of the universe, and (2) assuming that there are multiple possible outcomes that are truly open to us. I believe this is the core tension for many people who are wrestling with the problem of free will.

Some might seek to deflate this tension with the statement that “determinism does not imply fatalism”, meaning that determinism does not imply that we are steered by fate to end up with the same outcome even if we act in different ways. This statement is true, but it does not clearly address the core tension above. If we assume that there is only one total outcome that is physically possible, then this seems inconsistent with at the same time assuming that there are multiple outcomes that are open to us, at least without further clarification. And the latter is an assumption that we seemingly all have to make when we consider and choose between different options — and arguably when we inhabit any broadly “active” perspective.

Of course, one could compartmentalize one’s beliefs and say that we at one level have our purely descriptive and ontological beliefs, while we at another level have our more “active” and decision-related beliefs. If these levels are sufficiently differentiated, one might at the “passive” level believe that there is only one ontologically possible future outcome, yet at the “active” level believe that we have multiple ex-ante possibilities — possibilities that we perceive to be open to us. (These could also be called “epistemic possibilities” or “possibilities in expectation”.)

Something akin to this two-level approach seems common among people who have thought a lot about the subject of free will — both among those who affirm and deny “free will” — even if the two-level approach is only adopted implicitly (see e.g. Harris, 2012, pp. 16, 39; 2013; Dennett, 2014; Tomasik, 2014; Strawson, 2022).

Yet it nevertheless seems rare to see direct and explicit attempts at addressing this core tension — that is, attempts at coherently reconciling a “passive” perspective that may involve one future possibility with an “active” perspective that involves multiple future possibilities. I believe it would be helpful if this core tension were generally addressed more directly by those who discuss the problem of free will.

Two Ways to Resolve the Core Tension

There are at least two ways to resolve the core tension described above: an ontological and an epistemic one. These resolutions are not in conflict — we can consistently endorse both and they are arguably complementary.

The Ontological Resolution: Total vs. Relative Perspectives

The central move of the ontological resolution is to distinguish two levels at which we can talk about possibilities. Broadly speaking, there is the total level, which pertains to the entire universe, and there is the relative level, which pertains to some subset of the universe.

For example, at the relative level, we may place a boundary around a particular agent and thus partition the world in two: the agent and the world external to that agent, where the agent is concerned with possibilities available in the external world.

The agent in question can be construed in many ways. It could be a small subsystem within a brain, or it could be a large group of individuals with shared aims. How exactly we construe the agent is not important here.

The point of the ontological resolution is that the truth about possibilities can differ depending on whether we are talking about the total or the relative level. In particular, it can both be true that there is one possible outcome at the total level and that there are many possibilities that are truly open to the agent at the relative level, in the sense that the external world fully permits those possibilities.

Note that this resolution does not rely on merely epistemic possibilities: the possibilities of the external world are genuine possibilities whose realization depends on what the agent does. Indeed, these possibilities are arguably what our epistemic or ex-ante possibilities track to the degree they are well-calibrated. In this sense, we can have genuine ontological possibilities available to us even if the total universe is fully deterministic — that is, fully determined by the external world plus our choices.

This enables the reconciliation of two seemingly opposed perspectives: the universe can be wholly deterministic while we are nevertheless determining actors who choose among genuine possibilities.

The Epistemic Resolution: Uncertainty About Possibilities

The epistemic approach practically reconciles the following contrasting perspectives:

  • Unitary openness: There is one possible future at the total level, but agents can still choose among genuine possibilities.
  • Plural openness: There are multiple possible futures at the total level, and agents can choose among these possibilities.

Proponents of plural openness might object that unitary openness seems internally inconsistent, despite the distinction between total and relative levels. However, the epistemic resolution does not require us to settle this debate. Instead, it resolves the issue and practically reconciles these perspectives based on our uncertainty. In particular, when it comes to the question of ontological possibilities at the total level, we have reason to assign a non-zero probability both to there being one possible future and to there being multiple possible futures.

This uncertain stance seems to be the most defensible one for the simple reason that we do not know what is true regarding ontological possibilities at the total level, and there are reasons to think that we cannot know (see e.g. Vinding, 2012, ch. 2). There are probably not many who outright deny this uncertainty; the issue is more whether this uncertainty and its potential role in practically reconciling the two views above are explicitly acknowledged.

The uncertain stance is compatible with each of the perspectives outlined above. Proponents of the unitary view may still hold that there is most likely one possible future at the total level, and they would maintain that we can choose among genuine possibilities regardless. Likewise, proponents of the plural view would find no contradiction in this stance of uncertainty: by their lights, the non-zero probability of multiple possible futures at the total level makes it consistent and practically justified to assume that we can choose among genuine possibilities (cf. “Ontological Possibilities and the Meaningfulness of Ethics”).

In short, proponents of these competing views can broadly agree on these key substantive points, including the point that we can legitimately assume genuine possibilities in our decision-making.

Conclusion

Problems relating to free will often appear intractable because we see a strong conflict between different perspectives: the passive view of a cause-and-effect universe and the active view of an agent choosing among multiple possibilities. Yet these perspectives can be reconciled.

Whether we resolve the tension through the ontological distinction between total and relative levels, or through the admission of epistemic uncertainty, the result is the same: we are practically justified in assuming that multiple possibilities are open to us. We need not reject the scientific worldview to legitimate our role as agents. This insight allows us to set aside any paralyzing concern that our choices are somehow illusory and instead focus on using our active and prescriptive capacities to create a better future.

Essays on UFOs and Related Conjectures: Reported Evidence, Theoretical Considerations, and Potential Importance

Essays on UFOs and Related Conjectures invites readers to reflect on their beliefs and intuitions concerning extraterrestrial intelligence. The essays in this collection explore the extraterrestrial UFO hypothesis, optimized futures, and possible motives for a hypothetical extraterrestrial presence around Earth. Some of the essays also delve into the potential moral implications of such a presence. Overall, this collection makes a case for taking the extraterrestrial hypothesis seriously and for further exploring the evidence, theoretical considerations, and moral implications that may relate to this hypothesis.

The book is available as a free PDF (1st edition; 2nd edition with a new chapter). It is also available for free on Amazon, Smashwords, Apple Books, Barnes & Noble, and elsewhere.


Thoughts on AI pause

Whether to push for an AI pause is a hotly debated question. This post contains some of my thoughts on the issue of AI pause and the discourse that surrounds it.


Contents

  1. The motivation for an AI pause
  2. My thoughts on AI pause, in brief
  3. My thoughts on AI pause discourse
  4. Massive moral urgency: Yes, in both categories of worst-case risks

The motivation for an AI pause

Generally speaking, it seems that the primary motivation behind pushing for an AI pause is that work on AI safety is far from where it needs to be for humanity to maintain control of future AI progress. Therefore, a pause is needed so that work on AI safety — and other related work, such as AI governance — can catch up with the pace of progress in AI capabilities.

My thoughts on AI pause, in brief

Whether it is worth pushing for an AI pause obviously depends on various factors. For one, it depends on the opportunity cost: what could we be doing otherwise? After all, even if one thinks that an AI pause is desirable, one might still have reservations about its tractability compared to other aims. And even if one thinks that an AI pause is both desirable and tractable, there might still be other aims and activities that are even more beneficial (in expectation), such as working on worst-case AI safety (Gloor, 2016; Yudkowsky, 2017; Baumann, 2018), or increasing the priority that people devote to reducing risks of astronomical suffering (s-risks) (Althaus & Gloor, 2016; Baumann, 2017; 2022; DiGiovanni, 2021).

Furthermore, there is the question of whether an AI pause would even be beneficial in the first place. This is a complicated question, and I will not explore it in detail here. (For a critical take, see “AI Pause Will Likely Backfire” by Nora Belrose.) Suffice it to say that, in my view, it seems highly uncertain whether any realistic AI pause would be beneficial overall — not just from a suffering-focused perspective, but from the perspective of virtually all impartial value systems. It seems to me that most advocates for AI pause are quite overconfident on this issue.

But to clarify, I am by no means opposed to advocating for an AI pause. It strikes me as something that one can reasonably conclude is helpful and worth doing (depending on one’s values and empirical judgment calls). But my current assessment is just that it is unlikely to be among the best ways to reduce future suffering, mainly because I view the alternative activities outlined above as being more promising, and because I suspect that most realistic AI pauses are unlikely to be clearly beneficial overall.

My thoughts on AI pause discourse

A related critical observation about much of the discourse around AI pause is that it tends toward a simplistic “doom vs. non-doom” dichotomy. That is, the picture that is conveyed seems to be that either humanity loses control of AI and goes extinct, which is bad; or humanity maintains control, which is good. And your probability of the former is your “p(doom)”.

Of course, one may argue that for strategic and communication purposes, it makes sense to simplify things and speak in such dichotomous terms. Yet the problem, in my view, is that this kind of picture is not accurate even to a first approximation. From an altruistic perspective, it is not remotely the case that “loss of control to AI” = “bad”, while “humans maintaining control” = “good”.

For example, if we are concerned with the reduction of s-risks (which is important by the lights of virtually all impartial value systems), we must compare the relative risks of “loss of control to AI” with the risks of “humans maintaining control” — however we define these rough categories. And sadly, it is not the case that “humans maintaining control” is associated with a negligible or trivial risk of worst-case outcomes. Indeed, it is not clear whether “humans maintaining control” is generally associated with better or worse prospects than “loss of control to AI” when it comes to s-risks.

In general, the question of whether a “human-controlled future” is better or worse with respect to reducing future suffering is a difficult one that has been discussed and debated at some length, and no clear consensus has emerged. As a case in point, Brian Tomasik places a 52 percent subjective probability on the claim that “Human-controlled AGI in expectation would result in less suffering than uncontrolled”.

This near-50/50 view stands in stark contrast to what often seems assumed as a core premise in much of the discourse surrounding AI pause, namely that a human-controlled future would obviously be far better (in expectation).

(Some reasons why one might be pessimistic regarding human-controlled futures can be found in the literature on human moral failings; see e.g. Cooper, 2018; Huemer, 2019; Kidd, 2020; Svoboda, 2022. Other reasons include basic competitive aims and dynamics that are likely to be found in a wide range of futures, including human-controlled ones; see e.g. Tomasik, 2013; Knutsson, 2022, sec. 3. See also Vinding, 2022.)

Massive moral urgency: Yes, in both categories of worst-case risks

There is a key point on which I agree strongly with advocates for an AI pause: there is a massive moral urgency in ensuring that we do not end up with horrific AI-controlled outcomes. Too few people appreciate this insight, and even fewer seem to be deeply moved by it.

At the same time, I think there is a similarly massive urgency in ensuring that we do not end up with horrific human-controlled outcomes. And humanity’s current trajectory is unfortunately not all that reassuring with respect to either of these broad classes of risks. (To be clear, this is not to say that an s-risk outcome is the most likely outcome in any of these two classes of future scenarios, but merely that the current trajectory looks highly suboptimal and concerning with respect to both of them.)

The upshot for me is that there is a roughly equal moral urgency in avoiding each of these categories of worst-case risks, and as hinted earlier, it seems doubtful to me that pushing for an AI pause is the best way to reduce these risks overall.

From AI to distant probes

The aim of this post is to present a hypothetical future scenario that challenges some of our basic assumptions and intuitions about our place in the cosmos.


Hypothetical future scenario: Earth-descendant probes

Imagine a future scenario in which AI progress continues, and where the ruling powers on Earth eventually send out advanced AI-driven probes to explore other star systems. The ultimate motives of these future Earth rulers may be mysterious and difficult to grasp from our current vantage point, yet we can nevertheless understand that their motives — in this hypothetical scenario — include the exploration of life forms that might have emerged or will emerge elsewhere in the universe. (The fact that there are already projects aimed at sending out (much less advanced) probes to other star systems is arguably some evidence of the plausibility of this future scenario.)

Such exploration may be considered important by these future Earth rulers for a number of reasons, but a prominent reason they consider it important is that it helps inform their broader strategy for the long-term future. By studying the frequency and character of nascent life elsewhere, they can build a better picture of the long-run future of life in the universe. This includes gaining a better picture of where and when these Earth descendants might eventually encounter other species — or probes — that are as advanced as themselves, and not least what these other advanced species might be like in terms of their motives and their propensities toward conflict or cooperation.

The Earth-descendant probes will take an especially strong interest in life forms that are relatively close to matching their own, functionally optimized level of technological development. Why? First of all, they wish to ensure that these ascending civilizations never come to match their own level of technological sophistication, and the Earth-descendant probes will eventually take steps to prevent this so as not to lose their power and influence over the future.

Second, they will study ascending civilizations because what takes place at that late “sub-optimized” stage may be particularly informative for estimating the nature of the fully optimized civilizations that the Earth-descendant probes might encounter in the future (at least the late sub-optimized stage of development seems more informative than do earlier stages of life where comparatively less change happens over time).

From the point of view of these distant life forms, the Earth-descendant probes are almost never visible, and when they occasionally are, they appear altogether mysterious. After all, the probes represent a highly advanced form of technology that the distant life forms do not yet understand, much less master, and the potential motives behind the study protocols of these rarely appearing probes are likewise difficult to make sense of from the outside. Thus, the distant life forms are being studied by the Earth-descendant probes without having any clear sense of their zoo-like condition.

Back to Earth

Now, what is the point of this hypothetical scenario? One point I wish to make is that this is not an absurd or unthinkable scenario. There are, I submit, no fantastical or unbelievable steps involved here, and we can hardly rule out that some version of this scenario could play out in the future. This is obviously not to say that it is the most likely future scenario, but merely that something like this scenario seems fairly plausible provided that technological development continues and eventually expands into space (perhaps around 1 to 10 percent likely?).

But what if we now make just one (theoretically) small change to this scenario such that Earth is no longer the origin of the advanced probes in question, but instead one of the perhaps many planets that are being visited and studied by advanced probes that originated elsewhere in the universe? Essentially, we are changing nothing in the scenario above, except for swapping which exact planet Earth happens to be.

Given the structural equivalence of these respective scenarios, we should hardly consider the swapped scenario to be much less plausible. Sure, we know for a fact that life has arisen on Earth, and hence the projection that Earth-originating life might eventually give rise to advanced probes is not entirely speculative. Yet there is a countervailing consideration that suggests that — conditional on a scenario equivalent to the one described above occurring — Earth is unlikely to be the first planet to give rise to advanced space probes, and is instead more likely to be observed by probes from elsewhere. 

The reason is simply that Earth is but one planet, whereas there are many other planets from which probes could have been sent to study Earth. For example, in a scenario in which a single civilization creates advanced probes that eventually go out and explore, say, a thousand other planets with life at roughly our stage of development (observed at different points in time), we would have a 1 in 1,001 chance of being that first exploring civilization — and a 1,000 in 1,001 chance of being an observed one, under this assumed scenario.

Indeed, even if the exploring civilization in this kind of scenario only ever visits, say, two other planets with life at roughly our stage, we would still be more likely to be among the observed ones than that first observing one (2 in 3 versus 1 in 3). Thus, whatever probability we assign to the hypothetical future scenario in which Earth-descendant space probes observe other life forms at roughly our stage, we should arguably assign a greater probability to a scenario in which we are being observed by similar such probes.
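Stated as a simple formula (under the implicit assumption that, conditional on such a scenario, we should reason as if we were a randomly selected one of the N + 1 civilizations at roughly our stage of development):

$$P(\text{we are the exploring civilization}) = \frac{1}{N+1}, \qquad P(\text{we are among the observed}) = \frac{N}{N+1}$$

which yields 1/1,001 versus 1,000/1,001 for N = 1,000, and 1/3 versus 2/3 for N = 2.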

Nevertheless, I think many of us will intuitively think just the opposite, namely that the scenario involving Earth-descendant probes observing others seems far more plausible than the scenario in which we are currently being observed by foreign probes. Indeed, many of us intuitively find the foreign-probes scenario to be quite ridiculous. (That is also largely the attitude that is expressed in leading scholarly books on the Fermi paradox, with scant justification.)

Yet this complete dismissal is difficult to square with the apparent plausibility — or at least the non-ridiculousness — of the “Earth-descendant probes observing others” scenario, as well as the seemingly greater plausibility of the foreign probe scenario compared to the “Earth-descendant probes observing others” scenario. There appears to be a breakdown of the transitivity of plausibility and ridiculousness at the level of our intuitions.

What explains this inconsistency?

I can only speculate on what explains this apparent inconsistency, but I suspect that various biases and cultural factors are part of the explanation.

For example, wishful thinking could well play a role: we may prefer a scenario in which Earth’s descendants will be the most advanced species in the universe to a scenario in which we are a relatively late-coming and feeble party without any unique influence over the future. This could in turn cause us to ignore or downplay any considerations that speak against our preferred beliefs. And, of course, apart from our relative feebleness, being observed by an apparently indifferent superpower that does not intervene to prevent even the most gratuitous suffering would seem like bad news as well.

Perhaps more significantly, there is the force of cultural sentiment and social stigma. Most of us have grown up in a culture that openly ridicules the idea of an extraterrestrial presence around Earth. Taking that idea seriously has effectively been just another way of saying that you are a dumb-dumb (or worse), and few of us want to be seen in that way. For the human mind, that is a pressure so strong that it can move continents, and even block mere open-mindedness.

Given the unreasonable effectiveness of such cultural forces in schooling our intuitions, many of us intuitively “just know” in our bones that the idea of an extraterrestrial presence around Earth is ridiculous, with little need to invoke actual cogent reasons.

To be clear, my point here is not that we should positively believe in such a foreign presence, but merely that we may need to revise our intuitive assessment of this possibility, or at least question whether our intuitions and our level of open-mindedness toward this possibility are truly well-grounded.

What might we infer about optimized futures?

It is plausible to assume that technology will keep on advancing along various dimensions until it hits fundamental physical limits. We may refer to futures that involve such maxed-out technological development as “optimized futures”.

My aim in this post is to explore what we might be able to infer about optimized futures. Most of all, my aim is to advance this as an important question that is worth exploring further.


Contents

  1. Optimized futures: End-state technologies in key domains
  2. Why optimized futures are plausible
  3. Why optimized futures are worth exploring
  4. What can we say about optimized futures?
    1. Humanity may be close to (at least some) end-state technologies
    2. Optimized civilizations may be highly interested in near-optimized civilizations
    3. Strong technological convergence across civilizations?
    4. If technology stabilizes at an optimum, what might change?
    5. Information that says something about other optimized civilizations as an extremely coveted resource?
  5. Practical implications?
    1. Prioritizing values and institutions rather than pushing for technological progress?
    2. More research
  6. Conclusion
  7. Acknowledgments

Optimized futures: End-state technologies in key domains

The defining feature of optimized futures is that they entail end-state technologies that cannot be further improved in various key domains. Some examples of these domains include computing power, data storage, speed of travel, maneuverability, materials technology, precision manufacturing, and so on.

Of course, there may be significant tradeoffs between optimization across these respective domains. Likewise, there could be forms of “ultimate optimization” that are only feasible at an impractical cost — say, at extreme energy levels. Yet these complications are not crucial in this context. What I mean by “optimized futures” are futures that involve practically optimal technologies within key domains (such as those listed above).

Why optimized futures are plausible

There are both theoretical and empirical reasons to think that optimized futures are plausible (by which I here mean that they are at least somewhat probable — perhaps more than 10 percent likely).

Theoretically, if the future contains advanced goal-driven agents, we should generally expect those agents to want to achieve their goals in the most efficient ways possible. This in turn predicts continual progress toward ever more efficient technologies, at least as long as such progress is cost-effective.

Empirically, we have an extensive record of goal-oriented agents trying to improve their technology so as to better achieve their aims. Humanity has gone from having virtually no technology to creating a modern society surrounded by advanced technologies of various kinds. And even in our modern age of advanced technology, we still observe persistent incentives and trends toward further improvements in many domains of technology — toward better computers, robots, energy technology, and so on.

It is worth noting that the technological progress we have observed throughout human history has generally not been the product of some overarching collective plan that was deliberately aimed at technological progress. Instead, technological progress has in some sense been more robust than that, since even in the absence of any overarching plan, progress has happened as the result of ordinary demands and desires — for faster computers, faster and safer transportation, cheaper energy, etc.

This robustness is a further reason to think that optimized futures are plausible: even without any overarching plan aimed toward such a future, and even without any individual human necessarily wanting continued technological development leading to an optimized future, we might still be pulled in that direction all the same. And, of course, this point about plausibility applies to more than just humans: it applies to any set of agents who will be — or have been — structuring themselves in a sufficiently similar way so as to allow their everyday demands to push them toward continued technological development.

An objection against the plausibility of optimized futures is that there might be a lot of hidden potential for progress far beyond what our current understanding of physics seems to allow. However, such hidden potential would presumably be discovered eventually, and it seems probable that such hidden potential would likewise be exhausted at some point, even if it may happen later and at more extreme limits than we currently envision. That is, the broad claim that there will ultimately be some fundamental limits to technological development is not predicated on the more narrow claim that our current understanding of those limits is necessarily correct; the broader claim is robust to quite substantial extensions of currently envisioned limits. Indeed, the claim that there will be no fundamental limits to future technological development overall seems a stronger and less empirically grounded claim than does the claim that there will be such limits (cf. Lloyd, 2000; Krauss & Starkman, 2004).

Why optimized futures are worth exploring

The plausibility of optimized futures is one reason to explore them further, and arguably a sufficient reason in itself. Another reason is the scope of such futures: the futures that contain the largest numbers of sentient beings will most likely be optimized futures, suggesting that we have good reason to pay disproportionate attention to such futures, beyond what their degree of plausibility might suggest.

Optimized futures are also worth exploring given that they seem to be a likely point of convergence for many different kinds of technological civilizations. For example, an optimized future seems a plausible outcome of both human-controlled and AI-controlled Earth-originating civilizations, and it likewise seems a plausible outcome of advanced alien civilizations. Thus, a better understanding of optimized futures can potentially apply robustly to many different kinds of future scenarios.

An additional reason it is worth exploring optimized futures is that they overall seem quite neglected, especially given how plausible and consequential such futures appear to be. While some efforts have been made to clarify the physical limits of technology (see e.g. Sandberg, 1999; Lloyd, 2000; Krauss & Starkman, 2004), almost no work has been done on the likely trajectories and motives of civilizations with optimized technology, at least to my knowledge.

Lastly, the assumption of optimized technology is a rather strong constraint that might enable us to say quite a lot about futures that conform to that assumption, suggesting that this could be a fruitful perspective to adopt in our attempts to think about and predict the future.

What can we say about optimized futures?

The question of what we can say about optimized futures is a big one that deserves elaborate analysis. In this section, I will merely raise some preliminary points and speculative reflections.

Humanity may be close to (at least some) end-state technologies

One point that is worth highlighting is that a continuation of current rates of progress seems to imply that humanity could develop end-state technologies in information processing power within a few hundred years, perhaps 250 years at most (if current growth rates persist and assuming that our current understanding of the relevant physics is largely correct).

So at least in this important respect, and under the assumption of continued steady growth, humanity is surprisingly close to reaching an optimized future (cf. Lloyd, 2000).
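As a rough illustration of how such an estimate can be reached, here is a back-of-the-envelope sketch in Python. The three input figures (roughly 1e50 operations per second for Lloyd’s (2000) “ultimate” 1 kg computer, roughly 1e18 operations per second for today’s fastest machines, and a doubling time of two years) are assumptions chosen purely for illustration:

    # Back-of-the-envelope sketch; all three input figures are assumptions
    # used for illustration, not precise claims.
    import math

    ultimate_ops_per_sec = 1e50   # assumed order of magnitude for Lloyd's (2000) 1 kg "ultimate computer"
    current_ops_per_sec = 1e18    # assumed order of magnitude for today's fastest machines
    doubling_time_years = 2.0     # assumed continued doubling time

    doublings = math.log2(ultimate_ops_per_sec / current_ops_per_sec)
    years = doublings * doubling_time_years
    print(f"~{doublings:.0f} doublings, ~{years:.0f} years")
    # Roughly 106 doublings, i.e. a bit over 200 years under these inputs.

Different assumed inputs would of course shift the estimate, but under steady growth the result lands in the same ballpark as the “within a few hundred years” figure mentioned above.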

Optimized civilizations may be highly interested in near-optimized civilizations

Such potential closeness to an optimized future could have significant implications in various ways. For example, if, hypothetically, there exists an older civilization that has already reached a state of optimized technology, any younger civilization that begins to approach optimized technologies within the same cosmic region would likely be of great interest to that older civilization.

One reason it might be of interest is that the optimized technologies of the younger civilization could potentially become competitive with the optimized technologies of the older civilization, and hence the older civilization may see a looming threat in the younger civilization’s advance toward such technologies. After all, since optimized technologies would represent a kind of upper bound of technological development, it is plausible that different instances of such technologies could be competitive with each other regardless of their origins.

Another reason the younger civilization might be of interest is that its trajectory could provide valuable information regarding the likely trajectories and goals of distant optimized civilizations that the older civilization may encounter in the future. (More on this point here.)

Taken together, these considerations suggest that if a given civilization is approaching optimized technology, and if there is an older civilization with optimized technology in its vicinity, this older civilization should take an increasing interest in this younger civilization so as to learn about it before the older civilization might have to permanently halt the development of the younger one.

Strong technological convergence across civilizations?

Another implication of optimized futures is that the technology of advanced civilizations across the universe might be remarkably convergent. Indeed, there are already many examples of convergent evolution in biology on Earth (e.g. eyes and large brains evolving several times independently). Likewise, many cases of convergence are found in cultural evolution, both in early history (e.g. the independent emergence of farming, cities, and writing across the globe) and in recent history (e.g. independent discoveries in science and mathematics).

Yet the degree of convergence could well be even more pronounced in the case of the end-state technologies of advanced civilizations. After all, this is a case where highly advanced agents are bumping up against the same fundamental constraints, and the optimal engineering solutions in the face of these constraints will likely converge toward the same relatively narrow space of optimal designs — or at least toward the same narrow frontier of optimal designs given potential tradeoffs between different abilities.

In other words, the technologies of advanced civilizations might be far more similar and more firmly dictated by fundamental physical limits than we intuitively expect, especially given that we in our current world are used to seeing continually changing and improving technologies.

If technology stabilizes at an optimum, what might change?

The plausible convergence and stabilization of technological hardware also raises the interesting question of what, if anything, might change and vary in optimized futures.

This question can be understood in at least two distinct ways: what might change or vary across different optimized civilizations, and what might change over time within such civilizations? And note that prevalent change of the one kind need not imply prevalent change of the other kind. For example, it is conceivable that there might be great variation across civilizations, yet virtually no change in goals and values over time within civilizations (cf. “lock-in scenarios”).

Conversely, it is conceivable that goals and values change greatly over time within all optimized civilizations, yet such change could in principle still be convergent across civilizations, such that optimized civilizations tend to undergo roughly the same pattern of changes over time (though such convergence admittedly seems unlikely conditional on there being great changes over time in all optimized civilizations).

If we assume that technological hardware becomes roughly fixed, what might still change and vary — both over time and across different civilizations — includes the following (I am not claiming that this is an exhaustive list):

  • Space expansion: Civilizations might expand into space so as to acquire more resources; and civilizations may differ greatly in terms of how much space they manage to acquire.
  • More or different information: Knowledge may improve or differ over time and space; even if fundamental physics gets solved fairly quickly, there could still be knowledge to gain about, for example, how other civilizations tend to develop.
    • There would presumably also be optimization for information that is useful and actionable. After all, even a technologically optimized probe would still have limited memory, and hence there would be a need to fill this memory with the most relevant information given its tasks and storage capacity.
  • Different algorithms: The way in which information is structured, distributed, and processed might evolve and vary over time and across civilizations (though it is also conceivable that algorithms will ultimately converge toward a relatively narrow space of optima).
  • Different goals and values: As mentioned above, goals and values might change and vary, such as due to internal or external competition, or (perhaps less likely) through processes of reflection.

In other words, even if everyone has — or is — practically the same “iPhone End-State”, what is running on these iPhone End-States, and how many of them there are, may still vary greatly, both across civilizations and over time. And these distinct dimensions of variation could well become the main focus of optimized civilizations, plausibly becoming the main dimensions on which civilizations seek to develop and compete.

Note also that there may be conflicts between improvements along these respective dimensions. For example, perhaps the most aggressive forms of space expansion could undermine the goal of gaining useful information about how other civilizations tend to develop, and hence advanced civilizations might avoid or delay aggressive expansion if the information in question would be sufficiently valuable (cf. the “info gain motive”). Or perhaps aggressive expansion would pose serious risks at the level of a civilization’s internal coordination and control, thereby risking a drift in goals and values.

In general, it seems worth trying to understand what might be the most coveted resources and the most prioritized domains of development for civilizations with optimized technology. 

Information that says something about other optimized civilizations as an extremely coveted resource?

As hinted above, one of the key objectives of a civilization with optimized technology might be to learn, directly or indirectly, about other civilizations that it could encounter in the future. After all, if a civilization manages to both gain control of optimized technology and avoid destructive internal conflicts, the greatest threat to its apex status over time will likely be other civilizations with optimized technology. More generally, the main determinant of an optimized civilization’s success in achieving its goals — whether it can maintain an unrivaled apex status or not — could well be its ability to predict and interact gainfully with other optimized civilizations.

Thus, the most precious resource for any civilization with optimized technology might be information that can prepare this civilization for better exchanges with other optimized agents, whether those exchanges end up being cooperative, competitive, or outright aggressive. In particular, since the technology of optimized civilizations is likely to be highly convergent, the most interesting features to understand about other civilizations might be what kinds of institutions, values, decision procedures, and so on they end up adopting — the kinds of features that seem more contingent.

But again, I should stress that I mention these possibilities as speculative conjectures that seem worth exploring, not as confident predictions.

Practical implications?

In this section, I will briefly speculate on the implications of the prospect of optimized futures. Specifically, what might this prospect imply in terms of how we can best influence the future?

Prioritizing values and institutions rather than pushing for technological progress?

One implication is that there may be limited long-term payoffs in pushing for better technology per se, and that it might make more sense to prioritize the improvement of other factors, such as values and institutions. That is, if the future is in any case likely to be headed toward some technological optimum, and if the values and institutions (etc.) that will run this optimal technology are more contingent and “up for grabs”, then it arguably makes sense to prioritize those more contingent aspects.

To be clear, this is not to say that values and institutions will not also be subject to significant optimization pressures that push them in certain directions, but these pressures will plausibly still be weaker by comparison. After all, a wide range of values will imply a convergent incentive to create optimized technology, yet optimized technology seems compatible with a wide range of values and institutions. And it is not clear that there is a similarly strong pull toward some “optimized” set of values or institutions given optimized technology.

This perspective is arguably also supported by recent history. For example, we have seen technology improve greatly, with computing power heading in a clear upward direction over the past decades. Yet if we look at our values and institutions, it is much less clear whether they have moved in any particular direction over time, let alone an upward direction. Our values and institutions seem to have faced much less of a directional pressure compared to our technology.

More research

Perhaps one of the best things we can do to make better decisions with respect to optimized futures is to do research on such futures. The following are some broad questions that might be worth exploring:

  • What are the likely features and trajectories of optimized futures?
    • Are optimized futures likely to involve conflicts between different optimized civilizations?
    • Other things being equal, is a smaller or a larger number of optimized civilizations generally better for reducing risks of large-scale conflicts?
    • More broadly, is a smaller or larger number of optimized civilizations better for reducing future suffering?
  • What might the likely features and trajectories of optimized futures imply in terms of how we can best influence the future?
  • Are there some values or cooperation mechanisms that would be particularly beneficial to instill in optimized technology?
    • If so, what might they be, and how can we best work to ensure their (eventual) implementation?

Conclusion

The future might in some ways be more predictable than we imagine. I am not claiming to have drawn any clear or significant conclusions about how optimized futures are likely to unfold; I have mostly aired various conjectures. But I do think the question is valuable, and that it may provide a helpful lens for exploring how we can best impact the future.

Acknowledgments

Thanks to Tobias Baumann for helpful comments.

Reasons to doubt that suffering is ontologically prevalent

It is sometimes claimed that we cannot know whether suffering is ontologically prevalent — for example, we cannot rule out that suffering might exist in microorganisms such as bacteria, or even in the simplest physical processes. Relatedly, it has been argued that we cannot trust common-sense views and intuitions regarding the physical basis of suffering.

I agree with the spirit of these arguments, in that I think it is true that we cannot definitively rule out that suffering might exist in bacteria or fundamental physics, and I agree that we have good reasons to doubt common-sense intuitions about the nature of suffering. Nevertheless, I think discussions of expansive views of the ontological prevalence of suffering often present a somewhat unbalanced and, in my view, overly agnostic view of the physical basis of suffering. (By “expansive views”, I do not refer to views that hold that, say, insects are sentient, but rather views that hold that suffering exists in considerably simpler systems, such as in bacteria or fundamental physics.)

While we cannot definitively rule out that suffering might be ontologically prevalent, I do think that we have strong reasons to doubt it, as well as to doubt the practical importance of this possibility. My goal in this post is to present some of these reasons.


Contents

  1. Counterexamples: People who do not experience pain or suffering
  2. Our emerging understanding of pain and suffering
  3. Practical relevance

Counterexamples: People who do not experience pain or suffering

One argument against the notion that suffering is ontologically prevalent is that we seem to have counterexamples in people who do not experience pain or suffering. For example, various genetic conditions seemingly lead to a complete absence of pain and/or suffering. This, I submit, has significant implications for our views of the ontological prevalence (or non-prevalence) of suffering.

After all, the brains of these individuals include countless subatomic particles, basic biological processes, diverse instances of information processing, and so on, suggesting that none of these are in themselves sufficient to generate pain or suffering.

One might object that the brains of such people could be experiencing suffering — perhaps even intense suffering — that these people are just not able to consciously access. Yet even if we were to grant this claim, it does not change the basic argument that generic processes at the level of subatomic particles, basic biology, etc. do not seem sufficient to create suffering. For the processes that these people do consciously access presumably still entail at least some (indeed probably countless) subatomic particles, basic biological processes, electrochemical signals, different types of biological cells, diverse instances of information processing, and so on. This gives us reason to doubt all views that see suffering as an inherent or generic feature of processes at any of these (quite many) respective levels.

Of course, this argument is not limited to people who are congenitally unable to experience suffering; it applies to anyone who is just momentarily free from noticeable — let alone significant — pain or suffering. Any experiential moment that is free from significant suffering is meaningful evidence against highly expansive views of the ontological prevalence of significant suffering.

Our emerging understanding of pain and suffering

Another argument against expansive views of the prevalence of suffering is that our modern understanding of the biology of suffering gives us reason to doubt such views. That is, we have gained an increasingly refined understanding of the evolutionary, genetic, and neurobiological bases of pain and suffering, and the picture that emerges is that suffering is a complex phenomenon associated with specific genes and neural structures (as exemplified by the above-mentioned genetic conditions that knock out pain and/or suffering).

To be sure, the fact that suffering is associated with specific genes and neural structures in animals does not imply that suffering cannot be created in other ways in other systems. It does, however, suggest that suffering is unlikely to be found in simple systems that do not have remote analogues of these specific structures (since we should otherwise expect suffering to be associated with a much wider range of structures and processes, not such an intricate and narrowly delineated set).

By analogy, consider the experience of wanting to go to a Taylor Swift concert so as to share the event with your Instagram followers. Do we have reason to believe that fundamental particles such as electrons, or microorganisms such as bacteria, might have such experiences? To go a step further, do we have reason to be agnostic as to whether electrons or bacteria might have such experiences?

These questions may seem too silly to merit contemplation. After all, we know that having a conscious desire to go to a concert for the purpose of online sharing requires rather advanced cognitive abilities that, at least in our case, are associated with extremely complex structures in the brain — not to mention that it requires an understanding of a larger cultural context that is far removed from the everyday concerns of electrons and bacteria. But the question is why we would see the case of suffering as being so different.

Of course, one might object that this is a bad analogy, since the experience described above is far more narrowly specified than is suffering as a general class of experience. I would agree that the experience described above is far more specific and unusual, but I still think the basic point of the analogy holds, in that our understanding is that suffering likewise rests on rather complex and specific structures (when it occurs in animal brains) — we might just not intuitively appreciate how complex and distinctive these structures are in the case of suffering, as opposed to in the Swift experience.

It seems inconsistent to allow ourselves to apply our deeper understanding of the Swift experience to strongly downgrade our credence in electron- or bacteria-level Swift experiences, while not allowing our deeper understanding of pain and suffering to strongly downgrade our credence in electron- or bacteria-level pain and suffering, even if the latter downgrade should be comparatively weaker (given the lower level of specificity of this broader class of experiences).

Practical relevance

It is worth stressing that, in the context of our priorities, the question is not whether we can rule out suffering in simple systems like electrons or bacteria. Rather, the question is whether the all-things-considered probability and weight of such hypothetical suffering is sufficiently large for it to merit any meaningful priority relative to other forms of suffering.

For example, one may hold a lexical view according to which no amount of putative “micro-discomfort” that we might ascribe to electrons or bacteria can ever be collectively worse than a single instance of extreme suffering. Likewise, even if one does not hold a strictly lexical view in theory, one might still hold that the probability of suffering in simple systems is so low that, relative to the expected prevalence of other kinds of suffering, it is dominated so strongly that it merits practically no priority by comparison (cf. “Lexical priority to extreme suffering — in practice”).

After all, the risk of suffering in simple systems would not only have to be held up against the suffering of all currently existing animals on Earth, but also against the risk of worst-case outcomes that involve astronomical numbers of overtly tormented beings. In this broader perspective, it seems reasonable to believe that the risk of suffering in simple systems is massively dwarfed by the risk of such astronomical worst-case outcomes, partly because the latter risk seems considerably less speculative, and because it seems far more likely to involve the worst instances of suffering.

Relatedly, just as we should be open to considering the possibility of suffering in simple systems such as bacteria, it seems that we should also be open to the possibility that spending a lot of time contemplating this issue — and not least trying to raise concern for it — might be an enormous opportunity cost that will overall increase extreme suffering in the future (e.g. because it distracts people from more important issues, or because it pushes people toward dismissing suffering reducers as absurd or crazy).

To be clear, I am not saying that contemplating this issue in fact is such an opportunity cost. My point is simply that it is important not to treat highly speculative possibilities in a manner that is too one-sided, such that we make one speculative possibility disproportionately salient (e.g. there might be a lot of suffering in microorganisms or in fundamental physics), while neglecting to consider other speculative possibilities that may in some sense “balance out” the former (e.g. that prioritizing the risk of suffering in simple systems significantly increases extreme suffering).

In more general terms, it can be misleading to consider Pascalian wagers if we do not also consider their respective “counter-Pascalian” wagers. For example, what if believing in God actually increases the overall probability of you experiencing eternal suffering, such as by marginally increasing the probability that future people will create infinite universes that contain infinitely many versions of you that get tortured for life?

In this way, our view of Pascal’s wager may change drastically when we go beyond its original one-sided framing and consider a broader range of possibilities, and the same applies to Pascalian wagers relating to the purported suffering of simple entities like bacteria or electrons. When we consider a broader range of speculative hypotheses, it is hardly clear whether we should overall give more or less consideration to such simple entities than we currently do, at least when compared to how much consideration and priority we give to other forms of suffering.

Does digital or “traditional” sentience dominate in expectation?

My aim in this post is to critique two opposite positions that I think are both mistaken, or which at least tend to be endorsed with too much confidence.

The first position is that the vast majority of future sentient beings will, in expectation, be digital, meaning that they will be “implemented” in digital computers.

The second position is in some sense a rejection of the first one. Based on a skepticism of the possibility of digital sentience, this position holds that future sentience will not be artificial, but instead be “traditionally” biological — that is, most future sentient beings will, in expectation, be biological beings roughly as we know them today.

I think the main problem with this dichotomy of positions is that it leaves out a reasonable third option, which is that most future beings will be artificial but not necessarily digital.


Contents

  1. Reasons to doubt that digital sentience dominates in expectation
  2. Reasons to doubt that “traditional” biological sentience dominates in expectation
  3. Why does this matter?

Reasons to doubt that digital sentience dominates in expectation

One can roughly identify two classes of reasons to doubt that most future sentient beings will be digital.

First, there are object-level arguments against the possibility of digital sentience. For example, based on his physicalist view of consciousness, David Pearce argues that the discrete and disconnected bits of a digital computer cannot, if they remain discrete and disconnected, join together into a unified state of sentience. They can at most, Pearce argues, be “micro-experiential pixels”.

Second, regardless of whether one believes in the possibility of digital sentience, the future dominance of digital sentience can be doubted on the grounds that it is a fairly strong and specific claim. After all, even if digital sentience is perfectly possible, it by no means follows that future sentient beings will necessarily converge toward being digital.

In other words, the digital dominance position makes strong assumptions about the most prevalent forms of sentient computation in the future, and it seems that there is a fairly large space of possibilities that does not imply digital dominance, such as (a future predominance of) non-digital neuron-based computers, non-digital neuron-inspired computers, and various kinds of quantum computers that have yet to be invented.

When one takes these arguments into account, it at least seems quite uncertain whether digital sentience dominates in expectation, even if we grant that artificial sentience does.

Reasons to doubt that “traditional” biological sentience dominates in expectation

A reason to doubt that “traditional” sentience dominates is that, whatever one’s theory of sentience, it seems likely that sentience can be created artificially — i.e. in a way that we would deem artificial. (An example might be further developed and engineered versions of brain organoids.) Specifically, regardless of which physical processes or mechanisms we take to be critical to sentience, those processes or mechanisms can most likely be replicated in other systems than just live biological animals as we know them.

If we combine this premise with an assumption of continued technological evolution (which likely holds true in the future scenarios that contain the largest numbers of sentient beings), it overall seems doubtful that the majority of future beings will, in expectation, be “traditional” biological organisms — especially when we consider the prospect of large futures that involve space colonization.

More broadly, we have reason to doubt the “traditional” biological dominance position for the same reason that we have reason to doubt the digital dominance position, namely that the position entails a rather strong and specific claim along the lines of: “this particular class of sentient being is most numerous in expectation”. And, as in the case of digital dominance, it seems that there are many plausible ways in which this could turn out to be wrong, such as due to neuron-inspired or other yet-to-be-invented artificial systems that could become both sentient and prevalent.

Why does this matter?

Whether artificial sentience dominates in expectation plausibly matters for our priorities (though it is unclear how much exactly, since some of our most robust strategies for reducing suffering are probably worth pursuing in roughly the same form regardless). Yet those who take artificial sentience seriously might adopt suboptimal priorities and communication strategies if they primarily focus on digital sentience in particular.

At the level of priorities, they might restrict their focus to an overly narrow set of potentially sentient systems, and perhaps neglect the great majority of future suffering as a result. At the level of communication, they might needlessly hamper their efforts to raise concern for artificial sentience by mostly framing the issue in terms of digital sentience. This framing might lead people who are skeptical of digital sentience to mistakenly dismiss the broader issue of artificial sentience.

Similar points apply to those who believe that “traditional” biological sentience dominates in expectation: they, too, might restrict their focus to an overly narrow set of systems, and thereby neglect to consider a wide range of scenarios that may intuitively seem like science fiction, yet which nevertheless deserve serious consideration on reflection (e.g. scenarios that involve a large-scale spread of suffering due to space colonization).

In summary, there are reasons to doubt both the digital dominance position and the “traditional” biological dominance position. Moreover, it seems that there is something to be gained by not using the narrow term “digital sentience” to refer to the broader category of “artificial sentience”, and by being clear about just how much broader this latter category is.

Controlling for a thinker’s big idea

This post is an attempt to write up what I consider a useful lesson about intellectual discourse. The lesson, in short, is that it is often helpful to control for a thinker’s big idea. That is, a proponent of a big idea may often overstate the plausibility or significance of their big idea, especially if this thinker’s intellectual persona has become strongly tied to that idea.

This is in some sense a trivial lesson, but it is also a lesson that seems to emerge quite consistently when one does research and tries to form a view on virtually any topic. Since I have not seen anyone write about this basic yet important point, I thought it might be worth doing so here (though others have probably written about it somewhere, and awareness of the phenomenon is no doubt widespread among professional researchers).


Contents

  1. Typical patterns of overstatement, overconfidence, and overemphasis
  2. Analogy to sports fans
  3. Controlling for the distorting influence of overconfidence and skewed emphases
  4. Examples of thinkers with big ideas
    1. Kristin Neff and self-compassion
    2. Jonathan Haidt and the social intuitionist model of moral judgment
    3. David Pinsof and hidden status motives
    4. Robin Hanson and grabby aliens
    5. David Pearce and the abolitionist project
    6. Other examples
  5. Concluding note: The deeper point applies to all of us

Typical patterns of overstatement, overconfidence, and overemphasis

The tendency for a thinker to overstate their big idea often takes the following form: in a condition where many different factors contribute to some given effect, a thinker with a big idea can be inclined to highlight one particular factor, and to then confidently present this one factor as though it is the only relevant one, in effect downplaying other plausible factors.

Another example might be when a thinker narrowly advocates their own approach to a particular problem while quietly neglecting other approaches that may be similarly, or even more, helpful.

In many cases, the overstatement mostly takes the form of skewed emphasis and framing rather than explicit claims about the relative importance of different factors or approaches.

Analogy to sports fans

An illustrative analogy might be sports fans who are deeply invested in their favorite team. For example, if a group of football fans argue that their favorite team is objectively the best one ever, we would rightly be skeptical of this assessment. Likewise, if such fans complain that referee calls against their team tend to be deeply unfair, we should hardly be eager to trust them. The sports fans are not impartial judges on these matters.

While we might prefer to think that intellectuals are fundamentally different from dedicated sports fans, it seems that there are nevertheless some significant similarities. For instance, in both cases, identity and reputation tend to be on the line, and unconscious biases often push beliefs in self-serving directions.

Indeed, across many domains of life, we humans frequently act more like sports fans than we would like to admit. Hence, the point here is not that intellectuals are uniquely similar to sports fans, but simply that intellectuals are also — like everyone else — quite like sports fans in some significant respects, such as when they cheer for their own ideas. (An important corollary of this observation is that we usually need to consult the work of many different thinkers if we are to acquire a balanced picture of a given issue — an insight that is, of course, also widely appreciated among professional researchers.)

I should likewise clarify that my point isn’t that scholars with a big idea cannot be right about their big idea; sometimes they are. My point is merely that if a thinker is promoting some big idea that has become tied to their identity and reputation, then we have good reason to be a priori skeptical of this thinker’s own assessment of the idea. (And, of course, this point about a priori skepticism also applies to me, to the extent that I am advancing any particular idea, big or small.)

Controlling for the distorting influence of overconfidence and skewed emphases

Why do people, both scholars and laypeople, often state their views with excessive confidence? Studies suggest that a big part of the reason is that overconfidence quite simply works at persuading others.

Specifically, in studies where individuals can earn money if they convince others that they did well in an intelligence test, participants tend to display overconfidence in order to be more convincing, and this overconfidence in turn makes them significantly more persuasive to their audience. In other words, overconfidence can be an effective tool for influencing and even outright distorting the beliefs of receivers.

These findings suggest that we actively need to control for overconfidence, lest our minds fall for its seductive powers. Similar points apply to communication that emphasizes some ideas while unduly neglecting others. That is, it is not just overconfidence that can distort the beliefs of receivers, but also the undue neglect of alternative views, interpretations, approaches, and so on (cf. the availability heuristic and other salience-related biases).

Examples of thinkers with big ideas

Below, I will briefly list some examples of thinkers who appear, in my view, to overstate or overemphasize one or more big ideas. I should note that I think each of the thinkers mentioned below has made important contributions that are worth studying closely, even if they may at times overstate their big ideas.

Kristin Neff and self-compassion

Kristin Neff places a strong emphasis on self-compassion. In her own words: “I guess you could say that I am a self-compassion evangelist”. And there is indeed a large literature that supports its wide-ranging benefits, from increased self-control to greater wellbeing. Even so, it seems to me that Neff overemphasizes self-compassion relative to other important traits and constructs, such as compassion for others, which is also associated with various benefits. (In contrast to Neff, many psychologists working in the tradition of compassion-focused therapy display a more balanced focus on compassion for both self and others, see e.g. Gilbert et al., 2011; Kirby et al., 2019.)

One might object that Neff specializes in self-compassion and that she cannot be expected to compare self-compassion to other important traits and constructs. That might be a fair objection, but it is also an objection that in some sense grants the core point of this post, namely that we should not expect scholars to provide a balanced assessment of their own big ideas (relative to other ideas and approaches).

Jonathan Haidt and the social intuitionist model of moral judgment

Jonathan Haidt has prominently defended a social intuitionist approach to moral judgment. Simply put, this model says that our moral judgments are almost always dictated by immediate intuitions and then later rationalized by reasons.

Haidt’s model no doubt has a lot of truth to it, as virtually all of his critics seem to concede: our intuitions do play a large role in forming our moral judgments, and the reasons we give to justify our moral judgments are often just post-hoc rationalizations. The problem, however, is that Haidt appears to greatly understate the role that reasons and reasoning can play in moral judgments. That is, there is a lot of evidence suggesting that moral reasoning often does play an important role in people’s moral judgments, and that it frequently plays a larger role than Haidt’s model seems to allow (see e.g. Narvaez, 2008; Paxton & Greene, 2010; Feinberg et al., 2012).

David Pinsof and hidden status motives

David Pinsof emphasizes the hidden status motives underlying human behavior. In a world where people systematically underestimate the influence of status motives, Pinsof’s work seems like a valuable contribution. Yet it also seems like he often goes too far and overstates the role of status motives at the expense of other motives (which admittedly makes for an interesting story about human behavior). Likewise, it appears that Pinsof makes overly strong claims about the need to hide status motives.

In particular, Pinsof argues that drives for status cannot be openly acknowledged, as that would be self-defeating and undermine our status. Why? Because acknowledging our status drives makes us look like mere status-seekers, and mere status-seekers seem selfish, dishonest, and low in status. But this seems inaccurate to me, and it appears to assume that humans are entirely driven by status motives while simultaneously needing to seem altogether uninfluenced by status motives. An alternative view is that status motives exert a significant, though not all-powerful, pull on our behavior, and acknowledging this pull need not make us appear selfish, dishonest, or low-status. On the contrary, admitting that we have status drives (as everyone does) may signal a high level of self-awareness and honesty, and it hardly needs to paint us as selfish or low-status (since, again, we are simply acknowledging that we possess some basic drives that are shared by everyone).

It is also worth noting that Pinsof seems to contradict himself in this regard, since he himself openly acknowledges his own status drives, and he does not appear to believe that this open acknowledgment is self-defeating or greatly detrimental to his social status, perhaps quite the contrary. Indeed, by openly discussing both his own and others’ hidden status motives, it seems that Pinsof has greatly boosted his social status rather than undermined it.

Robin Hanson and grabby aliens

Robin Hanson has many big ideas, and he seems overconfident about many of them, from futarchy to grabby aliens. To keep this section short, I will focus on his ideas related to grabby aliens, which basically entail that loud and clearly visible aliens explain why we find ourselves at such an early time in the history of the universe, as such aliens would prevent later origin dates.

To be clear, I think Hanson et al.’s grabby aliens model is an important contribution. The model makes some simplifying assumptions, such as dividing aliens into quiet aliens that “don’t expand or change much” and loud aliens that “visibly change the volumes they control”, and Hanson et al. then proceed to explore the implications of these simplifying assumptions, which makes sense. Where things get problematic, however, is when Hanson goes on to make strong statements based on his model, without adding the qualification that his conclusions rely on some strong and highly simplifying assumptions. An example of a strong statement is the claim that loud aliens are “our most robust explanation for why humans have appeared so early in the history of the universe.”

Yet there are many ways in which the simplifying assumptions of the model might be wrong, and which Hanson seems to either ignore or overconfidently dismiss. To mention just two: First, it is conceivable that much later origin dates are impossible, or at least prohibitively improbable, due to certain stellar and planetary conditions becoming highly unfavorable to complex life in the future (cf. Burnetti, 2016; 2017). Since we do not have a good understanding of the conditions necessary for the evolution of complex life, it seems that we ought to place a significant probability on this possibility (while also placing a significant probability on the assumption that the evolution of complex life will remain possible for at least a trillion years).

Second, Hanson et al.’s basic model might be wrong in that expansionist alien civilizations could generally converge to be quiet, in the sense of not being clearly visible; or at least some fraction of expansionist civilizations could be quiet (both possibilities are excluded by Hanson et al.’s model). This is not a minor detail, since if we admit the possibility of such aliens, then our observations do not necessarily give us much evidence about expansionist aliens, and such aliens could even be here already. Likewise, quiet expansionist aliens could be the explanation for early origin dates rather than loud expansionist ones.

When considering such alternative explanations, it becomes clear that the claim that loud aliens explain our seemingly early position in time is just one among many hypotheses, and it is quite debatable whether it is the most plausible or robust one (see also Friederich & Wenmackers, 2023).

David Pearce and the abolitionist project

David Pearce is another thinker who has many big and profound ideas. By far the biggest of these ideas is that we should use biotechnology to abolish suffering throughout the living world, what he calls the abolitionist project. This is an idea that I strongly support in principle. Yet where I would disagree with Pearce, and where it seems to me that he is overconfident, is when it comes to the question of whether pushing for the abolitionist project is the best use of marginal resources for those seeking to reduce suffering.

Specifically, when we consider the risk of worst-case outcomes due to bad values and political dynamics, it seems likely that other aims are more pressing, such as increasing the priority that humanity devotes to the reduction of suffering, as well as improving our institutions such that they are less prone to worst-case outcomes (see also Tomasik, 2016; Vinding, 2020, ch. 13; 2021; 2022). At the very least, it seems that there is considerable uncertainty as to which specific priorities are most helpful for reducing suffering.

Other examples

Some other examples of thinkers who appear to overstate their big ideas include Bryan Caplan and Jason Brennan with their strong statements against democracy (see e.g. Farrell et al., 2022), as well as Paul Bloom when he makes strong claims against the utility of emotional empathy (see e.g. Christov-Moore & Iacoboni, 2014; Ashar et al., 2017; Barish, 2023).

Indeed, Bloom’s widely publicized case against empathy is a good example of how this tendency of overstatement is not confined to just a single individual, as there is also an inclination among publishers and the media to amplify strong and dramatic claims that capture people’s attention. This can serve as yet another force that pushes us toward hearing strong claims and simple narratives, and away from getting sober and accurate perspectives, which are often more complex and nuanced. (For example, contrast Bloom’s case against empathy with the more complex perspective that emerges in Ashar et al., 2017.)

Concluding note: The deeper point applies to all of us

Both for promoters and consumers of ideas, it is worth being wary of the tendency to become unduly attached to any single idea or perspective (i.e. attached based on insufficient reasons or evidence). Such attachment can skew our interpretations and ultimately get in the way of a commitment to form more complete and informed perspectives on important issues.
