Our uncertainty about how the future will unfold is vast, especially on long timescales. In light of this uncertainty, it may be natural to think that our uncertainty about strategies must be equally vast and intractable. My aim in this brief post is to argue that this is not the case.
- Analogies to games, competitions, and projects
- Disanalogy in scope?
- Three robust strategies for reducing suffering
- The golden middle way: Avoiding overconfidence and passivity
Analogies to games, competitions, and projects
Perhaps the most intuitive way to see that vast outcome uncertainty need not imply vast strategic uncertainty is to consider games by analogy. Take chess as an example. It allows a staggering number of possible outcomes on the board, and chess players generally have great uncertainty about how a game of chess will unfold, even as they can make some informed predictions (similar to how we can make informed predictions in the real world).
Yet despite the great outcome uncertainty, there are still many strategies and rules of thumb that are robustly beneficial for increasing one’s chances of winning a game of chess. A trivially obvious one is to not lose pieces without good reason, yet seasoned chess players will know a long list of more advanced strategies and heuristics that tend to be beneficial in many different scenarios. (For an example of such a list, see e.g. here.)
Of course, chess is by no means the only example. Across a wide range of board games and video games, the same basic pattern is found: despite vast uncertainty about specific outcomes, there are clear heuristics and strategies that are robustly beneficial.
Indeed, this holds true in virtually any sphere of competition. Politicians cannot predict exactly how an election campaign will unfold, yet they can usually still identify helpful campaign strategies; athletes cannot predict how a given match will develop, yet they can still be reasonably confident about what constitutes good moves and game plans; companies cannot predict market dynamics in detail, yet they can still identify many objectives that would help them beat the competition (e.g. hire the best people and ensure high customer satisfaction).
The point also applies beyond the realm of competition. For instance, when engineers undertake a large construction project, there are usually many uncertainties as to how the construction process will unfold and what challenges might arise. Yet they are generally still able to identify strategies that can address unforeseen challenges and get the job done. The same goes for just about any project, including cooperative projects between parties with different aims: detailed outcomes are exceedingly difficult to predict, yet it is generally (more) feasible to identify beneficial strategies.
Disanalogy in scope?
One might object that the examples above all involve rather narrow aims, and those aims differ greatly from impartial aims that relate to the interests of all sentient beings. This is a fair point, yet I do not think it undermines these analogies or the core point that they support.
Granted, when we move from narrower to broader aims and endeavors, our uncertainty about the relevant outcomes will tend to increase — e.g. when our aims involve far more beings and far greater spans of time. And when the outcome space and its associated uncertainty increase, we should also expect our strategic uncertainty to become greater. Yet it plausibly still holds true that we can identify at least some reasonably robust strategies, despite the increase in uncertainty that is associated with impartial aims. At a minimum, it seems plausible that our strategic uncertainty is still smaller than our outcome uncertainty.
After all, if such a pattern of lower strategic uncertainty holds true of a wide range of endeavors on a smaller scale, it seems reasonable to expect that it will apply on larger scales too. Besides, it appears that at least some of the examples mentioned in the previous section would still stand even if we greatly increased their scale. For example, in the case of many video games, it seems that we could increase the scale of the game by an arbitrary amount without meaningfully changing the most promising strategies — e.g. accumulate resources, gain more insights, strengthen your position. And similar strategies are plausibly quite robust relative to many goals in the real world as well, on virtually any scale.
Three robust strategies for reducing suffering
If we grant that we can identify some strategies that are robustly beneficial from an impartial perspective, this naturally raises the question of what these strategies might be. The following are three examples of strategies for reducing suffering that seem especially robust and promising to me. (This is by no means an exhaustive list.)
- Movement and capacity building: Expand the movement of people who strive to reduce suffering, and build a healthy and sustainable culture around this movement. Capacity building also includes efforts to increase the insights and resources available to the movement.
- Promote concern for suffering: Increase the level of priority that people devote to the prevention of suffering, and increase the amount of resources that society devotes to its alleviation.
- Promote cooperation: Increase society’s ability and willingness to engage in cooperative dialogues and positive-sum compromises that can help steer us away from bad outcomes.
The golden middle way: Avoiding overconfidence and passivity
To be clear, I do not mean to invite complacency about the risk that some apparently promising strategies could prove harmful. But I think it is worth keeping in mind that, just as there are costs associated with overconfidence, there are also costs associated with being too uncertain and too hesitant to act on the strategies that seem most promising. All in all, I think we have good reasons to pursue strategies such as those listed above, while still keeping in mind that we do face great strategic uncertainty.