My aim in this essay is to briefly review two plausible biases in relation to our expectations of future AI scenarios. Specifically, these are biases that I think risk inflating our estimates of the probability of a local, so-called FOOM takeoff.
An important point to clarify from the outset is that these biases, if indeed real, do not in themselves represent reasons to simply dismiss FOOM scenarios. It would clearly be a mistake to think so. But they do, I submit, constitute reasons to be somewhat more skeptical of them, and to re-examine our beliefs regarding FOOM scenarios. (Stronger, more direct reasons to doubt FOOM have been reviewed elsewhere.)
Egalitarian intuitions looking for upstarts
The first putative bias has its roots in our egalitarian origins. As Christopher Boehm argues in his Hierarchy in the Forest, we humans evolved in egalitarian tribes in which we created reverse dominance hierarchies to prevent domineering individuals from taking over. Boehm thus suggests that our minds are built to be acutely aware of the potential for any individual to rise and take over, perhaps even to the extent that we have specialized modules whose main task is to be attuned to this risk.
Western “Great Man” intuitions
The second putative bias is much more culturally contingent, and should be expected to be most pronounced in Western (“WEIRD”) minds. As Joe Henrich shows in his book The WEIRDest People in the World, Western minds are uniquely focused on individuals, so much so that their entire way of thinking about the world tends to revolve around individuals and individual properties (as opposed to thinking in terms of collectives and networks, which is more common among East Asian cultures).
The problem is that this Western, individualist mode of thinking, when applied straightforwardly to the dynamics of large-scale societies, is quite wrong. While it may be mnemonically pragmatic to recount history, including the history of ideas and technology, in terms of individual actions and decisions, the truth is usually far more complex than this individualist narrative lets on. As Henrich argues, innovation is largely the product of large-scale systemic factors (such as the degree of connectedness between people), and these factors are usually far more important than any individual, suggesting that Westerners tend to strongly overestimate the role that single individuals play in innovation and in history more generally. Henrich thus argues that the Western way of thinking about innovation reflects an “individualism bias” of sorts, and further notes that:
thinking about individuals and focusing on them as having dispositions and kind of always evaluating everybody [in terms of which] attributes they have … leads us to what’s called “the myth of the heroic inventor”, and that’s the idea that the great advances in technology and innovation are the products of individual minds that kind of just burst forth and give us these wonderful inventions. But if you look at the history of innovation, what you’ll find time after time was that there was lucky recombinations, people often invent stuff at the same time, and each individual only makes a small increment to a much larger, longer process.
In other words, innovation is the product of numerous small and piecemeal contributions to a much greater extent than Western “Great Man” storytelling suggests. (Of course, none of this is to say that individuals are unimportant, but merely that Westerners seem likely to vastly overestimate the influence that single individuals have on history and innovation.)
If we have mental modules specialized to look for individuals who accumulate power and take control, and if our expectations about future technology roughly conform to this pattern, with a single entity innovating its way to a takeover, then we should at least wonder whether this expectation derives partly from our forager-age intuitions rather than resting purely on solid epistemics. This is especially so given that such a view of the future is in strong tension with our actual understanding of innovation: namely, that innovation, contra Western intuition, is distributed, with increases in abilities generally being the product of countless small insights and tools rather than a few big ones.
Both of the tendencies listed above lead us (or in the second case, mostly Westerners) to focus on individual agents rather than on larger, systemic issues that may be crucial to future outcomes, yet which are less intuitively appealing for us to focus on. And there may well be more general explanations for this lack of appeal than just the two reasons listed above. Given that large-scale systemic issues of this kind were absent for almost all of our species’ history, it is unsurprising that we are not particularly inclined to focus on them (except for local signaling purposes).
Perhaps we need to control for this, and try to look more toward systemic issues than we are intuitively inclined to do. After all, the claim that the future will be dominated by AI systems in some form need not imply that the best way to influence that future is to focus on individual AI systems, as opposed to broader, institutional issues.