From AI to distant probes

The aim of this post is to present a hypothetical future scenario that challenges some of our basic assumptions and intuitions about our place in the cosmos.


Hypothetical future scenario: Earth-descendant probes

Imagine a future scenario in which AI progress continues, and where the ruling powers on Earth eventually send out advanced AI-driven probes to explore other star systems. The ultimate motives of these future Earth rulers may be mysterious and difficult to grasp from our current vantage point, yet we can nevertheless understand that their motives — in this hypothetical scenario — include the exploration of life forms that might have emerged or will emerge elsewhere in the universe. (The fact that there are already projects aimed at sending out (much less advanced) probes to other star systems is arguably some evidence of the plausibility of this future scenario.)

Such exploration may be considered important by these future Earth rulers for a number of reasons, but a prominent reason they consider it important is that it helps inform their broader strategy for the long-term future. By studying the frequency and character of nascent life elsewhere, they can build a better picture of the long-run future of life in the universe. This includes gaining a better picture of where and when these Earth descendants might eventually encounter other species — or probes — that are as advanced as themselves, and not least what these other advanced species might be like in terms of their motives and their propensities toward conflict or cooperation.

The Earth-descendant probes will take an especially strong interest in life forms that are relatively close to matching their own, functionally optimized level of technological development. Why? First, they wish to ensure that ascending civilizations never come to match their own level of technological sophistication, and they will eventually take steps to prevent this so as not to lose their power and influence over the future.

Second, they will study ascending civilizations because what takes place at that late “sub-optimized” stage may be particularly informative for estimating the nature of the fully optimized civilizations that the Earth-descendant probes might encounter in the future (at least the late sub-optimized stage of development seems more informative than do earlier stages of life where comparatively less change happens over time).

From the point of view of these distant life forms, the Earth-descendant probes are almost never visible, and when they occasionally are, they appear altogether mysterious. After all, the probes represent a highly advanced form of technology that the distant life forms do not yet understand, much less master, and the potential motives behind the study protocols of these rarely appearing probes are likewise difficult to make sense of from the outside. Thus, the distant life forms are being studied by the Earth-descendant probes without having any clear sense of their zoo-like condition.

Back to Earth

Now, what is the point of this hypothetical scenario? One point I wish to make is that this is not an absurd or unthinkable scenario. There are, I submit, no fantastical or unbelievable steps involved here, and we can hardly rule out that some version of this scenario could play out in the future. This is obviously not to say that it is the most likely future scenario, but merely that something like this scenario seems fairly plausible provided that technological development continues and eventually expands into space (perhaps around 1 to 10 percent likely?).

But what if we now make just one (theoretically) small change to this scenario such that Earth is no longer the origin of the advanced probes in question, but instead one of the perhaps many planets that are being visited and studied by advanced probes that originated elsewhere in the universe? Essentially, we are changing nothing in the scenario above, except for swapping which exact planet Earth happens to be.

Given the structural equivalence of these respective scenarios, we should hardly consider the swapped scenario to be much less plausible. Sure, we know for a fact that life has arisen on Earth, and hence the projection that Earth-originating life might eventually give rise to advanced probes is not entirely speculative. Yet there is a countervailing consideration that suggests that — conditional on a scenario equivalent to the one described above occurring — Earth is unlikely to be the first planet to give rise to advanced space probes, and is instead more likely to be observed by probes from elsewhere. 

The reason is simply that Earth is but one planet, whereas there are many other planets from which probes could have been sent to study Earth. For example, in a scenario in which a single civilization creates advanced probes that eventually go out and explore, say, a thousand other planets with life at roughly our stage of development (observed at different points in time), we would have a 1 in 1,001 chance of being that first exploring civilization — and a 1,000 in 1,001 chance of being an observed one, under this assumed scenario.

Indeed, even if the exploring civilization in this kind of scenario only ever visits, say, two other planets with life at roughly our stage, we would still be more likely to be among the observed ones than to be that first observing civilization (2 in 3 versus 1 in 3). Thus, whatever probability we assign to the hypothetical future scenario in which Earth-descendant space probes observe other life forms at roughly our stage, we should arguably assign a greater probability to a scenario in which we are being observed by similar such probes.
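
To make the counting behind these figures explicit, here is a minimal sketch of the arithmetic in Python. The setup is purely illustrative: it assumes exactly one probe-sending civilization, a fixed number of observed planets at roughly our stage, and a uniform prior over which of those roles we occupy; the function name is my own hypothetical label.

    from fractions import Fraction

    def p_we_are_the_observer(n_observed: int) -> Fraction:
        """Chance of being the single probe-sending civilization rather than
        one of the n_observed observed planets, under a uniform prior over roles."""
        return Fraction(1, n_observed + 1)

    for n in (2, 1000):
        p = p_we_are_the_observer(n)
        print(f"{n} observed planets: P(observer) = {p}, P(observed) = {1 - p}")
    # 2 observed planets    -> 1/3 observer versus 2/3 observed (the example above)
    # 1000 observed planets -> 1/1001 observer versus 1000/1001 observed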

Nevertheless, I suspect many of us will intuitively think just the opposite, namely that the scenario involving Earth-descendant probes observing others seems far more plausible than the scenario in which we are currently being observed by foreign probes. Indeed, many of us intuitively find the foreign-probes scenario to be quite ridiculous. (That is also largely the attitude that is expressed in leading scholarly books on the Fermi paradox, with scant justification.)

Yet this complete dismissal is difficult to square with the apparent plausibility — or at least the non-ridiculousness — of the “Earth-descendant probes observing others” scenario, as well as the seemingly greater plausibility of the foreign probe scenario compared to the “Earth-descendant probes observing others” scenario. There appears to be a breakdown of the transitivity of plausibility and ridiculousness at the level of our intuitions.

What explains this inconsistency?

I can only speculate on what explains this apparent inconsistency, but I suspect that various biases and cultural factors are part of the explanation.

For example, wishful thinking could well play a role: we may prefer a scenario in which Earth’s descendants will be the most advanced species in the universe over a scenario in which we are a relatively late-coming and feeble party without any unique influence over the future. This could in turn cause us to ignore or downplay any considerations that speak against our preferred beliefs. And, of course, apart from our relative feebleness, being observed by an apparently indifferent superpower that does not intervene to prevent even the most gratuitous suffering would seem like bad news as well.

Perhaps more significantly, there is the force of cultural sentiment and social stigma. Most of us have grown up in a culture that openly ridicules the idea of an extraterrestrial presence around Earth. Taking that idea seriously has effectively been just another way of saying that you are a dumb-dumb (or worse), and few of us want to be seen in that way. For the human mind, that is a pressure so strong that it can move continents, and even block mere open-mindedness.

Given the unreasonable effectiveness of such cultural forces in schooling our intuitions, many of us intuitively “just know” in our bones that the idea of an extraterrestrial presence around Earth is ridiculous, with little need to invoke actual cogent reasons.

To be clear, my point here is not that we should positively believe in such a foreign presence, but merely that we may need to revise our intuitive assessment of this possibility, or at least question whether our intuitions and our level of open-mindedness toward this possibility are truly well-grounded.

What might we infer about optimized futures?

It is plausible to assume that technology will keep on advancing along various dimensions until it hits fundamental physical limits. We may refer to futures that involve such maxed-out technological development as “optimized futures”.

My aim in this post is to explore what we might be able to infer about optimized futures. Most of all, my aim is to advance this as an important question that is worth exploring further.


Contents

  1. Optimized futures: End-state technologies in key domains
  2. Why optimized futures are plausible
  3. Why optimized futures are worth exploring
  4. What can we say about optimized futures?
    1. Humanity may be close to (at least some) end-state technologies
    2. Optimized civilizations may be highly interested in near-optimized civilizations
    3. Strong technological convergence across civilizations?
    4. If technology stabilizes at an optimum, what might change?
    5. Information that says something about other optimized civilizations as an extremely coveted resource?
  5. Practical implications?
    1. Prioritizing values and institutions rather than pushing for technological progress?
    2. More research
  6. Conclusion
  7. Acknowledgments

Optimized futures: End-state technologies in key domains

The defining feature of optimized futures is that they entail end-state technologies that cannot be further improved in various key domains. Some examples of these domains include computing power, data storage, speed of travel, maneuverability, materials technology, precision manufacturing, and so on.

Of course, there may be significant tradeoffs between optimizing for these respective domains. Likewise, there could be forms of “ultimate optimization” that are only feasible at an impractical cost — say, at extreme energy levels. Yet these complications are not crucial in this context. What I mean by “optimized futures” are futures that involve practically optimal technologies within key domains (such as those listed above).

Why optimized futures are plausible

There are both theoretical and empirical reasons to think that optimized futures are plausible (by which I here mean that they are at least somewhat probable — perhaps more than 10 percent likely).

Theoretically, if the future contains advanced goal-driven agents, we should generally expect those agents to want to achieve their goals in the most efficient ways possible. This in turn predicts continual progress toward ever more efficient technologies, at least as long as such progress is cost-effective.

Empirically, we have an extensive record of goal-oriented agents trying to improve their technology so as to better achieve their aims. Humanity has gone from having virtually no technology to creating a modern society surrounded by advanced technologies of various kinds. And even in our modern age of advanced technology, we still observe persistent incentives and trends toward further improvements in many domains of technology — toward better computers, robots, energy technology, and so on.

It is worth noting that the technological progress we have observed throughout human history has generally not been the product of some overarching collective plan that was deliberately aimed at technological progress. Instead, technological progress has in some sense been more robust than that, since even in the absence of any overarching plan, progress has happened as the result of ordinary demands and desires — for faster computers, faster and safer transportation, cheaper energy, etc.

This robustness is a further reason to think that optimized futures are plausible: even without any overarching plan aimed toward such a future, and even without any individual human necessarily wanting continued technological development leading to an optimized future, we might still be pulled in that direction all the same. And, of course, this point about plausibility applies to more than just humans: it applies to any set of agents who will be — or have been — structuring themselves in a sufficiently similar way so as to allow their everyday demands to push them toward continued technological development.

An objection against the plausibility of optimized futures is that there might be a lot of hidden potential for progress far beyond what our current understanding of physics seems to allow. However, such hidden potential would presumably be discovered eventually, and it seems probable that such hidden potential would likewise be exhausted at some point, even if it may happen later and at more extreme limits than we currently envision. That is, the broad claim that there will ultimately be some fundamental limits to technological development is not predicated on the more narrow claim that our current understanding of those limits is necessarily correct; the broader claim is robust to quite substantial extensions of currently envisioned limits. Indeed, the claim that there will be no fundamental limits to future technological development overall seems a stronger and less empirically grounded claim than does the claim that there will be such limits (cf. Lloyd, 2000; Krauss & Starkman, 2004).

Why optimized futures are worth exploring

The plausibility of optimized futures is one reason to explore them further, and arguably a sufficient reason in itself. Another reason is the scope of such futures: the futures that contain the largest numbers of sentient beings will most likely be optimized futures, suggesting that we have good reason to pay disproportionate attention to such futures, beyond what their degree of plausibility alone might warrant.

Optimized futures are also worth exploring given that they seem to be a likely point of convergence for many different kinds of technological civilizations. For example, an optimized future seems a plausible outcome of both human-controlled and AI-controlled Earth-originating civilizations, and it likewise seems a plausible outcome of advanced alien civilizations. Thus, a better understanding of optimized futures can potentially apply robustly to many different kinds of future scenarios.

An additional reason it is worth exploring optimized futures is that they overall seem quite neglected, especially given how plausible and consequential such futures appear to be. While some efforts have been made to clarify the physical limits of technology (see e.g. Sandberg, 1999; Lloyd, 2000; Krauss & Starkman, 2004), almost no work has been done on the likely trajectories and motives of civilizations with optimized technology, at least to my knowledge.

Lastly, the assumption of optimized technology is a rather strong constraint that might enable us to say quite a lot about futures that conform to that assumption, suggesting that this could be a fruitful perspective to adopt in our attempts to think about and predict the future.

What can we say about optimized futures?

The question of what we can say about optimized futures is a big one that deserves elaborate analysis. In this section, I will merely raise some preliminary points and speculative reflections.

Humanity may be close to (at least some) end-state technologies

One point that is worth highlighting is that a continuation of current rates of progress seems to imply that humanity could develop end-state information-processing technologies within a few hundred years, perhaps 250 years at most (if current growth rates persist and assuming that our current understanding of the relevant physics is largely correct).

So at least in this important respect, and under the assumption of continued steady growth, humanity is surprisingly close to reaching an optimized future (cf. Lloyd, 2000).
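
For a rough sense of where a figure of this order comes from, here is a back-of-the-envelope sketch of the extrapolation. Every number is an illustrative assumption: Lloyd’s (2000) estimate of roughly 10^50-10^51 logical operations per second for one kilogram of optimally arranged matter, a present-day figure of around 10^18 operations per second, and an assumed doubling time of about two years. Varying these assumptions shifts the answer by decades, not by orders of magnitude.

    from math import log2

    # All figures below are illustrative assumptions (cf. Lloyd, 2000), not
    # precise measurements.
    ultimate_ops_per_sec = 5e50   # rough "ultimate laptop" limit for 1 kg of matter
    current_ops_per_sec = 1e18    # roughly an exascale supercomputer today
    doubling_time_years = 2.0     # assumed Moore's-law-style doubling time

    doublings_needed = log2(ultimate_ops_per_sec / current_ops_per_sec)
    years_to_limit = doublings_needed * doubling_time_years
    print(f"~{doublings_needed:.0f} doublings, ~{years_to_limit:.0f} years at this rate")
    # With these assumptions: ~109 doublings, i.e. roughly two centuries of
    # sustained exponential growth before the physical limit is reached.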

Optimized civilizations may be highly interested in near-optimized civilizations

Such potential closeness to an optimized future could have significant implications in various ways. For example, if, hypothetically, there exists an older civilization that has already reached a state of optimized technology, any younger civilization that begins to approach optimized technologies within the same cosmic region would likely be of great interest to that older civilization.

One reason it might be of interest is that the optimized technologies of the younger civilization could potentially become competitive with the optimized technologies of the older civilization, and hence the older civilization may see a looming threat in the younger civilization’s advance toward such technologies. After all, since optimized technologies would represent a kind of upper bound of technological development, it is plausible that different instances of such technologies could be competitive with each other regardless of their origins.

Another reason the younger civilization might be of interest is that its trajectory could provide valuable information regarding the likely trajectories and goals of distant optimized civilizations that the older civilization may encounter in the future. (More on this point here.)

Taken together, these considerations suggest that if a given civilization is approaching optimized technology, and if there is an older civilization with optimized technology in its vicinity, this older civilization should take an increasing interest in the younger civilization so as to learn about it before the older civilization might need to permanently halt the younger civilization’s development.

Strong technological convergence across civilizations?

Another implication of optimized futures is that the technology of advanced civilizations across the universe might be remarkably convergent. Indeed, there are already many examples of convergent evolution in biology on Earth (e.g. eyes and large brains evolving several times independently). Likewise, many cases of convergence are found in cultural evolution in both early history (e.g. the independent emergence of farming, cities, and writing across the globe) as well as in recent history (e.g. independent discoveries in science and mathematics).

Yet the degree of convergence could well be even more pronounced in the case of the end-state technologies of advanced civilizations. After all, this is a case where highly advanced agents are bumping up against the same fundamental constraints, and the optimal engineering solutions in the face of these constraints will likely converge toward the same relatively narrow space of optimal designs — or at least toward the same narrow frontier of optimal designs given potential tradeoffs between different abilities.

In other words, the technologies of advanced civilizations might be far more similar and more firmly dictated by fundamental physical limits than we intuitively expect, especially given that we in our current world are used to seeing continually changing and improving technologies.

If technology stabilizes at an optimum, what might change?

The plausible convergence and stabilization of technological hardware also raises the interesting question of what, if anything, might change and vary in optimized futures.

This question can be understood in at least two distinct ways: what might change or vary across different optimized civilizations, and what might change over time within such civilizations? And note that prevalent change of the one kind need not imply prevalent change of the other kind. For example, it is conceivable that there might be great variation across civilizations, yet virtually no change in goals and values over time within civilizations (cf. “lock-in scenarios”).

Conversely, it is conceivable that goals and values change greatly over time within all optimized civilizations, yet such change could in principle still be convergent across civilizations, such that optimized civilizations tend to undergo roughly the same pattern of changes over time (though such convergence admittedly seems unlikely conditional on there being great changes over time in all optimized civilizations).

If we assume that technological hardware becomes roughly fixed, what might still change and vary — both over time and across different civilizations — includes the following (I am not claiming that this is an exhaustive list):

  • Space expansion: Civilizations might expand into space so as to acquire more resources; and civilizations may differ greatly in terms of how much space they manage to acquire.
  • More or different information: Knowledge may improve or differ over time and space; even if fundamental physics gets solved fairly quickly, there could still be knowledge to gain about, for example, how other civilizations tend to develop.
    • There would presumably also be optimization for information that is useful and actionable. After all, even a technologically optimized probe would still have limited memory, and hence there would be a need to fill this memory with the most relevant information given its tasks and storage capacity.
  • Different algorithms: The way in which information is structured, distributed, and processed might evolve and vary over time and across civilizations (though it is also conceivable that algorithms will ultimately converge toward a relatively narrow space of optima).
  • Different goals and values: As mentioned above, goals and values might change and vary, such as due to internal or external competition, or (perhaps less likely) through processes of reflection.

In other words, even if everyone has — or is — practically the same “iPhone End-State”, what is running on these iPhone End-States, and how many of them there are, may still vary greatly, both across civilizations and over time. And these distinct dimensions of variation could well become the main focus of optimized civilizations, plausibly becoming the main dimensions on which civilizations seek to develop and compete.

Note also that there may be conflicts between improvements along these respective dimensions. For example, perhaps the most aggressive forms of space expansion could undermine the goal of gaining useful information about how other civilizations tend to develop, and hence advanced civilizations might avoid or delay aggressive expansion if the information in question would be sufficiently valuable (cf. the “info gain motive”). Or perhaps aggressive expansion would pose serious risks at the level of a civilization’s internal coordination and control, thereby risking a drift in goals and values.

In general, it seems worth trying to understand what might be the most coveted resources and the most prioritized domains of development for civilizations with optimized technology. 

Information that says something about other optimized civilizations as an extremely coveted resource?

As hinted above, one of the key objectives of a civilization with optimized technology might be to learn, directly or indirectly, about other civilizations that it could encounter in the future. After all, if a civilization manages to both gain control of optimized technology and avoid destructive internal conflicts, the greatest threat to its apex status over time will likely be other civilizations with optimized technology. More generally, the main determinant of an optimized civilization’s success in achieving its goals — whether it can maintain an unrivaled apex status or not — could well be its ability to predict and interact gainfully with other optimized civilizations.

Thus, the most precious resource for any civilization with optimized technology might be information that can prepare this civilization for better exchanges with other optimized agents, whether those exchanges end up being cooperative, competitive, or outright aggressive. In particular, since the technology of optimized civilizations is likely to be highly convergent, the most interesting features to understand about other civilizations might be what kinds of institutions, values, decision procedures, and so on they end up adopting — the kinds of features that seem more contingent.

But again, I should stress that I mention these possibilities as speculative conjectures that seem worth exploring, not as confident predictions.

Practical implications?

In this section, I will briefly speculate on the implications of the prospect of optimized futures. Specifically, what might this prospect imply in terms of how we can best influence the future?

Prioritizing values and institutions rather than pushing for technological progress?

One implication is that there may be limited long-term payoffs in pushing for better technology per se, and that it might make more sense to prioritize the improvement of other factors, such as values and institutions. That is, if the future is in any case likely to be headed toward some technological optimum, and if the values and institutions (etc.) that will run this optimal technology are more contingent and “up for grabs”, then it arguably makes sense to prioritize those more contingent aspects.

To be clear, this is not to say that values and institutions will not also be subject to significant optimization pressures that push them in certain directions, but these pressures will plausibly still be weaker by comparison. After all, a wide range of values will imply a convergent incentive to create optimized technology, yet optimized technology seems compatible with a wide range of values and institutions. And it is not clear that there is a similarly strong pull toward some “optimized” set of values or institutions given optimized technology.

This perspective is arguably also supported by recent history. For example, we have seen technology improve greatly, with computing power heading in a clear upward direction over the past decades. Yet if we look at our values and institutions, it is much less clear whether they have moved in any particular direction over time, let alone an upward direction. Our values and institutions seem to have faced much less of a directional pressure compared to our technology.

More research

Perhaps one of the best things we can do to make better decisions with respect to optimized futures is to do research on such futures. The following are some broad questions that might be worth exploring:

  • What are the likely features and trajectories of optimized futures?
    • Are optimized futures likely to involve conflicts between different optimized civilizations?
    • Other things being equal, is a smaller or a larger number of optimized civilizations generally better for reducing risks of large-scale conflicts?
    • More broadly, is a smaller or larger number of optimized civilizations better for reducing future suffering?
  • What might the likely features and trajectories of optimized futures imply in terms of how we can best influence the future?
  • Are there some values or cooperation mechanisms that would be particularly beneficial to instill in optimized technology?
    • If so, what might they be, and how can we best work to ensure their (eventual) implementation?

Conclusion

The future might in some ways be more predictable than we imagine. I am not claiming to have drawn any clear or significant conclusions about how optimized futures are likely to unfold; I have mostly aired various conjectures. But I do think the question is valuable, and that it may provide a helpful lens for exploring how we can best impact the future.

Acknowledgments

Thanks to Tobias Baumann for helpful comments.

Beware underestimating the probability of very bad outcomes: Historical examples against future optimism

It may be tempting to view history through a progressive lens that sees humanity as climbing toward ever greater moral progress and wisdom. As the famous quote popularized by Martin Luther King Jr. goes: “The arc of the moral universe is long, but it bends toward justice.”

Yet while we may hope that this is true, and do our best to increase the probability that it will be, we should also keep in mind that there are reasons to doubt this optimistic narrative. For some, the recent rise of right-wing populism is a salient reason to be less confident about humanity’s supposed path toward ever more compassionate and universal values. But it seems that we find even stronger reasons to be skeptical if we look further back in history. My aim in this post is to present a few historical examples that in my view speak against confident optimism regarding humanity’s future.


Contents

  1. Germany in year 1900
  2. Shantideva around year 700
  3. Lewis Gompertz and J. Howard Moore in the 19th century

Germany in year 1900

In 1900, Germany was far from being a paragon of moral advancement. It was a colonial power, antisemitism was widespread, and bigoted anti-Polish Germanisation policies were in effect. Yet Germany anno 1900 was nevertheless far from being like Germany anno 1939-1945, in which it was the main aggressor in the deadliest war in history and the perpetrator of the largest genocide in history.

In other words, Germany had undergone an extreme case of moral regress along various dimensions by 1942 (the year the so-called Final Solution was formulated and approved by the Nazi leadership) compared to 1900. And this development was not easy to predict in advance. Indeed, for historian of antisemitism Shulamit Volkov, a key question regarding the Holocaust is: “Why was it so hard to see the approaching disaster?”

If one had told the average German citizen in 1900 about the atrocities that their country would perpetrate four decades later, would they have believed it? What probability would they have assigned to the possibility that their country would commit atrocities on such a massive scale? I suspect it would have been very low. They might not have seen more reason to expect such moral regress than we do today when we think of our future.

A lesson that we can draw from Germany’s past moral deterioration is, to paraphrase Volkov’s question, that approaching disasters can be hard to see in advance. And this lesson suggests that we should not be too confident as to whether we ourselves might currently be headed toward disasters that are difficult to see in advance.

Shantideva around year 700

Shantideva was a Buddhist monk who lived ca. 685-763. He is best known as the author of A Guide to the Bodhisattva’s Way of Life, which is a remarkable text for its time. The core message is one of profound compassion for all sentient beings, and Shantideva not only describes such universally compassionate ideals, but also presents stirring encouragements and cogent reasoning in favor of acting on those ideals.

That such a universally compassionate text existed at such an early time is a deeply encouraging fact in one sense. Yet in another sense, it is deeply discouraging. That is, when we think about all the suffering, wars, and atrocities that humanity has caused since Shantideva expounded these ideals — centuries upon centuries of brutal violence and torment imposed upon human and non-human beings — it seems that a certain pessimistic viewpoint gains support.

In particular, it seems that we should be pessimistic about notions along the lines of “compassionate ideals presented in a compelling way will eventually create a benevolent world”. After all, even today, 1,300 years later, when we generally pride ourselves on being far more civilized and morally developed than our ancestors, we are still painfully far from observing the most basic of compassionate ideals in relation to other sentient beings.

Of course, one might think that the problem is merely that people have yet to be exposed to compassionate ideals such as those of Shantideva — or those of Mahavira or Mozi, both of whom lived more than a thousand years before Shantideva. But even if we grant that this is the main problem, it still seems that historical cases like these give us some reason to doubt whether most people ever will be exposed to such compassionate ideals, or whether most people would accept such ideals upon being exposed to them, let alone be willing to act on them. The fact that these memes have not caught on to a greater degree than they have, despite existing in such developed forms a long time ago, is some evidence that they are not nearly as virulent as many of us would have hoped.

Speaking for myself at least, I can say that I used to think that people just needed to be exposed to certain compassionate ideals and compassion-based arguments, and then they would change their minds and behaviors due to the sheer compelling nature of these ideals and arguments. But my experience over the years, e.g. with animal advocacy, has made me far more pessimistic about the force of such arguments. And the limited influence of sophisticated expositions of these ideals and arguments made many centuries ago is further evidence for that pessimism (relative to my previous expectations).

Of course, this is not to say that we can necessarily do better than to promote compassion-based ideals and arguments. It is merely to say that the best we can do might be a lot less significant — or be less likely to succeed — than what many of us had initially expected.

Lewis Gompertz and J. Howard Moore in the 19th century

Lewis Gompertz (ca. 1784-1861) and J. Howard Moore (1862-1916) both have a lot in common with Shantideva, as they likewise wrote about compassionate ethics relating to all sentient beings. (And all three of them touched on wild-animal suffering.) Yet Gompertz and Moore, along with other figures in the 19th century, wrote more explicitly about animal rights and moral vegetarianism than did Shantideva. Two observations seem noteworthy with regard to these writings.

One is that Gompertz and Moore both wrote about these topics before the rise of factory farming. That is, even though authors such as Gompertz and Moore made strong arguments against exploiting and killing other animals in the 19th century, humanity still went on to exploit and kill beings on a far greater scale than ever before in the 20th century, indeed on a scale that is still increasing today.

This may be a lesson for those who are working to reduce risks of astronomical suffering at present: even if you make convincing arguments against a moral atrocity that humanity is committing or otherwise heading toward, and even if you make these arguments at an early stage where the atrocity has yet to (fully) develop, this might still not be enough to prevent it from happening on a continuously expanding scale.

The second and closely related observation is that Gompertz and Moore both seem to have focused exclusively on animal exploitation as it existed in their own times. They did not appear to focus on preventing the problem from getting worse, even though one could argue, in hindsight, that such a strategy might have been more helpful overall.

Indeed, even though Moore’s outlook was quite pessimistic, he still seems to have been rather optimistic about the future. For instance, in the preface to his book The Universal Kinship (1906), he wrote: “The time will come when the sentiments of these pages will not be hailed by two or three, and ridiculed or ignored by the rest; they will represent Public Opinion and Law.”

Gompertz appeared similarly optimistic about the future, as he wrote in his Moral Inquiries (1824, p. 48): “though I cannot conceive how any person can shut his eyes to the general state of misery throughout the universe, I still think that it is for a wise purpose; that the evils of life, which could not properly be otherwise, will in the course of time be rectified …” Neither Gompertz nor Moore seems to have predicted that animal exploitation would get far worse in many ways (e.g. the horrible conditions of factory farms) or that it would increase vastly in scale.

This second observation might likewise carry lessons for animal activists and suffering reducers today. If these leading figures of 19th-century animal activism tacitly underestimated the risk that things might get far worse in the future, and as a result paid insufficient attention to such risks, could it be the case that most activists today are similarly underestimating and underprioritizing future risks of things getting even worse still? This question is at least worth pondering.

On a general and concluding note, it seems important to be aware of our tendencies to entertain wishful thinking and to be under the spell of the illusion of control. Just because a group of people have embraced some broadly compassionate values, and in turn identified ongoing atrocities and future risks based on those values, it does not mean that those people will be able to steer humanity’s future such that we avoid these atrocities and risks. The sad reality is that universally compassionate values are far from being in charge.
