Ontological Possibilities and the Meaningfulness of Ethics

First written: Sep. 2017. Last update: Dec. 2025.

Are there different possible outcomes given the present state of the universe? Much might seem to depend on our answer to this question. For example, if there are no alternative possible futures given the present state of the universe, one might think that ethics and efforts to improve the world cease to make sense.

An Objection Against the Meaningfulness of Ethics

We can define “global ontological possibilities” as alternative possibilities that could result from the same state of the universe as a whole. Since alternative possibilities in a strong sense seem crucial to ethical deliberation, one might assume that global ontological possibilities are necessary for ethics to get off the ground, and indeed for engagement in ethical decision-making and action to make sense. On this assumption, one could argue that ethics does not make sense due to the non-existence of global ontological possibilities.

To be sure, the assumption that ethics requires global ontological possibilities is highly controversial. For example, one may hold that we can have genuine ontological possibilities at a relative level even if there are no global ontological possibilities, and hold that ethics is meaningful given such relative possibilities. Or one could maintain that purely epistemic or ex-ante possibilities are enough for ethics to make sense.

Yet my goal in this essay is not to question the assumption above. Instead, I will argue that even if one thinks global ontological possibilities are required for ethics to make sense, one cannot reasonably reject the meaningfulness of ethics based on the claim that such possibilities do not exist.

Key Premise: Humility Is Warranted

We do not know whether global ontological possibilities exist. Given our limited understanding of the fundamental nature of reality, it seems reasonable to maintain a degree of humility on this question. Indeed, even if we have reasons to believe that possibilities of this kind most likely do not exist, it still seems overconfident to assign more than, say, a 99.9 percent probability to their non-existence.

Note that the exact probability we assign to the potential existence of global ontological possibilities is not important. The point here is simply that, from our epistemic vantage point, there is a non-zero probability that global ontological possibilities exist.

Why the Objection Fails

The probabilistic premise above implies that it is unwarranted to reject ethics based on the supposed non-existence of global ontological possibilities.

To see why, consider the claim that risks of very bad future outcomes are low. Even if this claim were true, it would not follow that such risks can reasonably be dismissed. After all, when the stakes are sufficiently high, it is not reasonable to dismiss low probabilities. And when we are discussing the meaningfulness of ethics, the stakes could in some sense not be greater, since what is at issue is whether there are any stakes at all. Given such total stakes, even extremely low probabilities are worth taking seriously. Therefore, the mere epistemic possibility that global ontological possibilities are real is sufficient to undermine the above-mentioned objection against the meaningfulness of ethics.
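To make the structure of this point explicit, we can put it in simple expected-value terms (a rough sketch, not a complete decision theory): let p > 0 be our probability that global ontological possibilities exist, and let V be the value at stake if they do. If nothing is at stake when they do not exist, the expected stakes of ethical deliberation and action come out to p · V + (1 − p) · 0 = p · V, which is positive for any non-zero p, however small, and enormous if V comprises everything that could matter at all.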

Moreover, when considering the conceivable scenarios before us, an asymmetry emerges in support of the same conclusion. If global ontological possibilities are real, and if ethical action roughly amounts to realizing the best of these possibilities — or at least avoiding the worst — we seem to have good reason to try to realize the better over the worse of these possibilities. On the other hand, if such possibilities are not real, trying to create a better world appears to have no downside in terms of which global ontological possibilities end up getting realized. Thus, when considering these two horns, we seem to have a strong reason in favor of trying to create a better world, and no reason against it.
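The structure of this asymmetry can be laid out as a simple decision matrix (a schematic summary of the two horns above, not an addition to the argument):

|  | Global possibilities are real | Global possibilities are not real |
| --- | --- | --- |
| Try to create a better world | Better possibilities may be realized | Nothing is lost |
| Do not try | Worse possibilities may be realized | Nothing is gained |

Trying weakly dominates not trying: it does at least as well in every scenario, and strictly better in at least one.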

In sum, even if we grant the controversial premise that ethics requires global ontological possibilities, it does not follow that ethics is meaningless. Given our uncertainty about whether such possibilities exist, and given what is at stake, we have good reason to pursue ethical deliberation and action regardless.

Response to a Conversation on “Intelligence”

I think much confusion is caused by a lack of clarity about the meaning of the word “intelligence”, and not least a lack of clarity about the nature of the thing(s) we refer to by this word. This is especially true when it comes to discussions of artificial intelligence (AI) and the risks it may pose. A recently published conversation between Tobias Baumann (blue text) and Lukas Gloor (orange text) contains a lot of relevant considerations on this issue, along with some discussion of my views on it, which makes me feel compelled to respond.

The statement that gave rise to the conversation was apparently this:

> Intelligence is the only advantage we have over lions.

My view is that this is a simplistic claim. First, I take it that “intelligence” here means cognitive abilities. But cognitive abilities alone — a competent head on a body without legs or arms — will not allow one to escape from lions; they will only enable one to think of and regret all the many useful “non-cognitive” tools one would have liked to have. The sense in which humans have an advantage over other animals, in terms of what has enabled us to take over the world for better or worse, is that we have a unique set of tools — an upright gait, vocal cords, hands with fine motor skills, and a brain that can acquire culture. This has enabled us, over time, to build culture, with which we have developed tools that give us an advantage over lions, mostly in the sense of not needing to get near them, as that could easily prove fatal, even given our current level of cultural sophistication and “intelligence”.

I could hardly disagree more with the statement that “the reason we humans rule the earth is our big brain”. To the extent we do rule the Earth, there are many reasons, and the brain is just part of the story, and quite a modest one relative to what it gets credit for (which is often all of it). I think Jacob Bronowski’s The Ascent of Man is worth reading for a more nuanced and accurate picture of humanity’s ascent to power than the “it’s all due to the big brain” one.

> There is a pretty big threshold effect here between lions (and chimpanzees) and humans, where with a given threshold of intelligence, you’re also able to reap all the benefits from culture. (There might be an analogous threshold for self-improvement FOOM benefits.)

The question is what “threshold of intelligence” means in this context. Not all humans reap the same benefits from culture — some have traits and abilities that enable them to reap far greater benefits than others. And many of these traits have nothing to do with “intelligence” in any usual sense. Good looks, for instance. Or a sexy voice.

And the same holds true for cognitive abilities in particular: they are more nuanced than what measurement along a single axis can capture. For instance, some people are mathematical geniuses, yet socially inept. There are many axes along which we can measure abilities, and what allows us to build culture is all these many abilities put together. Again, it is not, I maintain, a single “special intelligence thing”, although we often talk as though it were.

For this reason, I do not believe such a FOOM threshold along a single axis makes much sense. Rather, we see progress along many axes that, when certain thresholds are crossed, allows us to expand our abilities in new ways. For example, at the cultural level we may see progress beyond a certain threshold in the production of good materials, which then leads to progress in our ability to harvest energy, which then leads to better knowledge and materials, etc. A more complicated story with countless little specialized steps and cogs. As far as I can tell, this is the recurrent story of how progress happens, on every level: from biological cells to human civilization.

> Magnus Vinding seems to think that because humans do all the cool stuff “only because of tools,” innate intelligence differences are not very consequential.

I would like to see a quote that supports this statement. It is accurate to say that I think we do “all the cool stuff only because of tools”, because I think we do everything because of tools. That is, I do not think of that which we call “intelligence” as anything but the product of a lot of tools. I think it’s tools all the way down, if you will. I suppose I could even be considered an “intelligence eliminativist”, in that I think there is just a bunch of hacks; no “special intelligence thing” to be found anywhere. RNA is a tool, which has built another tool, DNA, which, among other things, has built many different brain structures, which are all tools. And so forth. It seems to me that the opposite position with respect to “intelligence” — what may be called “intelligence reification” — is the core basis of many worries about artificial intelligence take-offs.

It is not correct, however, that I think that “innate differences in intelligence [which I assume refers to IQ, not general goal-achieving ability] are not very consequential”. They are clearly consequential in many contexts. Yet IQ is far from being an exhaustive measure of all cognitive abilities (although it sure does say a lot), and cognitive abilities are far from being all that enables us to achieve the wide variety of goals we are able to achieve. It is merely one integral subset among many others.

> This seems wrong to me [MV: also to me], and among other things we can observe that e.g. von Neumann’s accomplishments were so much greater than the accomplishments that would be possible with an average human brain.

I wrote a section on von Neumann in my Reflections on Intelligence, which I will refer readers to. I will just stress, again, that I believe thinking of “accomplishments” and “intelligence” along a single axis is counterproductive. John von Neumann was no doubt a mathematical genius of the highest rank. Yet with respect to the goal of world domination in particular, which is what we seem especially concerned about in this context, putting von Neumann in charge hardly seems a recipe for success, but rather the opposite. As he reportedly said:

> If you say why not bomb them tomorrow, I say why not today? If you say today at five o’clock, I say why not one o’clock?

To me, these do not seem to be the words of a brain optimized for taking over the world. If we want to look at such a brain, we should, by all appearances, rather peer into the skull of Putin or Trump (if it is indeed mainly their brain, rather than their looks, or perhaps a combination of many things, that brought them into power).

> One might argue that empirical evidence confirms the existence of a meaningful single measure of intelligence in the human case. I agree with this, but I think it’s a collection of modules that happen to correlate in humans for some reason that I don’t yet understand.

I think a good analogy is a country’s GDP. It’s a single, highly informative measure, yet a nation’s GDP is a function of countless things. This measure predicts a lot, too. Yet it clearly also leaves out a lot of information. More than that, we do not seem to fear that the GDP of a country (or a city, or indeed the whole world) will suddenly explode once it reaches a certain level. But why? (For the record, I think global GDP is a far better measure of a randomly selected human’s ability to achieve a wide variety of goals [of the kind we care about] than said person’s IQ is.)
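To illustrate the general statistical point, here is a toy simulation (a sketch of my own, with made-up numbers; not a model of IQ or GDP) in which five correlated component abilities are summarized by a single axis, namely their first principal component:

```python
import numpy as np

# Toy illustration (hypothetical numbers): five distinct "component"
# abilities share a common factor, so they correlate, but each also
# has substantial independent variation.
rng = np.random.default_rng(0)
n = 10_000
shared = rng.normal(size=n)
components = np.column_stack(
    [0.6 * shared + 0.8 * rng.normal(size=n) for _ in range(5)]
)

# How much of the total variation does a single summary axis
# (the first principal component) capture?
cov = np.cov(components, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)  # eigenvalues in ascending order
explained = eigvals[-1] / eigvals.sum()
print(f"Variance captured by the single summary axis: {explained:.0%}")
```

With these numbers, the single axis captures roughly half of the variation: enough to be highly informative and predictive, while most of the information in the individual components lies off that axis.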

> The “threshold” between chimps and humans just reflects the fact that all the tools, knowledge, etc. was tailored to humans (or maybe tailored to individuals with superior cognitive ability).

> So there’s a possible world full of lion-tailored tools where the lions are beating our asses all day?

Depending on the meaning of “lion-tailored tool”, it seems to me the answer could well be “yes”. In terms of the history of our evolution, for instance, a lion tool in the form of, say, powerful armor might well have meant that humans were killed by lions in high numbers rather than the other way around.

> Further down you acknowledge that the difference is “or maybe tailored to individuals with superior cognitive ability” – but what would it mean for a tool to be tailored to inferior cognitive ability? The whole point of cognitive ability is to be good at making the most out of tool-shaped parts of the environment.

I think the term “inferior cognitive ability” again overlooks that there are many dimensions along which we can measure cognitive abilities. Once again, take the mathematical genius who has bad social skills. How to best make tools — ranging from apps to statements to say to oneself — that improve the capabilities of such an individual seems likely to be different in significant ways from how to best make tools for someone who is, say, socially gifted and mathematically inept.

> Magnus takes the human vs. chimp analogy to mean that intelligence is largely “in the (tool-and-culture-rich) environment”.

I would delete the word “intelligence” and instead say that the ability to achieve goals is a product of a large set of tools, of which, in our case, the human brain is a necessary but, for virtually all of our purposes, insufficient subset.

Also, chimps display superior cognitive abilities to humans in some respects, so saying that humans are more “intelligent” than chimps, period, is, I think, misleading. The same holds true of our usual employment of the word “intelligence” in general, in my view.

> My view implies that quick AI takeover becomes more likely as society advances technologically. Intelligence would not be in the tools, but tools amplify how far you can get by being more intelligent than the competition (this might be mostly semantics, though).

First, it should be noted that “intelligence” here seems to mean “cognitive abilities”, not “the ability to achieve goals”. This distinction must be stressed. Second, as hinted above, I think the dichotomy between “intelligence” (i.e. cognitive ability) on the one hand and “tools” on the other is deeply problematic. I fail to see in what sense cognitive abilities are not tools. (And by “cognitive abilities” I also mean the abilities of computer software.) And I think the causal arrows between the different tools that determine how things unfold are far more mutual than they are according to the story that “intelligence (some subset of cognitive tools) is that which will control all other tools”.

Third, for reasons alluded to above, I think the meaning of “being more intelligent than the competition” stands in need of clarification. It is far from obvious to me what it means. More cognitively able, presumably, but in what ways? What kinds of cognitive abilities are most relevant with respect to the task of taking over the world? And how might they be likely to be created? Relevant questions to clarify, it seems to me.

Some reasons not to think that “quick AI takeover becomes more likely as society advances technologically” include the following: the more technologically advanced society is, the more capable other agents would be, and thus the closer they would be to notional limits of “capabilities”; there would be more technology mastered by others that an AI system would need to learn and master in order to take over; and society may learn more about the limits and risks of technology, including AI, the more technologically advanced it is, and hence know more about what to expect and how to counter it.

This post was originally published at my old blog: http://magnusvinding.blogspot.dk/2017/07/response-to-conversation-on-intelligence.html
