When Machines Improve Machines

The following is an excerpt from my book Reflections on Intelligence (2016/2020).

 

The term “Artificial General Intelligence” (AGI) refers to a machine that can perform any task at least as well as any human. This is often considered the holy grail of artificial intelligence research, and also the thing that many consider likely to give rise to an “intelligence explosion”, the reason being that machines will then be able to take over the design of smarter machines, and hence their further development will no longer be held back by the slowness of humans. Luke Muehlhauser and Anna Salamon express the idea in the following way:

Once human programmers build an AI with a better-than-human capacity for AI design, the instrumental goal for self-improvement may motivate a positive feedback loop of self-enhancement. Now when the machine intelligence improves itself, it improves the intelligence that does the improving.

(Muehlhauser & Salamon, 2012, p. 13)

This seems like a radical shift, yet is it really? As author and software engineer Ramez Naam has pointed out (Naam, 2010), not quite, since we already use our latest technology to improve on itself and build the next generation of technology. As I argued in the previous chapter, the way new tools are built and improved is by means of an enormous conglomerate of tools, and newly developed tools merely become an addition to this existing set of tools. In Naam’s words:

[A] common assertion is that the advent of greater-than-human intelligence will herald The Singularity. These super intelligences will be able to advance science and technology faster than unaugmented humans can. They’ll be able to understand things that baseline humans can’t. And perhaps most importantly, they’ll be able to use their superior intellectual powers to improve on themselves, leading to an upward spiral of self improvement with faster and faster cycles each time.

In reality, we already have greater-than-human intelligences. They’re all around us. And indeed, they drive forward the frontiers of science and technology in ways that unaugmented individual humans can’t.

These superhuman intelligences are the distributed intelligences formed of humans, collaborating with one another, often via electronic means, and almost invariably with support from software systems and vast online repositories of knowledge.

(Naam, 2010)

The design and construction of new machines is not the product of human ingenuity alone, but of a large system of advanced tools in which human ingenuity is just one component, albeit a component that plays many roles. And these roles, it must be emphasized, go way beyond mere software engineering – they include everything from finding ways to drill and transport oil more effectively, to coordinating sales and business agreements across countless industries.

Moreover, as Naam hints, superhuman intellectual abilities already play a crucial role in this design process. For example, computer programs make illustrations and calculations that no human could possibly make, and these have become indispensable components in the design of new tools in virtually all technological domains. In this way, superhuman intellectual abilities are already a significant part of the process of building superhuman intellectual abilities. This has led to continued growth, yet hardly an intelligence explosion.

Naam gives a specific example of an existing self-improving “superintelligence” (a “super” goal achiever, that is), namely Intel:

Intel employs giant teams of humans and computers to design the next generation of its microprocessors. Faster chips mean that the computers it uses in the design become more powerful. More powerful computers mean that Intel can do more sophisticated simulations, that its CAD (computer aided design) software can take more of the burden off of the many hundreds of humans working on each chip design, and so on. There’s a direct feedback loop between Intel’s output and its own capabilities. …

Self-improving superintelligences have changed our lives tremendously, of course. But they don’t seem to have spiraled into a hard takeoff towards “singularity”. On a percentage basis, Google’s growth in revenue, in employees, and in servers have all slowed over time. It’s still a rapidly growing company, but that growth rate is slowly decelerating, not accelerating. The same is true of Intel and of the bulk of tech companies that have achieved a reasonable size. Larger typically means slower growing.

My point here is that neither superintelligence nor the ability to improve or augment oneself always lead to runaway growth. Positive feedback loops are a tremendously powerful force, but in nature (and here I’m liberally including corporate structures and the worldwide market economy in general as part of ‘nature’) negative feedback loops come into play as well, and tend to put brakes on growth.

(Naam, 2010)

I quote Naam at length here because he makes this important point well, and because he is an expert with experience in the pursuit of using technology to make better technology. In addition to Naam’s point about Intel and other companies that improve themselves, I would add that although these are enormously competent collectives, they still constitute only a tiny part of the larger collective system that is the world economy, a system to which they contribute only modestly and upon which they are entirely dependent.
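Naam’s point about competing feedback loops can be made concrete with a toy model (my illustration, not Naam’s, with arbitrary parameter values): pure positive feedback yields exponential growth, while adding even a simple negative feedback term, as in the logistic equation, yields growth that decelerates as the system approaches its constraints — much like the large tech companies whose growth rates slow as they mature.

```python
# Toy comparison: pure positive feedback vs. positive feedback
# checked by a negative one (logistic growth). Illustrative only;
# the growth rate r and ceiling k are arbitrary choices.

def exponential_step(x, r=0.1):
    # growth proportional to current size: dx = r * x
    return x + r * x

def logistic_step(x, r=0.1, k=1000.0):
    # same positive feedback, damped as x approaches the ceiling k
    return x + r * x * (1 - x / k)

x_exp, x_log = 1.0, 1.0
for _ in range(200):
    x_exp = exponential_step(x_exp)
    x_log = logistic_step(x_log)

# The exponential trajectory keeps accelerating; the logistic one
# levels off near its constraint k.
print(f"exponential: {x_exp:.0f}, logistic: {x_log:.0f}")
```

The two curves are nearly indistinguishable at first, which is part of why extrapolating early rapid growth is so tempting; the constraint only becomes visible later.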

“The” AI?

The discussion above hints at a deeper problem in the scenario Muehlhauser and Salamon lay out, namely the idea that we will build an AI that will be a game-changer. This idea seems widespread in modern discussions about both risks and opportunities of AI. Yet why should this be the case? Why should the most powerful software competences we develop in the future be concentrated into anything remotely like a unitary system?

The human mind is unitary and trapped inside a single skull for evolutionary reasons. The only way additional cognitive competences could be added was by lumping them onto the existing core in gradual steps. But why should the extended “mind” of software that we build to expand our capabilities be bound in such a manner? In terms of the current and past trends of the development of this “mind”, it only seems to be developing in the opposite direction: toward diversity, not unity. The pattern of distributed specialization mentioned in the previous chapter is repeating itself in this area as well. What we see is many diverse systems used by many diverse systems in a complex interplay to create ever more, increasingly diverse systems. We do not appear to be headed toward any singular super-powerful system, but instead toward an increasingly powerful society of systems (Kelly, 2010).

Greater Than Individual or Collective Human Abilities?

This also hints at another way in which our speaking of “intelligent machines” is somewhat deceptive and arbitrary. For why talk about the point at which these machines become as capable as human individuals rather than, say, an entire human society? After all, it is not at the level of individuals that accomplishments such as machine building occur, but rather at the level of the entire economy. If we talked about the latter, it would be clear to us, I think, that the capabilities that are relevant for the accomplishment of any real-world goal are many and incredibly diverse, and that they are much more than just intellectual: they also require mechanical abilities and a vast array of materials.

If we talked about “the moment” when machines can do everything a society can, we would hardly be tempted to think of these machines as being singular in kind. Instead, we would probably think of them as a society of sorts, one that must evolve and adapt gradually. And I see no reason why we should not think about the emergence of “intelligent machines” with abilities that surpass human intellectual abilities in the same way.

After all, this is exactly what we see today: we gradually build new machines – both software and hardware – that can do things better than human individuals, but these are different machines that do different things better than humans. Again, there is no trend toward the building of disproportionally powerful, unitary machines. Yes, we do see some algorithms that are impressively general in nature, but their generality and capabilities still pale in comparison to the generality and the capabilities of our larger collective of ever more diverse tools (as is also true of individual humans).

Relatedly, the idea of a “moment” or “event” at which machines surpass human abilities is deeply problematic in the first place. It ignores the many-faceted nature of the capabilities to be surpassed, both in the case of human individuals and human societies, and, by extension, the gradual nature of the surpassing of these abilities. Machines have been better than humans at many tasks for centuries, yet we continue to speak as though there will be something like a “from-nothing-to-everything” moment – e.g. “once human programmers build an AI with a better-than-human capacity for AI design”. Again, this is not congruous with the way in which we actually develop software: we already have software that is superhuman in many regards, and this software already plays a large role in the collective system that builds smarter machines.

A Familiar Dynamic

It has always been the latest, most advanced tools that, in combination with the already existing set of tools, have collaborated to build the latest, most advanced tools. The expected “machines building machines” revolution is therefore not as revolutionary as it seems at first sight. The “once machines can program AI better than humans” argument seems to assume that human software engineers are the sole bottleneck of progress in the building of more competent machines, yet this is not the case. But even if it were, and if we suddenly had a thousand times as many people working to create better software, other bottlenecks would quickly emerge – materials, hardware production, energy, etc. All of these things, indeed the whole host of tasks that maintain and grow our economy, are crucial for the building of more capable machines. Essentially, we are returned to the task of advancing our entire economy, something that pretty much all humans and machines are participating in already, knowingly or not, willingly or not.

By themselves, the latest, most advanced tools do not do much. A CAD program alone is not going to build much, and the same holds true of the entire software industry. In spite of all its impressive feats, it is still just another cog in a much grander machinery.

Indeed, to say that software alone can lead to an “intelligence explosion” – i.e. a capability explosion – is akin to saying that a neuron can hold a conversation. Such statements express a fundamental misunderstanding of the level at which these accomplishments are made. The software industry, like any software program in particular, relies on the larger economy in order to produce progress of any kind, and the only way it can do so is by becoming part of – i.e. working with and contributing to – this grander system that is the entire economy. Again, individual goal-achieving ability is a function of the abilities of the collective. And it is here, in the entire economy, that the greatest goal-achieving ability is found, or rather distributed.

The question concerning whether “intelligence” can explode is therefore essentially: can the economy explode? To which we can answer that rapid increases in the growth rate of the world economy certainly have occurred in the past, and some argue that this is likely to happen again in the future (Hanson 1998/2000, 2016). However, there are reasons to be skeptical of such a future growth explosion (Murphy, 2011; Modis, 2012; Gordon, 2016; Caplan, 2016; Vinding, 2017b; Cowen & Southwood, 2019).

“Intelligence Though!” – A Bad Argument

A type of argument often made in discussions about the future of AI is that we can just never know what a “superintelligent machine” could do. “It” might be able to do virtually anything we can think of, and much more than that, given “its” vastly greater “intelligence”.

The problem with this argument is that it again rests on a vague notion of “intelligence” that this machine “has a lot of”. For what exactly is this “stuff” it has a lot of? Goal-achieving ability? If so, then, as we saw in the previous chapter, “intelligence” requires an enormous array of tools and tricks that entails much more than mere software. It cannot be condensed into anything we can identify as a single machine.

Claims of the sort that a “superintelligent machine” could just do this or that complex task are extremely vague, since the nature of this “superintelligent machine” is not accounted for, and neither are the plausible means by which “it” will accomplish the extraordinarily difficult – perhaps even impossible – task in question. Yet such claims are generally taken quite seriously nonetheless, the reason being that the vague notion of “intelligence” that they rest upon is taken seriously in the first place. This, I have tried to argue, is the cardinal mistake.

We cannot let a term like “superintelligence” provide a carte blanche to make extraordinary claims or assumptions without a bare minimum of justification. I think Bostrom’s book Superintelligence is an example of this. Bostrom worries about a rapid “intelligence explosion” initiated by “an AI” throughout the book, yet offers very little in terms of arguments for why we should believe that such a rapid explosion is plausible (Hanson, 2014), not to mention what exactly it is that is supposed to explode (Hanson, 2010; 2011a).

No Singular Thing, No Grand Control Problem

The problem is that we talk about “intelligence” as though it were a singular thing; or, in the words of brain and AI researcher Jeff Hawkins, as though it were “some sort of magic sauce” (Hawkins, 2015). This is also what gives rise to the idea that “intelligence” can explode, because one of the things that this “intelligence” can do, if you have enough of it, is to produce more “intelligence”, which can in turn produce even more “intelligence”.

This stands in stark contrast to the view that “intelligence” – whether we talk about cognitive abilities in particular or goal-achieving abilities in general – is anything but singular in nature, but rather the product of countless clever tricks and hacks built by a long process of testing and learning. On this latter view, there is no single master problem to crack for increasing “intelligence”, but rather just many new tricks and hacks we can discover. And finding these is essentially what we have always been doing in science and engineering.

Robin Hanson makes a similar point in relation to his skepticism of a “blank-slate AI mind-design” intelligence explosion:

Sure if there were a super mind theory that allowed vast mental efficiency gains all at once, but there isn’t. Minds are vast complex structures full of parts that depend intricately on each other, much like the citizens of a city. Minds, like cities, best improve gradually, because you just never know enough to manage a vast redesign of something with such complex inter-dependent adaptations.

(Hanson, 2010)

Rather than a concentrated center of capability that faces a grand control problem, what we see is a development of tools and abilities that are distributed throughout the larger economy. And we “control” – i.e. specify the function of – these tools, including software programs, gradually as we make them and put them to use in practice. The design of the larger system is thus the result of our solutions to many, comparatively small “control problems”. I see no compelling reason to believe that the design of the future will be any different.


See also Chimps, Humans, and AI: A Deceptive Analogy.

Response to a Conversation on “Intelligence”

I think much confusion is caused by a lack of clarity about the meaning of the word “intelligence”, and not least a lack of clarity about the nature of the thing(s) we refer to by this word. This is especially true when it comes to discussions of artificial intelligence (AI) and the risks it may pose. A recently published conversation between Tobias Baumann and Lukas Gloor contains a lot of relevant considerations on this issue, along with some discussion of my views on it, which makes me feel compelled to respond.

The statement that gave rise to the conversation was apparently this:

> Intelligence is the only advantage we have over lions.

My response is that this is a simplistic claim. First, I take it that “intelligence” here means cognitive abilities. But cognitive abilities alone — a competent head on a body without legs or arms — will not allow one to escape from lions; it will only enable one to think of and regret all the many useful “non-cognitive” tools one would have liked to have. The sense in which humans have an advantage over other animals, in terms of what has enabled us to take over the world for better or worse, is that we have a unique set of tools — upright walk, vocal cords, hands with fine motor skills, and a brain that can acquire culture. This has enabled us, over time, to build culture, with which we have been able to develop tools that have enabled us to gain an advantage over lions, mostly in the sense of not needing to get near them, as that could easily prove fatal, even given our current level of cultural sophistication and “intelligence”.

I could hardly disagree more with the statement that “the reason we humans rule the earth is our big brain”. To the extent we do rule the Earth, there are many reasons, and the brain is just part of the story, and quite a modest one relative to what it gets credit for (which is often all of it). I think Jacob Bronowski’s The Ascent of Man is worth reading for a more nuanced and accurate picture of humanity’s ascent to power than the “it’s all due to the big brain” one.

> There is a pretty big threshold effect here between lions (and chimpanzees) and humans, where with a given threshold of intelligence, you’re also able to reap all the benefits from culture. (There might be an analogous threshold for self-improvement FOOM benefits.)

The question is what “threshold of intelligence” means in this context. Humans do not all reap the same benefits from culture — some have traits and abilities that enable them to reap far more benefits than others. And many of these traits have nothing to do with “intelligence” in any usual sense. Good looks, for instance. Or a sexy voice.

And the same holds true for cognitive abilities in particular: it is more nuanced than what measurement along a single axis can capture. For instance, some people are mathematical geniuses, yet socially inept. There are many axes along which we can measure abilities, and what allows us to build culture is all these many abilities put together. Again, it is not, I maintain, a single “special intelligence thing”, although we often talk as though it were.

For this reason, I do not believe such a FOOM threshold along a single axis makes much sense. Rather, we see progress along many axes, and when certain thresholds are crossed, this progress allows us to expand our abilities in new ways. For example, at the cultural level we may see progress beyond a certain threshold in the production of good materials, which then leads to progress in our ability to harvest energy, which then leads to better knowledge and materials, etc. A more complicated story with countless little specialized steps and cogs. As far as I can tell, this is the recurrent story of how progress happens, on every level: from biological cells to human civilization.

> Magnus Vinding seems to think that because humans do all the cool stuff “only because of tools,” innate intelligence differences are not very consequential.

I would like to see a quote that supports this statement. It is accurate to say that I think we do “all the cool stuff only because of tools”, because I think we do everything because of tools. That is, I do not think of that which we call “intelligence” as anything but the product of a lot of tools. I think it’s tools all the way down, if you will. I suppose I could even be considered an “intelligence eliminativist”, in that I think there is just a bunch of hacks; no “special intelligence thing” to be found anywhere. RNA is a tool, which has built another tool, DNA, which, among other things, has built many different brain structures, which are all tools. And so forth. It seems to me that the opposite position with respect to “intelligence” — what may be called “intelligence reification” — is the core basis of many worries about artificial intelligence take-offs.

It is not correct, however, that I think that “innate differences in intelligence [which I assume refers to IQ, not general goal-achieving ability] are not very consequential”. They are clearly consequential in many contexts. Yet IQ is far from being an exhaustive measure of all cognitive abilities (although it sure does say a lot), and cognitive abilities are far from being all that enables us to achieve the wide variety of goals we are able to achieve. It is merely one integral subset among many others.

> This seems wrong to me [MV: also to me], and among other things we can observe that e.g. von Neumann’s accomplishments were so much greater than the accomplishments that would be possible with an average human brain.

I wrote a section on von Neumann in my Reflections on Intelligence, which I will refer readers to. I will just stress, again, that I believe thinking of “accomplishments” and “intelligence” along a single axis is counterproductive. John von Neumann was no doubt a mathematical genius of the highest rank. Yet with respect to the goal of world domination in particular, which is what we seem especially concerned about in this context, putting von Neumann in charge hardly seems a recipe for success, but rather the opposite. As he reportedly said:

“If you say why not bomb them tomorrow, I say why not today? If you say today at five o’ clock, I say why not one o’ clock?”

To me, these do not seem to be the words of a brain optimized for taking over the world. If we want to look at such a brain, we should, by all appearances, rather peer into the skull of Putin or Trump (if it is indeed mainly their brain, rather than their looks, or perhaps a combination of many things, that brought them into power).

> One might argue that empirical evidence confirms the existence of a meaningful single measure of intelligence in the human case. I agree with this, but I think it’s a collection of modules that happen to correlate in humans for some reason that I don’t yet understand.

I think a good analogy is a country’s GDP. It’s a single, highly informative measure, yet a nation’s GDP is a function of countless things. This measure predicts a lot, too. Yet it clearly also leaves out a lot of information. More than that, we do not seem to fear that the GDP of a country (or a city, or indeed the whole world) will suddenly explode once it reaches a certain level. But why? (For the record, I think global GDP is a far better measure of a randomly selected human’s ability to achieve a wide variety of goals [of the kind we care about] than said person’s IQ is.)
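The “single informative measure built from many correlated modules” idea can be illustrated with a small simulation (my illustration, not part of the original conversation; all numbers are arbitrary): generate several correlated “ability” scores and extract their first principal component. A single summary axis emerges that captures much of the variation — yet far from all of it, just as GDP or IQ does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 "agents" with 5 ability modules that share a common
# factor plus module-specific noise (a toy g-factor-like structure).
n, modules = 500, 5
common = rng.normal(size=(n, 1))          # shared component
specific = rng.normal(size=(n, modules))  # module-specific part
abilities = 0.7 * common + 0.7 * specific

# First principal component: the single best one-axis summary.
centered = abilities - abilities.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s[0] ** 2 / (s ** 2).sum()

# One axis captures a large share of the variance, but far from all:
print(f"variance explained by first component: {explained:.0%}")
```

With these (arbitrary) weights, the single axis summarizes the correlated modules well while still discarding a substantial share of the variance — a highly informative measure that is nonetheless a function of many distinct things.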

> The “threshold” between chimps and humans just reflects the fact that all the tools, knowledge, etc. was tailored to humans (or maybe tailored to individuals with superior cognitive ability).

> So there’s a possible world full of lion-tailored tools where the lions are beating our asses all day?

Depending on the meaning of “lion-tailored tool”, it seems to me the answer could well be “yes”. In terms of the history of our evolution, for instance, it could well be that a lion tool in the form of, say, powerful armor could have meant that humans were killed by them in high numbers rather than the other way around.

> Further down you acknowledge that the difference is “or maybe tailored to individuals with superior cognitive ability” – but what would it mean for a tool to be tailored to inferior cognitive ability? The whole point of cognitive ability is to be good at making the most out of tool-shaped parts of the environment.

I suspect David Pearce might say that that’s a parochially male thing to say. One could also say that the whole point of cognitive abilities is to make others feel good — a drive/task that has no doubt played a large role both for human survival and the increase in our cognitive abilities and goal-achieving abilities in general, arguably just as great as “making the most out of tool-shaped parts of the environment”.

Second, I think the term “inferior cognitive ability” again overlooks that there are many dimensions along which we can measure cognitive abilities. Once again, take the mathematical genius who has bad social skills. How to best make tools — ranging from apps to statements to say to oneself — that improve the capabilities of such an individual seems likely to be different in significant ways from how to best make tools for someone who is, say, socially gifted and mathematically inept.

> Magnus takes the human vs. chimp analogy to mean that intelligence is largely “in the (tool-and-culture-rich) environment”.

I would delete the word “intelligence” and instead say that the ability to achieve goals is a product of a large set of tools, of which, in our case, the human brain is a necessary but, for virtually all of our purposes, insufficient subset.

Also, chimps display superior cognitive abilities to humans in some respects, so saying that humans are more “intelligent” than chimps, period, is, I think, misleading. The same holds true of our usual employment of the word “intelligence” in general, in my view.

> My view implies that quick AI takeover becomes more likely as society advances technologically. Intelligence would not be in the tools, but tools amplify how far you can get by being more intelligent than the competition (this might be mostly semantics, though).

First, it should be noted that “intelligence” here seems to mean “cognitive abilities”, not “the ability to achieve goals”. This distinction must be stressed. Second, as hinted above, I think the dichotomy between “intelligence” (i.e. cognitive ability) on the one hand and “tools” on the other is deeply problematic. I fail to see in what sense cognitive abilities are not tools. (And by “cognitive abilities” I also mean the abilities of computer software.) And I think the causal arrows between the different tools that determine how things unfold are far more mutual than they are according to the story that “intelligence (some subset of cognitive tools) is that which will control all other tools”.

Third, for reasons alluded to above, I think the meaning of “being more intelligent than the competition” stands in need of clarification. It is far from obvious to me what it means. More cognitively able, presumably, but in what ways? What kinds of cognitive abilities are most relevant with respect to the task of taking over the world? And how might they be likely to be created? Relevant questions to clarify, it seems to me.

Some reasons not to think that “quick AI takeover becomes more likely as society advances technologically” include the following: other agents would be more capable (closer to the notional limits of “capability”) the more technologically advanced society is; there would be more technology, already learned and mastered by others, for any single agent to learn and master in order to take over; and society will presumably learn more about the limits and risks of technology, including AI, the more technologically advanced it is, and will hence know more about what to expect and how to counter it.

 

This post was originally published at my old blog: http://magnusvinding.blogspot.dk/2017/07/response-to-conversation-on-intelligence.html
