When Machines Improve Machines

The following is an excerpt from my book Reflections on Intelligence (2016/2020).

 

The term “Artificial General Intelligence” (AGI) refers to a machine that can perform any task at least as well as any human. This is often considered the holy grail of artificial intelligence research, and it is also what many consider likely to give rise to an “intelligence explosion”, the reason being that machines will then be able to take over the design of smarter machines, and hence their further development will no longer be held back by the slowness of humans. Luke Muehlhauser and Anna Salamon express the idea in the following way:

Once human programmers build an AI with a better-than-human capacity for AI design, the instrumental goal for self-improvement may motivate a positive feedback loop of self-enhancement. Now when the machine intelligence improves itself, it improves the intelligence that does the improving.

(Muehlhauser & Salamon, 2012, p. 13)

This seems like a radical shift, yet is it really? As author and software engineer Ramez Naam has pointed out (Naam, 2010), not quite, since we already use our latest technology to improve on itself and build the next generation of technology. As I argued in the previous chapter, the way new tools are built and improved is by means of an enormous conglomerate of tools, and newly developed tools merely become an addition to this existing set of tools. In Naam’s words:

[A] common assertion is that the advent of greater-than-human intelligence will herald The Singularity. These super intelligences will be able to advance science and technology faster than unaugmented humans can. They’ll be able to understand things that baseline humans can’t. And perhaps most importantly, they’ll be able to use their superior intellectual powers to improve on themselves, leading to an upward spiral of self improvement with faster and faster cycles each time.

In reality, we already have greater-than-human intelligences. They’re all around us. And indeed, they drive forward the frontiers of science and technology in ways that unaugmented individual humans can’t.

These superhuman intelligences are the distributed intelligences formed of humans, collaborating with one another, often via electronic means, and almost invariably with support from software systems and vast online repositories of knowledge.

(Naam, 2010)

The design and construction of new machines is not the product of human ingenuity alone, but of a large system of advanced tools in which human ingenuity is just one component, albeit a component that plays many roles. And these roles, it must be emphasized, go way beyond mere software engineering – they include everything from finding ways to drill and transport oil more effectively, to coordinating sales and business agreements across countless industries.

Moreover, as Naam hints, superhuman intellectual abilities already play a crucial role in this design process. For example, computer programs make illustrations and calculations that no human could possibly make, and these have become indispensable components in the design of new tools in virtually all technological domains. In this way, superhuman intellectual abilities are already a significant part of the process of building superhuman intellectual abilities. This has led to continued growth, yet hardly an intelligence explosion.

Naam gives a specific example of an existing self-improving “superintelligence” (a “super” goal achiever, that is), namely Intel:

Intel employs giant teams of humans and computers to design the next generation of its microprocessors. Faster chips mean that the computers it uses in the design become more powerful. More powerful computers mean that Intel can do more sophisticated simulations, that its CAD (computer aided design) software can take more of the burden off of the many hundreds of humans working on each chip design, and so on. There’s a direct feedback loop between Intel’s output and its own capabilities. …

Self-improving superintelligences have changed our lives tremendously, of course. But they don’t seem to have spiraled into a hard takeoff towards “singularity”. On a percentage basis, Google’s growth in revenue, in employees, and in servers have all slowed over time. It’s still a rapidly growing company, but that growth rate is slowly decelerating, not accelerating. The same is true of Intel and of the bulk of tech companies that have achieved a reasonable size. Larger typically means slower growing.

My point here is that neither superintelligence nor the ability to improve or augment oneself always lead to runaway growth. Positive feedback loops are a tremendously powerful force, but in nature (and here I’m liberally including corporate structures and the worldwide market economy in general as part of ‘nature’) negative feedback loops come into play as well, and tend to put brakes on growth.

(Naam, 2010)

I quote Naam at length here because he makes this important point well, and because he is an expert with experience in the pursuit of using technology to make better technology. In addition to Naam’s point about Intel and other companies that improve themselves, I would add that although these are enormously competent collectives, they still constitute only a tiny part of the larger collective system that is the world economy, a system to which they contribute modestly and upon which they are entirely dependent.

“The” AI?

The discussion above hints at a deeper problem in the scenario Muehlhauser and Salamon lay out, namely the idea that we will build an AI that will be a game-changer. This idea seems widespread in modern discussions about both risks and opportunities of AI. Yet why should this be the case? Why should the most powerful software competences we develop in the future be concentrated into anything remotely like a unitary system?

The human mind is unitary and trapped inside a single skull for evolutionary reasons. The only way additional cognitive competences could be added was by lumping them onto the existing core in gradual steps. But why should the extended “mind” of software that we build to expand our capabilities be bound in such a manner? In terms of the current and past trends of the development of this “mind”, it only seems to be developing in the opposite direction: toward diversity, not unity. The pattern of distributed specialization mentioned in the previous chapter is repeating itself in this area as well. What we see is many diverse systems used by many diverse systems in a complex interplay to create ever more, increasingly diverse systems. We do not appear to be headed toward any singular super-powerful system, but instead toward an increasingly powerful society of systems (Kelly, 2010).

Greater Than Individual or Collective Human Abilities?

This also hints at another way in which our speaking of “intelligent machines” is somewhat deceptive and arbitrary. For why talk about the point at which these machines become as capable as human individuals rather than, say, an entire human society? After all, it is not at the level of individuals that accomplishments such as machine building occur, but rather at the level of the entire economy. If we talked about the latter, it would be clear to us, I think, that the capabilities that are relevant for the accomplishment of any real-world goal are many and incredibly diverse, and that they are much more than just intellectual: they also require mechanical abilities and a vast array of materials.

If we talked about “the moment” when machines can do everything a society can, we would hardly be tempted to think of these machines as being singular in kind. Instead, we would probably think of them as a society of sorts, one that must evolve and adapt gradually. And I see no reason why we should not think about the emergence of “intelligent machines” with abilities that surpass human intellectual abilities in the same way.

After all, this is exactly what we see today: we gradually build new machines – both software and hardware – that can do things better than human individuals, but these are different machines that do different things better than humans. Again, there is no trend toward the building of disproportionally powerful, unitary machines. Yes, we do see some algorithms that are impressively general in nature, but their generality and capabilities still pale in comparison to the generality and the capabilities of our larger collective of ever more diverse tools (as is also true of individual humans).

Relatedly, the idea of a “moment” or “event” at which machines surpass human abilities is deeply problematic in the first place. It ignores the many-faceted nature of the capabilities to be surpassed, both in the case of human individuals and human societies, and, by extension, the gradual nature of the surpassing of these abilities. Machines have been better than humans at many tasks for centuries, yet we continue to speak as though there will be something like a “from-nothing-to-everything” moment – e.g. “once human programmers build an AI with a better-than-human capacity for AI design”. Again, this is not congruous with the way in which we actually develop software: we already have software that is superhuman in many regards, and this software already plays a large role in the collective system that builds smarter machines.

A Familiar Dynamic

It has always been the latest, most advanced tools that, in combination with the already existing set of tools, have collaborated to build the latest, most advanced tools. The expected “machines building machines” revolution is therefore not as revolutionary as it seems at first sight. The “once machines can program AI better than humans” argument seems to assume that human software engineers are the sole bottleneck of progress in the building of more competent machines, yet this is not the case. But even if it were, and if we suddenly had a thousand times as many people working to create better software, other bottlenecks would quickly emerge – materials, hardware production, energy, etc. All of these things, indeed the whole host of tasks that maintain and grow our economy, are crucial for the building of more capable machines. Essentially, we are returned to the task of advancing our entire economy, something that pretty much all humans and machines are participating in already, knowingly or not, willingly or not.

By themselves, the latest, most advanced tools do not do much. A CAD program alone is not going to build much, and the same holds true of the entire software industry. In spite of all its impressive feats, it is still just another cog in a much grander machinery.

Indeed, to say that software alone can lead to an “intelligence explosion” – i.e. a capability explosion – is akin to saying that a neuron can hold a conversation. Such statements express a fundamental misunderstanding of the level at which these accomplishments are made. The software industry, like any particular software program, relies on the larger economy in order to produce progress of any kind, and the only way it can do so is by becoming part of – i.e. working with and contributing to – this grander system that is the entire economy. Again, individual goal-achieving ability is a function of the abilities of the collective. And it is here, in the entire economy, that the greatest goal-achieving ability is found, or rather distributed.

The question concerning whether “intelligence” can explode is therefore essentially: can the economy explode? To which we can answer that rapid increases in the growth rate of the world economy certainly have occurred in the past, and some argue that this is likely to happen again in the future (Hanson 1998/2000, 2016). However, there are reasons to be skeptical of such a future growth explosion (Murphy, 2011; Modis, 2012; Gordon, 2016; Caplan, 2016; Vinding, 2017b; Cowen & Southwood, 2019).

“Intelligence Though!” – A Bad Argument

A type of argument often made in discussions about the future of AI is that we can just never know what a “superintelligent machine” could do. “It” might be able to do virtually anything we can think of, and much more than that, given “its” vastly greater “intelligence”.

The problem with this argument is that it again rests on a vague notion of “intelligence” that this machine “has a lot of”. For what exactly is this “stuff” it has a lot of? Goal-achieving ability? If so, then, as we saw in the previous chapter, “intelligence” requires an enormous array of tools and tricks that entails much more than mere software. It cannot be condensed into anything we can identify as a single machine.

Claims of the sort that a “superintelligent machine” could just do this or that complex task are extremely vague, since the nature of this “superintelligent machine” is not accounted for, and neither are the plausible means by which “it” will accomplish the extraordinarily difficult – perhaps even impossible – task in question. Yet such claims are generally taken quite seriously nonetheless, the reason being that the vague notion of “intelligence” that they rest upon is taken seriously in the first place. This, I have tried to argue, is the cardinal mistake.

We cannot let a term like “superintelligence” provide carte blanche to make extraordinary claims or assumptions without a bare minimum of justification. I think Bostrom’s book Superintelligence is an example of this. Bostrom worries about a rapid “intelligence explosion” initiated by “an AI” throughout the book, yet offers very little in terms of arguments for why we should believe that such a rapid explosion is plausible (Hanson, 2014), not to mention what exactly it is that is supposed to explode (Hanson, 2010; 2011a).

No Singular Thing, No Grand Control Problem

The problem is that we talk about “intelligence” as though it were a singular thing; or, in the words of brain and AI researcher Jeff Hawkins, as though it were “some sort of magic sauce” (Hawkins, 2015). This is also what gives rise to the idea that “intelligence” can explode, because one of the things that this “intelligence” can do, if you have enough of it, is to produce more “intelligence”, which can in turn produce even more “intelligence”.

This stands in stark contrast to the view that “intelligence” – whether we talk about cognitive abilities in particular or goal-achieving abilities in general – is anything but singular in nature, and is instead the product of countless clever tricks and hacks built by a long process of testing and learning. On this latter view, there is no single master problem to crack for increasing “intelligence”, but rather just many new tricks and hacks we can discover. And finding these is essentially what we have always been doing in science and engineering.

Robin Hanson makes a similar point in relation to his skepticism of a “blank-slate AI mind-design” intelligence explosion:

Sure if there were a super mind theory that allowed vast mental efficiency gains all at once, but there isn’t. Minds are vast complex structures full of parts that depend intricately on each other, much like the citizens of a city. Minds, like cities, best improve gradually, because you just never know enough to manage a vast redesign of something with such complex inter-dependent adaptations.

(Hanson, 2010)

Rather than a concentrated center of capability that faces a grand control problem, what we see is a development of tools and abilities that are distributed throughout the larger economy. And we “control” – i.e. specify the function of – these tools, including software programs, gradually as we make them and put them to use in practice. The design of the larger system is thus the result of our solutions to many, comparatively small “control problems”. I see no compelling reason to believe that the design of the future will be any different.


See also Chimps, Humans, and AI: A Deceptive Analogy.

Chimps, Humans, and AI: A Deceptive Analogy

The prospect of smarter-than-human artificial intelligence (AI) is often presented and thought of in terms of a simple analogy: AI will stand in relation to us the way we stand in relation to chimps. In other words, AI will be qualitatively more competent and powerful than us, and its actions will be as inscrutable to humans as current human endeavors (e.g. science and politics) are to chimps.

My aim in this essay is to show that this is in many ways a false analogy. The difference in understanding and technological competence found between modern humans and chimps is, in an important sense, a zero-to-one difference that cannot be repeated.

How are humans different from chimps?

A common answer to this question is that humans are smarter. Specifically, at the level of our individual cognitive abilities, humans, with our roughly three times larger brains, are just far more capable.

This claim no doubt contains a large grain of truth, as humans surely do beat chimps in a wide range of cognitive tasks. Yet it is also false in some respects. For example, chimps have superior working memory compared to humans, and apparently also beat humans in certain video games, including games involving navigation in complex mazes.

But researchers who study human uniqueness actually provide some rather different, more specific answers to this question. If we focus on individual mental differences in particular, researchers have found that, crudely speaking, humans are different from chimps in three principal ways: 1) we can learn language, 2) we have a strong orientation toward social learning, and 3) we are highly cooperative (among our ingroup, compared to chimps).

These differences have in turn resulted in two qualitative differences in the abilities of humans and chimps in today’s world.

I. Symbolic language

The first is that we humans have acquired an ability to think and communicate in terms of symbolic language that represents elaborate concepts. We can learn about the deep history of life and the universe, as well as the likely future of the universe, including the fundamental limits to future space travel and future computations. Any educated human can learn a good deal about these things whereas no chimp can.

Note how this is truly a zero-to-one difference: no symbolic language versus an elaborate symbolic language through which knowledge can be represented and continually developed (see chapter 1 in Deacon, 1997). It is the difference between having no science of physics versus having an elaborate such science with which we can predict future events and put hard limits on future possibilities.

This zero-to-one difference cannot really be repeated. Given that we already have physical models that predict, say, the future motion of planets and the solar system to a fairly high degree of accuracy, the best one can do in this respect is to (slightly) improve the accuracy of these predictions. Such further improvements cannot be compared to going from zero physics to current physics.

The same point applies to our scientific understanding more generally: we currently have theories that work decently well at explaining most of the phenomena around us. And though one can significantly improve the accuracy and sophistication of many of these theories, any such further improvement would be much less significant than the qualitative leap from absolutely no conceptual models to the entire collection of models and theories we currently have.

For example, going from no understanding of evolution by natural selection to the elaborate understanding of biology we have today cannot be matched, in terms of qualitative and revolutionary leaps, by further refinements in biology. We have already mapped out the core basics of biology (in fact a great deal more than that), and this can only be done once.

This is not an original point. Robin Hanson has made essentially the same point in response to the notion that future machines will be “as incomprehensible to us as we are to goldfish”:

This seems to me to ignore our rich multi-dimensional understanding of intelligence elaborated in our sciences of mind (computer science, AI, cognitive science, neuroscience, animal behavior, etc.).

… the ability of one mind to understand the general nature of another mind would seem mainly to depend on whether that first mind can understand abstractly at all, and on the depth and richness of its knowledge about minds in general. Goldfish do not understand us mainly because they seem incapable of any abstract comprehension. …

It seems to me that human cognition is general enough, and our sciences of mind mature enough, that we can understand much about quite a diverse zoo of possible minds, many of them much more capable than ourselves on many dimensions.

Ramez Naam has argued similarly in relation to the idea that there will be some future time or intelligence that we are fundamentally unable to understand. He argues that our understanding of the future is growing rather than shrinking as time progresses, and that AI and other future technologies will not be beyond comprehension:

All of those [future technologies] are still governed by the laws of physics. We can describe and model them through the tools of economics, game theory, evolutionary theory, and information theory. It may be that at some point humans or our descendants will have transformed the entire solar system into a living information processing entity — a Matrioshka Brain. We may have even done the same with the other hundred billion stars in our galaxy, or perhaps even spread to other galaxies.

Surely that is a scale beyond our ability to understand? Not particularly. I can use math to describe to you the limits on such an object, how much computing it would be able to do for the lifetime of the star it surrounded. I can describe the limit on the computing done by networks of multiple Matrioshka Brains by coming back to physics, and pointing out that there is a guaranteed latency in communication between stars, determined by the speed of light. I can turn to game theory and evolutionary theory to tell you that there will most likely be competition between different information patterns within such a computing entity, as its resources (however vast) are finite, and I can describe to you some of the dynamics of that competition and the existence of evolution, co-evolution, parasites, symbiotes, and other patterns we know exist.

Chimps cannot understand human politics and science to a similar extent. Thus, the truth is that there is a strong disanalogy between the understanding chimps have of humans and the understanding that we humans — thanks to our conceptual tools — can have of any possible future intelligence (in physical and computational terms, say).

Note that the qualitative leap reviewed above was not one that happened shortly after human ancestors diverged from chimp ancestors. Instead, it was a much more recent leap that has been unfolding gradually since the first humans appeared, and which has continued to accelerate in recent centuries, as we have developed ever more advanced science and mathematics. In other words, this qualitative step has been a product of cultural evolution just as much as biological evolution. Early humans presumably had a roughly similar potential to learn modern language, science, mathematics, etc. But such conceptual tools could not be acquired in the absence of a surrounding culture able to teach these innovations.

Ramez Naam has made a similar point:

If there was ever a singularity in human history, it occurred when humans evolved complex symbolic reasoning, which enabled language and eventually mathematics and science. Homo sapiens before this point would have been totally incapable of understanding our lives today. We have a far greater ability to understand what might happen at some point 10 million years in the future than they would to understand what would happen a few tens of thousands of years in the future.

II. Cumulative technological innovation

The second zero-to-one difference between humans and chimps is that we humans build things. Not just that we build things, but that we refine our technology over time. After all, many non-human animals use tools in the form of sticks and stones, and some even shape primitive tools of their own. But only humans improve and build upon the technological inventions of their ancestors.

Consequently, humans are unique in expanding their abilities by systematically exploiting their environment, molding the things around them into ever more useful self-extensions. We have turned wildlands into crop fields; we have created technologies that can harvest energy — from oil, gas, wind, and sun — and we have built external memories far more reliable than our own, such as books and hard disks.

This is another qualitative leap that cannot be repeated: the step from having absolutely no cumulative technology to exploiting and optimizing our external environment toward our own ends. The step from having no external memory to having the current repository of stored human knowledge at our fingertips, and from harvesting absolutely no energy (other than through individual digestion) to collectively harvesting and using hundreds of quintillions of Joules every year.

To be sure, it is possible to improve on and expand these innovations. We can harvest greater amounts of energy, for example, and create even larger external memories. Yet these are merely quantitative differences, and humanity indeed continually makes such improvements each year. They are not zero-to-one differences that only a new species could bring about. And what is more, we know that the potential for making further technological improvements is, at least in many respects, quite limited.

Take energy efficiency as an example. Many of our machines and energy harvesting technologies have already reached a significant fraction of the maximally possible efficiency. For example, electric motors and pumps tend to have around 90 percent energy efficiency, and the best solar panels have an efficiency greater than 40 percent. So as a matter of hard physical limits, many of our technologies cannot be made orders of magnitude more efficient; in fact, a large number of them can at most be marginally improved.

In sum, we are unique in being the first species that systematically sculpted our surrounding environment and turned it into ever-improving tools, many of which have near-maximal efficiency. This step cannot be repeated, only expanded further.


Just like the qualitative leap in our symbolic reasoning skills, the qualitative leap in our ability to create technology and shape our environment emerged, not between chimps and early humans, but between early humans and today’s humans, as the result of a cultural process occurring over thousands of years. In fact, the two leaps have been closely related: our ability to reason and communicate symbolically has enabled us to create cumulative technological innovation. Conversely, our technologies have allowed us to refine our knowledge and conceptual tools, by enabling us to explore and experiment, which in turn made us able to build even better technologies with which we could advance our knowledge even further, and so on.

This, in a nutshell, is the story of the growth of human knowledge and technology, a story of recursive self-improvement (see Simler, 2019, “On scientific networks”). It is not really a story about the individual human brain per se. After all, the human brain does not accomplish much in isolation (nor is it the brain with the largest number of neurons; several species have more neurons in the forebrain). It is more a story about what happened between and around brains: in the exchange of information in networks of brains and in the external creations designed by them. A story made possible by the fact that the human brain is unique in being by far the most cultural brain of all, with its singular capacity to learn from and cooperate with others.

The range of human abilities is surprisingly wide

Another way in which an analogy to chimps is frequently drawn is by imagining an intelligence scale along which different species are ranked, such that, for example, we have “rats at 30, chimps at 60, the village idiot at 90, the average human at 98, and Einstein at 100”, and where future AI may in turn be ranked many hundreds of points higher than Einstein. According to this picture, it is not just that humans will stand in relation to AI the way chimps stand in relation to humans, but that AI will be far superior still. The human-chimp analogy is, on this view, a severe understatement of the difference between humans and future AI.

Such an intelligence scale may seem intuitively compelling, but how does it correspond to reality? One way to probe this question is to examine the range of human abilities in chess. The standard way to rank chess skills is with the Elo rating system, which is a good predictor of the outcomes of chess games between different players, whether human, digital, or otherwise.

A complete human beginner will have a rating of around 300, a novice around 800, while a rating in the range 2000-2199 is ranked as “Expert”. The highest rating ever achieved is 2882, by Magnus Carlsen.

How large is this range of chess skills in an absolute sense? Remarkably large, it turns out. For example, it took more than four decades from when computers were first able to beat a human chess novice (the 1950s), until a computer was able to beat the best human player (1997, officially). In other words, the span from novice to Kasparov corresponded to more than four decades of progress in hardware — i.e. a million times more computing power — and software. This alone suggests that the human range of chess skills is rather wide.

Yet the range seems even broader when we consider the upper bounds of chess performance. After all, the fact that it took computers decades to go from human novice to world champion does not mean that the best human is not still ridiculously far from the best a computer could be in theory. Surprisingly, however, this latter distance does in fact seem quite small. Estimates suggest that the best possible chess machine would have an Elo rating around 3600, which means that the relative distance between the best possible computer and the best human is only around 700 Elo points (the Elo rating is essentially a measure of relative distance; 700 Elo points corresponds to a winning percentage of around 1.5 percent for the losing player).
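
For readers who want the conversion, the standard Elo expectation formula translates a rating gap into an expected score (a minimal sketch; the specific ratings are the estimates cited in this essay, and the expected score counts a draw as half a win):

```python
def expected_score(rating_gap):
    """Expected score (win = 1, draw = 0.5) of a player rated `rating_gap` points below their opponent."""
    return 1.0 / (1.0 + 10.0 ** (rating_gap / 400.0))

print(expected_score(700))   # best human (~2882) vs. estimated best possible (~3600): ~0.017
print(expected_score(880))   # a 2000-rated "Expert" vs. the best human (~2882): ~0.006
print(expected_score(2580))  # a ~300-rated beginner vs. the best human: ~4e-07
```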

This implies that the distance between the best human (Carlsen) and a chess “Expert” (someone belonging to the top 5 percent of chess players) is similar to the distance between the best human and the best possible chess brain, while the distance between a human beginner and the best human is far greater (2500 Elo points). This stands in stark contrast to the intelligence scale outlined above, which would predict the complete opposite: the distance from a human novice to the best human should be comparatively small whereas the distance from the best human to the optimal brain should be the larger one by far.


It may be objected that chess is a bad example, and that it does not really reflect what is meant by the intelligence scale above. But the question is then what would be a better measure. After all, a similar story seems to apply to other games, such as shogi and go: the human range of abilities is surprisingly wide and the best players are significantly closer to optimal than they are to novice players.

In fact, one can argue that the objection should go in the opposite direction, as human brains are not built for chess, and hence we should expect even the best humans to be far from optimal at it. We should expect to be much closer to “optimal” at solving problems that are more important for our survival, such as social cognition and natural language processing — skills that most people are wired to master at super-Carlsen levels.

Regardless, the truth is that humans are mastering ever more “games”, literal as well as figurative ones, at optimal or near-optimal levels. Not because evolution “just so happened to stumble upon the most efficient way to assemble matter into an intelligent system”, but rather because it created a species able to make cultural and technological progress toward ever greater levels of competence.

The cultural basis of the human capability expansion

The intelligence scale outlined above misses two key points. First, human abilities are not a constant. Whether we speak of individual abilities (e.g. the abilities of elite chess players) or humanity’s collective abilities (e.g. building laptops and sending people to the moon), it is clear that our abilities have increased dramatically as our culture and technology have expanded.

Second, because human abilities are not a constant, the range of human abilities is far wider, in an absolute sense, than the intelligence scale outlined above suggests, as it has grown and still continues to grow over time.

Chess is a good example of this. Untrained humans and chimps have the same (non-)skill level at chess. Yet thanks to culture, some people can learn to master the game. A wealthy society can allow people to specialize in chess and make it possible for knowledge to accumulate in books and experts. Eventually, it enables learning from super-human chess engines, whose innovations we can adopt just as we do those of other humans.

And yet we humans expand our abilities to a much greater extent than the example of increased human chess abilities suggests, as we not only expand our abilities by stimulating our brains with progressively better forms of practice and information, but also by extending ourselves directly with technology. For example, we can all use a chess engine to find great chess moves for us. Our latest technologies enable us to accomplish ever-more tasks that no human could ever accomplish unaided.

Worth noting in this regard is that this self-extension process seems to have slowed down in recent decades, likely because we have already reaped most of the low-hanging fruit, and in some respects because it is simply impossible to improve things much further (we already mentioned energy efficiency as an example where we are getting close to the upper limits in many respects).

This suggests that not only is there not a qualitative leap similar to that between chimps and modern humans ahead of us, but that even a quantitative growth explosion, with relative growth rates significantly higher than what we have seen in the past, should not be our default expectation either (for some support for this claim, see “Peak growth might lie in the past” in Vinding, 2017).

Why this is relevant

The errors of the human-chimp analogy are worth highlighting for a few reasons. First, the analogy can lead us to overestimate how much everything will change with AI. It leads us to expect qualitative leaps of sorts that cannot be repeated.

Second, the human-chimp analogy makes us underestimate how much we currently know and are able to understand. To think that intelligent systems of the future will be as incomprehensible to us today as human affairs are to chimps is to underestimate how extensive and universal our current knowledge of the world in fact is — not just when it comes to physical and computational limits, but also in relation to general economic and game-theoretic principles. We know a good deal about economic growth, for example, and this knowledge has a lot to say about how we should expect future intelligent systems to grow. In particular, it suggests that local AI-FOOM growth is unlikely.

The analogy can thus have an insidious influence by making us feel like current data and trends cannot be trusted much, because look how different humans are from chimps, and look how puny the human brain is compared to ultimate limits. I think this is exactly the wrong way to think about the future. We should base our expectations on a deep study of past trends, including the actual evolution of human competences — not simple analogies.

Relatedly, the human-chimp analogy is also relevant in that it can lead us to grossly overestimate the probability of an AI-FOOM scenario. That is, if we get the story about the evolution of human competences so wrong that we think the differences we observe today between chimps and modern humans reduce mostly to a story about changes in individual brains, then we are likely to have similarly inaccurate expectations about what comparable innovations in some individual machine are able to effect on their own.

If the human-chimp analogy leads us to (marginally) overestimate the probability of a FOOM scenario, it may nudge us toward focusing too much on some single, concentrated future thing that we expect to be all-important: the AI that suddenly becomes qualitatively more competent than humans. In effect, the human-chimp analogy can lead us to neglect broader factors, such as cultural and institutional developments.

Note that the above is by no means a case for complacency about risks from AI. It is important that we get a clear picture of such risks, and that we allocate our resources accordingly. But this requires us to rely on accurate models of the world. If we overemphasize one set of risks, we are by necessity underemphasizing others.

The future of growth: near-zero growth rates

First written: Jul. 2017; Last update: May 2020.

Exponential growth is a common pattern found throughout nature. Yet it is also a pattern that tends not to last, as growth rates tend to decline sooner or later.

In biology, this pattern of exponential growth that eventually tapers off is found in everything from the development of individual bodies — for instance, in the growth of humans, which levels off in the late teenage years — to population sizes.

One may of course be skeptical that this general trend will also apply to the growth of our technology and economy at large, as innovation seems to continually postpone our clash with the ceiling, yet it seems inescapable that it must. For in light of what we know about physics, we can conclude that exponential growth of the kinds we see today, in technology in particular and in our economy more generally, must come to an end, and do so relatively soon.

Limits to growth

Physical limits to computation and Moore’s law

One reason we can make this assertion is that there are theoretical limits to computation. As physicist Seth Lloyd’s calculations show, a continuation of Moore’s law — in its most general formulation: “the amount of information that computers are capable of processing and the rate at which they process it doubles every two years” — would imply that we hit the theoretical limits of computation within 250 years:

If, as seems highly unlikely, it is possible to extrapolate the exponential progress of Moore’s law into the future, then it will only take two hundred and fifty years to make up the forty orders of magnitude in performance between current computers that perform 10^10 operations per second on 10^10 bits and our one kilogram ultimate laptop that performs 10^51 operations per second on 10^31 bits.
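
As a rough sanity check of that figure (a minimal sketch; the two-year doubling time is the generalized Moore’s law stated above, and Lloyd’s 250 years is a round figure):

```python
import math

orders_of_magnitude = 40   # gap between current computers and the "ultimate laptop"
doubling_time_years = 2    # generalized Moore's law: capacity doubles every two years

doublings_needed = orders_of_magnitude * math.log2(10)  # ~133 doublings
years_needed = doublings_needed * doubling_time_years   # ~266 years, i.e. roughly 250
print(round(doublings_needed), round(years_needed))
```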

Similarly, physicists Lawrence Krauss and Glenn Starkman have calculated that, even if we factor in colonization of space at the speed of light, this doubling of processing power cannot continue for more than 600 years in any civilization:

Our estimate for the total information processing capability of any system in our Universe implies an ultimate limit on the processing capability of any system in the future, independent of its physical manifestation and implies that Moore’s Law cannot continue unabated for more than 600 years for any technological civilization.

In a more recent lecture and a subsequent interview, Krauss said that the absolute limit for the continuation of Moore’s law, in our case, would be reached in less than 400 years (the discrepancy — between the numbers 400 and 600 — is at least in part because Moore’s law, in its most general formulation, has played out for more than a century in our civilization at this point). And, as both Krauss and Lloyd have stressed, these are ultimate theoretical limits, resting on assumptions that are unlikely to be met in practice, such as expansion at the speed of light. How long Moore’s law can actually continue, given both engineering and economic constraints, is likely significantly less. Indeed, we are already approaching the physical limits of the paradigm that Moore’s law has been riding on for more than 50 years — silicon transistors, the only paradigm that Gordon Moore was talking about originally — and it is not clear whether other paradigms will be able to take over and keep the trend going.

Limits to the growth of energy use

Physicist Tom Murphy has calculated a similar limit for the growth of the energy consumption of our civilization. Based on the observation that the energy consumption of the United States has increased fairly consistently with an average annual growth rate of 2.9 percent over the last 350-odd years (although the growth rate appears to have slowed down in recent times and has stayed stably below 2.9 percent since c. 1980), Murphy proceeds to derive the limits for the continuation of similar energy growth. He does this, however, by assuming an annual growth rate of “only” 2.3 percent, which conveniently results in an increase of the total energy consumption by a factor of ten every 100 years. If we assume that we will continue expanding our energy use at this rate by covering Earth with solar panels, this would, on Murphy’s calculations, imply that we will have to cover all of Earth’s land with solar panels in less than 350 years, and all of Earth, including the oceans, in 400 years.

Beyond that, assuming that we could capture all of the energy from the sun by surrounding it in solar panels, the 2.3 percent growth rate would come to an end within 1,350 years from now. And if we go further out still, to capture the energy emitted from all the stars in our galaxy, we find that this growth rate must hit the ceiling and become near-zero within 2,500 years (of course, the limit of the physically possible must be hit earlier, indeed more than 500 years earlier, as we cannot traverse our 100,000-light-year-wide Milky Way in only 2,500 years).
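
The arithmetic behind these timescales is easy to sketch (a minimal sketch; the present consumption rate of roughly 17 TW and the astronomical figures are rough assumed inputs, not Murphy’s exact numbers):

```python
import math

growth_rate = 0.023                  # assumed annual growth rate in energy use
current_power = 1.7e13               # ~17 TW: rough current world energy consumption rate (assumed)
solar_output = 3.8e26                # total power output of the sun, in watts
galaxy_output = solar_output * 1e11  # ~100 billion sun-like stars (rough)

def years_to_reach(target):
    """Years of steady exponential growth needed to go from current_power to target."""
    return math.log(target / current_power) / math.log(1 + growth_rate)

print((1 + growth_rate) ** 100)              # ~9.7: a factor of roughly ten per century
print(round(years_to_reach(solar_output)))   # ~1,350 years to require the sun's entire output
print(round(years_to_reach(galaxy_output)))  # ~2,465 years: the ~2,500-year figure above
```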

One may suggest that alternative sources of energy might change this analysis significantly, yet, as Murphy notes, this does not seem to be the case:

Some readers may be bothered by the foregoing focus on solar/stellar energy. If we’re dreaming big, let’s forget the wimpy solar energy constraints and adopt fusion. The abundance of deuterium in ordinary water would allow us to have a seemingly inexhaustible source of energy right here on Earth. We won’t go into a detailed analysis of this path, because we don’t have to. The merciless growth illustrated above means that in 1400 years from now, any source of energy we harness would have to outshine the sun.

Essentially, keeping up the annual growth rate of 2.3 percent by harnessing energy from matter not found in stars would force us to make such matter hotter than stars themselves. We would have to create new stars of sorts, and, even if we assume that the energy required to create such stars is less than the energy gained, such an endeavor would quickly run into limits as well. For according to one estimate, the total mass of the Milky Way, including dark matter, is only 20 times greater than the mass of its stars. Assuming a 5:1 ratio of dark matter to ordinary matter, this implies that there is only about 2.3 times as much ordinary non-stellar matter as there is stellar matter in our galaxy. Thus, even if we could convert all this matter into stars without spending any energy and harvest the resulting energy, this would only give us about 50 years more of keeping up with the annual growth rate of 2.3 percent.1
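
To make the arithmetic behind this estimate explicit (a minimal sketch under the stated assumptions):

```python
import math

total_to_stellar = 20    # total Milky Way mass (incl. dark matter) ~ 20x its stellar mass
dark_to_ordinary = 5     # assumed 5:1 ratio of dark matter to ordinary matter
growth_rate = 0.023

ordinary = total_to_stellar / (dark_to_ordinary + 1)  # ~3.3x stellar mass of ordinary matter
non_stellar_ordinary = ordinary - 1                   # ~2.3x stellar mass sits outside of stars

# Turning all of that gas and dust into stars would raise the total stellar output
# by a factor of ~3.3, which a 2.3 percent annual growth rate uses up in roughly 50 years.
extra_years = math.log(ordinary) / math.log(1 + growth_rate)
print(round(non_stellar_ordinary, 1), round(extra_years))  # 2.3 53
```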

Limits derived from economic considerations

Similar conclusions to the ones drawn above for computation and energy also seem to follow from calculations of a more economic nature. For, as economist Robin Hanson has argued, projecting present economic growth rates into the future also leads to a clash against fundamental limits:

Today we have about ten billion people with an average income about twenty times subsistence level, and the world economy doubles roughly every fifteen years. If that growth rate continued for ten thousand years[,] the total growth factor would be 10^200.

There are roughly 10^57 atoms in our solar system, and about 10^70 atoms in our galaxy, which holds most of the mass within a million light years. So even if we had access to all the matter within a million light years, to grow by a factor of 10^200, each atom would on average have to support an economy equivalent to 10^140 people at today’s standard of living, or one person with a standard of living 10^140 times higher, or some mix of these.
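
A quick check of the arithmetic in this passage (a minimal sketch using the round numbers quoted):

```python
doubling_time_years = 15
people_today = 1e10       # ~ten billion people at today's living standard
atoms_in_galaxy = 1e70

growth_factor_10k = 2 ** (10_000 / doubling_time_years)
print(f"{growth_factor_10k:.0e}")  # ~5e+200, i.e. on the order of 10^200

# Spread over ~10^70 atoms, such an economy amounts to roughly 10^140
# "people at today's standard of living" per atom.
print(f"{people_today * growth_factor_10k / atoms_in_galaxy:.0e}")  # ~5e+140
```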

Indeed, current growth rates would “only” have to continue for three thousand years before each atom in our galaxy would have to support an economy equivalent to a single person living at today’s living standard, which already seems rather implausible (not least because we can only access a tiny fraction of “all the matter within a million light years” in three thousand years). Hanson does not, however, expect the current growth rate to remain constant, but instead, based on the history of growth rates, expects a new growth mode where the world economy doubles within 15 days rather than 15 years:

If a new growth transition were to be similar to the last few, in terms of the number of doublings and the increase in the growth rate, then the remarkable consistency in the previous transitions allows a remarkably precise prediction. A new growth mode should arise sometime within about the next seven industry mode doublings (i.e., the next seventy years) and give a new wealth doubling time of between seven and sixteen days.

And given this more than a hundred times greater growth rate, the net growth that would take 10,000 years to accomplish given our current growth rate (cf. Hanson’s calculation above) would now take less than a century to reach, while growth otherwise requiring 3,000 years would require less than 30 years. So if Hanson is right, and we will see such a shift within the next seventy years, what seems to follow is that we will reach the limits of economic growth, or at least reach near-zero growth rates, within a century or two. Such a projection is also consistent with the physically derived limits of the continuation of Moore’s law; not that economic growth and Moore’s law are remotely the same, yet they are no doubt closely connected: economic growth is largely powered by technological progress, of which Moore’s law has been a considerable subset in recent times.
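
And the timescales mentioned here follow from the same numbers (a minimal sketch; 15 days per doubling is the round figure used above):

```python
doubling_time_years = 15

# Doublings needed for the growth factors discussed above:
doublings_10k = 10_000 / doubling_time_years  # ~667 doublings (growth factor ~10^200)
doublings_3k = 3_000 / doubling_time_years    # 200 doublings (growth factor ~10^60,
                                              # i.e. roughly one person per atom in the galaxy)

# At a doubling time of 15 days instead of 15 years:
days_per_doubling = 15
print(round(doublings_10k * days_per_doubling / 365))  # ~27 years: "less than a century"
print(round(doublings_3k * days_per_doubling / 365))   # ~8 years: "less than 30 years"
```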

The conclusion we reach by projecting past growth trends in computing power, energy, and the economy is the same: our current growth rates cannot go on forever. In fact, they will have to decline to near-zero levels very soon on a cosmic timescale. Given the physical limits to computation, and hence, ultimately, to economic growth, we can conclude that we must be close to the point where peak relative growth in our economy and our ability to process information occurs — that is, the point where this growth rate is the highest in the entire history of our civilization, past and future.

Peak growth might lie in the past

This is not, however, to say that this point of maximum relative growth necessarily lies in the future. Indeed, in light of the declining economic growth rates we have seen over the last few decades, it cannot be ruled out that we are now already past the point of “peak economic growth” in the history of our civilization, with the highest growth rates having occurred around 1960-1980, cf. these declining growth rates and this essay by physicist Theodore Modis. This is not to say that we most likely are, yet it seems that the probability that we are is non-trivial.

A relevant data point here is that the global economy has seen three doublings since 1965, when the annual growth rate was around six percent, and yet the annual growth rate today — around 3 percent — is only a little over half of what it was those three doublings ago, and has stayed stably below it. In the entire history of economic growth, this seems unprecedented, suggesting that we may already be on the other side of the highest growth rates we will ever see. For up until this point, three doublings of the economy have, rare fluctuations aside, been accompanied by an increase in the annual growth rate.

And this “past peak growth” hypothesis looks even stronger if we look at 1955, with a growth rate of a little less than six percent and a world product of 5,430 billion 1990 U.S. dollars, which, doubled four times, gives just under 87,000 billion — about where we should expect today’s world product to be. Yet throughout the history of our economic development, four doublings have meant a clear increase in the annual growth rate, at least in terms of the underlying trend; not a stable decrease of almost 50 percent. This tentatively suggests that we should not expect to see growth rates significantly higher than those of today sustained in the future.

A hypothetical model: roughly symmetric growth rates

If we assume a model of the growth of the global economy where the annual growth rate is roughly symmetrical around the time the growth rate was at its global maximum, and then assume that this global maximum occurred around 1965, this means that we should expect the annual growth rate three doublings earlier, c. 1900, to be the same as the annual growth rate three doublings later, c. 2012. What do we observe? Three doublings earlier it was around 2.5 percent, while it was around 3.5 percent three doublings later, at least according to one source (although other sources actually do put the number at around 2.5 percent). Not a clear match, nor a clear falsification.

Yet if we look at the growth rates of advanced economies around 2012, we find that the growth rate is actually significantly lower than 2.5 percent, namely 1.2-2.0 percent. And given that less developed economies are expected to grow significantly faster than more developed ones, as the more advanced economies have paved the way and made high-hanging fruits more accessible, the (already not so big) 2.5 vs. 3.5 percent mismatch could be due to this gradually diminishing catch-up effect. Indeed, if we compare advanced economies today with advanced economies c. 1900, we find that the growth rate was significantly higher back then,3 suggesting that the symmetrical model may in fact overestimate current and future growth if we look only at advanced economies.4

Could we be past peak growth in science and technology?

That peak growth lies in the past may also be true of technological progress in particular, or at least of many forms of technological progress, including the progress in computing power tracked by Moore’s law, where the growth rate appears to have been highest around 1990-2005 and to have been in decline since, cf. this article and the first graphs found here and here. Similarly, various sources of data and proxies tracking the number of scientific articles published and references cited over time also suggest that we could be past peak growth in science as well, at least in many fields when evaluated based on such metrics, with peak growth seeming to have been reached around 2000-2010.

Yet again, these numbers — those tracking economic, technological, and scientific progress — are of course closely connected, as growth in each of these respects contributes to, and is even part of, growth in the others. Indeed, one study found the doubling time of the total number of scientific articles in recent decades to be 15 years, corresponding to an annual growth rate of 4.7 percent, strikingly similar to the growth rate of the global economy in recent decades. Thus, declining growth rates in our economy, technology, and science alike cannot be considered wholly independent sources of evidence that growth rates are now declining for good. We can by no means rule out that growth rates might increase in all these areas in the future — although, as we saw above with respect to the limits of Moore’s law and economic progress, such an increase, if it is going to happen, must be imminent if current growth rates remain relatively stable.

Absolute and relative growth

The economic “peak growth” discussed above relates to relative growth, not absolute growth. These are worth distinguishing. For in terms of absolute growth, annual growth is significantly higher today than it was in the 1960s, when the greatest relative growth to date occurred. The global economy grew by about half a trillion 1990 US dollars each year in the sixties, whereas it grows by about two trillion now. So in this absolute sense, we are seeing significantly more growth today than we did 50 years ago, although we now have significantly lower growth rates.

If we assume the model with symmetric growth rates mentioned above and make a simple extrapolation based on it, what follows is that our time is also a special one when it comes to absolute annual growth. The picture we get is the following (based on an estimate of past growth rates from economic historian J. Bradford DeLong):

| Year | World GDP (in trillions) | Annual growth rate (percent) | Absolute annual growth (in trillions) |
|------|--------------------------|------------------------------|---------------------------------------|
| 920  | 0.032 | 0.13 | 0.00004 |
| 1540 | 0.065 | 0.25 | 0.0002 |
| 1750 | 0.13  | 0.5  | 0.0007 |
| 1830 | 0.27  | 1    | 0.003 |
| 1875 | 0.55  | 1.8  | 0.01 |
| 1900 | 1.1   | 2.5  | 0.03 |
| 1931 | 2.3   | 3.8  | 0.09 |
| 1952 | 4.6   | 4.9  | 0.2 |
| 1965 | 9.1   | 5.9  | 0.5 |
| 1980 | 18    | 4    | 0.7 |
| 1997 | 36    | 3.5  | 1.3 |
| 2012 | 72    | 3    | 2.2 |

 

[Graph: the annual growth rate at each doubling over the last thousand years.]

Predicted values given roughly symmetric growth rates around 1965 (mirroring growth rates above):

| Year | World GDP (in trillions) | Annual growth rate (percent) | Absolute annual growth (in trillions) |
|------|--------------------------|------------------------------|---------------------------------------|
| 2037 | 144  | 1.8  | 2.6 |
| 2082 | 288  | 1    | 2.9 |
| 2162 | 576  | 0.5  | 2.9 |
| 2372 | 1152 | 0.25 | 2.9 |
| 2992 | 2304 | 0.13 | 3.0 |
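
One way to generate such a projection (a minimal sketch; it assumes that the k-th doubling after the 1965 peak takes as long, and ends at the same growth rate, as the k-th doubling before the peak, which reproduces the numbers above):

```python
# Historical doublings of world GDP before the 1965 peak (year reached, growth rate in percent),
# taken from the table above:
pre_peak = [(920, 0.13), (1540, 0.25), (1750, 0.5), (1830, 1), (1875, 1.8),
            (1900, 2.5), (1931, 3.8), (1952, 4.9)]

year, gdp = 2012, 72  # starting point: three doublings after the 1965 peak

for k in range(4, len(pre_peak) + 1):          # 4th, 5th, ... doubling after the peak
    mirror_year, mirror_rate = pre_peak[-k]    # k-th doubling before the peak
    year += pre_peak[-k + 1][0] - mirror_year  # the mirrored doubling takes as long as it did historically
    gdp *= 2
    print(year, gdp, mirror_rate, round(gdp * mirror_rate / 100, 1))
    # -> 2037 144 1.8 2.6
    #    2082 288 1 2.9
    #    2162 576 0.5 2.9
    #    2372 1152 0.25 2.9
    #    2992 2304 0.13 3.0
```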

 

We see that the absolute annual growth in GDP seems to follow an s-curve with an inflection point right about today: the period from 1997 to 2012 saw the biggest jump in absolute annual growth over a single doubling ever, an increase of about 0.9 trillion, from roughly 1.3 to 2.2 trillion.

It is worth noting that economist Robert Gordon predicts growth rates over the next few decades similar to those of the model above, as do various other estimates of the future of economic growth by economists. In contrast, engineer Paul Daugherty and economist Mark Purdy predict higher growth rates due to the effects of AI on the economy, yet the annual growth rates they predict for 2035 are still only around three percent for most of the developed economies they looked at, roughly at the same level as the current growth rate of the global economy. On a related note, economist William Nordhaus has attempted an economic analysis of whether we are approaching an economic singularity, in which he concludes, based on various growth models, that we do not appear to be, although he does not rule out that an economic singularity, i.e. significantly faster economic growth, might happen eventually.

Might recent trends make us bias-prone?

How might it be relevant that we may be past peak economic growth at this point? Could it mean that our expectations for the future are likely to be biased? Looking back toward the 1960s might be instructive in this regard. For when we look at our economic history up until the 1960s, it is not so strange that people made many unrealistic predictions about the future around that time. Not only might it have appeared natural to project the high growth rate at the time to remain constant into the future, which would have led to today’s global GDP being more than twice what it is; it might also have seemed reasonable to predict that growth rates would keep on rising even further. After all, that was what they had been doing consistently up until that point, so why should it not continue in the following decades, resulting in flying cars and conversing robots by the year 2000? Such expectations were not that unreasonable given the preceding economic trends.

The question is whether we might be similarly overoptimistic about future economic progress today given recent, possibly unique, growth trends, specifically the unprecedented increase in absolute annual growth that we have seen over the past two decades — cf. the increase of about 0.9 trillion mentioned above. The same may apply to the trends in scientific and technological progress cited above, where peak growth in many areas appears to have happened in the period 1990-2010, meaning that we could now be at a point where we are disposed to being overoptimistic about further progress.

Yet, again, it is highly uncertain at this point whether growth rates, of the economy in general and of progress in technology and science in particular, will increase again in the future. Future economic growth may not conform well to the model with roughly symmetric growth rates around the 1960s, although the model certainly deserves some weight. All we can say for sure is that growth rates must become near-zero relatively soon. What the path toward that point will look like remains an open question. We could well be in the midst of a temporary decline in growth rates that will be followed by growth rates significantly greater than those of the 1960s, cf. the new growth mode envisioned by Robin Hanson.5

Implications: this is an extremely special time

Applying the mediocrity principle, we should not expect to live in a highly exceptional time. Yet, in light of the facts about the ultimate limits to growth seen above, it is clear that we do: we are living in the childhood of civilization, where there is still rapid growth, with doublings occurring within a couple of decades. If civilization persists with similar growth rates, it will soon become a grown-up with near-zero relative growth. And it will then look back at our time — today plus or minus a couple of centuries, most likely — as the one in which growth rates were by far the highest in its entire history, a history that may span more than a trillion years.

It seems that a few things follow from this. First, more than just being the time where growth rates are the highest, this may also, for that very reason, be the time where individuals can influence the future of civilization the most. In other words, this may be the time where the outcome of the future is most sensitive to small changes, as it seems plausible, although far from clear, that small changes in the trajectory of civilization are most significant when growth rates are highest. An apt analogy might be a psychedelic balloon with fluctuating patterns on its surface, where the fluctuations that happen to occur while we blow up the balloon will themselves be blown up and leave their mark in a way that fluctuations occurring before and after this critical growth period will not (just as quantum fluctuations in the early universe were blown up during cosmic expansion, and thereby in large part determined the large-scale structure of the universe today). Similarly, it seems much harder to cause changes across all of civilization once it spans countless star systems than it does today.

That being said, it is not obvious that small changes — in our actions, say — are more significant in this period, where growth rates are many orders of magnitude higher than at any other time. It could also be that such changes are most consequential when absolute growth is at its highest. Or perhaps when it is smallest, at least as we go backwards in time, since there were far fewer people back when growth rates were orders of magnitude lower than today, and hence any given individual comprised a much greater fraction of all individuals than an individual does today.

Still, we may well find ourselves in a period where we are uniquely positioned to make irreversible changes that will echo down throughout the entire future of civilization.6 To the extent that we are, this should arguably lead us to update toward trying to influence the far future rather than the near future. More than that, if it does hold true that the time where the greatest growth rates occur is indeed the time where small changes are most consequential, this suggests that we should increase our credence in the simulation hypothesis. For if realistic sentient simulations of the past become feasible at some point, the period where the future trajectory of civilization seems the most up for grabs would seem an especially relevant one to simulate and learn more about. However, one can also argue that the sheer historical uniqueness of our current growth rates alone, regardless of whether this is a time where the fate of our civilization is especially volatile, should lead us to increase this credence, as such uniqueness may make it a more interesting time to simulate, and because being in a special time in general should lead us to increase our credence in the simulation hypothesis (see for instance this talk for a case for why being in a special time makes the simulation hypothesis more likely).7

On the other hand, one could also argue that imminent near-zero growth rates, along with the weak indications that we may now be past peak growth in many respects, provide a reason to lower our credence in the simulation hypothesis, as these observations suggest that the ceiling for what will be feasible in the future may be lower than we naively expect in light of today’s high growth rates. And thus, one could argue, it should make us more skeptical of the central premise of the simulation hypothesis: that there will be (many) ancestor simulations in the future. To me, the consideration in favor of increased credence seems stronger, although it does not significantly move my overall credence in the hypothesis, as there are countless other factors to consider.8


Appendix: Questioning our assumptions

Caspar Oesterheld pointed out to me that it might be worth meditating on how confident we can be in these conclusions given that apparently solid predictions concerning the ultimate limits to growth have been made before, yet quite a few of these turned out to be wrong. Should we not be open to the possibility that the same might be true of (at least some of) the limits we reviewed in the beginning of this essay?

Could our understanding of physics be wrong?

One crucial difference to note is that those failed predictions rested on assumptions — e.g. about the amount of natural resources and food that would be available — that seem far more questionable than the assumption underlying the physics-based predictions we have reviewed here: namely, that our apparently well-established physical laws and measurements are indeed valid, or at least roughly so. The epistemic status of this assumption is a lot more solid, to put it mildly. This is not to say, however, that we should not maintain some degree of doubt as to whether the assumption is correct (I would argue that we always should); it just seems that this degree of doubt should be quite low.

Yet, to continue the analogy above, what went wrong with the aforementioned predictions was not so much that limits did not exist, but rather that humans found ways of circumventing them through innovation. Could the same perhaps be the case here? Could we perhaps some day find ways of deriving energy from dark energy or some other yet unknown source, even though physicists seem skeptical? Or could we, as Ray Kurzweil speculates, access more matter and energy by finding ways of travelling faster than light, or by finding ways of accessing other parts of our notional multiverse? Might we even become able to create entirely new ones? Or to eventually rewrite the laws of nature as we please? (Perhaps by manipulating our notional simulators?) Again, I do not think any of these possibilities can be ruled out completely. Indeed, some physicists argue that the creation of new pocket universes might be possible, not in spite of “known” physical principles (or rather theories that most physicists seem to believe, such as inflationary theory), but as a consequence of them. However, it is not clear that anything from our world would be able to expand into, or derive anything from, the newly created worlds on any of these models (which of course does not mean that we should not worry about the emergence of such worlds, or the fate of other “worlds” that we perhaps could access).

All in all, the speculative possibilities raised above seem unlikely, yet they cannot be ruled out for sure. The limits we have reviewed here thus represent a best estimate given our current, admittedly incomplete, understanding of the universe in which we find ourselves, not an absolute guarantee. However, it should be noted that this uncertainty cuts both ways, in that the estimates we have reviewed could also overestimate the limits to various forms of growth by countless orders of magnitude.

Might our economic reasoning be wrong?

Less speculatively, I think, one can also question the validity of our considerations about the limits of economic progress. I argued that it seems implausible that we in three thousand years could have an economy so big that each atom in our galaxy would have to support an economy equivalent to a single person living at today’s living standard. Yet could one not argue that the size of the economy need not depend on matter in this direct way, and that it might instead depend on the possible representations that can be instantiated in matter? If economic value could be mediated by the possible permutations of matter, our argument about a single atom’s need to support entire economies might not have the force it appears to have. For instance, there are far more legal positions on a Go board than there are atoms in the visible universe, and that’s just legal positions on a Go board. Perhaps we need to be more careful when thinking about how atoms might be able to create and represent economic value?
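As a quick sanity check of that claim (my own illustration; the figure of roughly 10^80 atoms in the visible universe is a common order-of-magnitude estimate, and the number of legal Go positions has reportedly been computed to be about 2 x 10^170):

# Even a crude upper bound on Go positions (each of the 361 points empty, black,
# or white, ignoring legality) dwarfs the number of atoms in the visible universe.
from math import log10

upper_bound = 3 ** (19 * 19)   # about 1.7e172 board configurations
atoms = 10 ** 80               # rough order-of-magnitude estimate (assumption)

print(round(log10(upper_bound)))        # ~172, i.e. about 10^172 configurations
print(round(log10(upper_bound)) - 80)   # ~92 orders of magnitude more than atoms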

It seems like there is a decent point here. Still, I think economic growth at current rates is doomed. First, it seems reasonable to be highly skeptical of the notion that mere potential states could have any real economic value. Today at least, what we value and pay for is not such “permutation potential”, but the actual state of things, which is as true of the digital realm as of the physical. We buy and stream digital files such as songs and movies because of the actual states of these files, while their potential states mean nothing to us. And even when we invest in something we think has great potential, like a start-up, the value we expect to be realized is still ultimately one that derives from its actual state, namely the actual state we hope it will assume, not its number of theoretically possible permutations.

It is not clear why this would change, or how it could. After all, the number of ways one can put all the atoms in the galaxy together is the same today as it will be ten thousand years from now. Organizing all these atoms into a single galactic supercomputer would only seem to increase the value of their actual state.

Second, economic growth still seems tightly constrained by the shackles of physical limitations. For it seems inescapable that economies, of any kind, are ultimately dependent on the transfer of resources, whether these take the form of information or concrete atoms. And such transfers require access to energy, the growth of which we know to be constrained, as is true of the growth of our ability to process information. As these underlying resources that constitute the lifeblood of any economy stop growing, it seems unlikely that the economy can avoid this fate as well. (Tom Murphy touches on similar questions in his analysis of the limits to economic growth.)

Again, we of course cannot exclude that something crucial might be missing from these considerations. Yet the conclusion that economic growth rates will decline to near-zero levels relatively soon, on a cosmic timescale at least, still seems a safe bet in my view.

Acknowledgments

I would like to thank Brian Tomasik, Caspar Oesterheld, Duncan Wilson, Kaj Sotala, Lukas Gloor, Magnus Dam, Max Daniel, and Tobias Baumann for valuable comments and inputs. This essay was originally published at the website of the Foundational Research Institute, now the Center on Long-Term Risk. 


Notes

1. One may wonder whether there might not be more efficient ways to derive energy from the non-stellar matter in our galaxy than to convert it into stars as we know them. I don’t know, yet a friend of mine who does research in plasma physics and fusion says that he does not think one could, especially if we, as we have done here, disregard the energy required to clump the dispersed matter together so as to “build” the star, a process that may well take more energy than the star can eventually deliver.

The aforementioned paper by Lawrence Krauss and Glenn Starkman also contains much information about the limits of energy use, and in fact uses accessible energy as the limiting factor that bounds the amount of information processing any (local) civilization could do (they assume that the energy that is harvested is beamed back to a “central observer”).

2. And I suspect many people who have read about “singularity”-related ideas are overconfident, perhaps in part due to the comforting narrative and self-assured style of Ray Kurzweil, and perhaps due to wishful thinking about technological progress more generally.

3. According to one textbook, “Outside the European world, per capita incomes stayed virtually constant from 1700 to about 1950 […]”, implying that the global growth rate in 1900 was driven up by the most developed economies, which must thus have had a growth rate greater than 2.5 percent.

4. A big problem with this model is that it is already pretty much falsified by the data, at least when it comes to exact, as opposed to merely approximate, symmetry. For given symmetry in the growth rates around 1965, the time it takes for three doublings to occur should be the same in either direction, whereas the data shows that this is not the case: the three doublings leading up to 1965 took 65 years, while the three doublings after it took 47 years, a difference of 18 years, which is roughly the length of a full doubling period. One may be able to correct this discrepancy a tiny bit by moving the year of peak growth a bit further back, yet this cannot save the model. This lack of exact symmetry should reduce our credence in the symmetric model as a description of the underlying pattern of our economic growth, yet I do not think it fully discredits it. Rough symmetry still seems a decent first approximation to past growth rates, and deviations may in part be explainable by factors such as the high, yet relatively quickly diminishing, contribution to growth from developing economies.
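A minimal check of this asymmetry (my own illustration, using only the years from the table above):

# The three doublings leading up to 1965 took 65 years; the three after took 47.
years = [920, 1540, 1750, 1830, 1875, 1900, 1931, 1952, 1965, 1980, 1997, 2012]
peak = years.index(1965)

before = years[peak] - years[peak - 3]  # 1900 -> 1965
after = years[peak + 3] - years[peak]   # 1965 -> 2012
print(before, after, before - after)    # 65 47 18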

5. It should be noted, though, that Hanson by no means rules out that such a growth mode may never occur, and that we might already be past, or in the midst of, peak economic growth: “[…] it is certainly possible that the economy is approaching fundamental limits to economic growth rates or levels, so that no faster modes are possible […]”

6. The degree to which there is sensitivity to changes of course varies between different endeavors. For instance, natural science seems more convergent than moral philosophy, and thus its development is arguably less sensitive to the particular ideas of individuals working on it than the development of moral philosophy is.

7. One may then argue that this should lead us to update toward focusing more on the near future. This may be true. Yet should we update more toward focusing on the far future given our ostensibly unique position to influence it? Or should we update more toward focusing on the near future given increased credence in the simulation hypothesis? (Provided that we indeed do increase this credence, cf. the counter-consideration above.) In short, it mostly depends on the specific probabilities we assign to these possibilities. I myself happen to think the far future should dominate, as I assign the simulation hypothesis (as commonly conceived) a very small probability.

8. For instance, fundamental epistemological issues concerning how much one can infer based on impressions from a simulated world (which may only be your single mind) about a simulating one (e.g. do notions such as “time” and “memory” correspond to anything, or even make sense, in such a “world”?); the fact that the past cannot be simulated realistically, since we can only have incomplete information about a given physical state in the past (not only because we have no way to uncover all the relevant information, but also because we cannot possibly represent it all, even if we somehow could access it — for instance, we cannot faithfully represent the state of every atom in our solar system at any point in the past, as this would require too much information), and a simulation of the past that contains incomplete information would depart radically from how the actual past unfolded, as all of it has a non-negligible causal impact (even single photons, which, it appears, are detectable by the human eye), and this is especially true given that the vast majority of information would have to be excluded (both due to practical constraints on what can be recovered and what can be represented); whether conscious minds can exist on different levels of abstraction; etc.
