A Contra AI FOOM Reading List

First published: Dec. 2017. Last update: Aug. 2020.

It seems to me that there is a great asymmetry between the attention devoted to arguments in favor of the plausibility of artificial intelligence FOOM/hard take-off scenarios and the attention paid to counter-arguments. This is not so strange given that there are widely publicized full-length books emphasizing the arguments in favor, such as Nick Bostrom’s Superintelligence and James Barrat’s Our Final Invention, while there seems to be no such book emphasizing the opposite. And people who are skeptical of hard take-off scenarios, and who think other things are more important to focus on, will naturally tend to write books on those other, in their view more important, things. Consequently, they devote only an essay or a few blog posts to presenting their arguments, not full-length books. The purpose of this reading list is to correct this asymmetry a bit by pointing people toward some of these blog posts and essays, as well as other resources that present reasons to be skeptical of a hard take-off scenario.

I think it is important to get these arguments out there, since otherwise we risk forming an overly one-sided view of this issue, and, not least, risk overlooking other things that may be more important to focus on.

I should note that I do not myself agree with every claim and argument made in these resources, though I do think each of them makes at least some good points. I should also note that not all of the authors below rule out the possibility of a hard take-off; some merely consider other scenarios more likely.

Robin Hanson:

Some Skepticism

The Hanson-Yudkowsky AI-Foom Debate (co-authored with Eliezer Yudkowsky)

Yudkowsky vs Hanson — Singularity Debate (video)

Setting The Stage

AI Go Foom

Emulations Go Foom

Wrapping Up

Two Visions Of Heritage

Distrusting Drama

What Core Argument?

Is The City-ularity Near?

The Betterness Explosion

Meet the new conflict, same as the old conflict: Comment on David Chalmers’ “The Singularity: A Philosophical Analysis”

Debating Yudkowsky

When Is “Soon”?

A History Of Foom

Foom Debate, Again

I Still Don’t Get Foom

Irreducible Detail

This Time Isn’t Different

How Different AGI Software?

Hanson on intelligence explosion, from Age of Em

Brains Simpler Than Brain Cells?

This AI Boom Will Also Bust

Foom Justifies AI Risk Efforts Now

How Deviant Recent AI Progress Lumpiness?

Robin Hanson on AI Skepticism (video)

How Lumpy AI Services?

Robin Hanson on AI Takeoff Scenarios – AI Go Foom? (video)

Decentralized Approaches to AI Presentations (video; also features Eric Drexler and Mark Miller)

Conversation with Robin Hanson

David Pearce:

The Biointelligence Explosion (Extended Abstract)

Humans and Intelligent Machines: Co-Evolution, Fusion or Replacement?

Ramez Naam:

Top Five Reasons ‘The Singularity’ Is A Misnomer

The Singularity is Further Than it Appears

Why AIs Won’t Ascend in the Blink of an Eye – Some Math

Theodore Modis:

Why the Singularity Cannot Happen

Brian Tomasik:

Artificial Intelligence and Its Implications for Future Suffering

Artificial general intelligence as a continuation of existing software and societal trends

J Storrs Hall:

Engineering Utopia

Self-improving AI: an Analysis

Monica Anderson:

Problem Solved: Unfriendly AI

Reduction Considered Harmful

Max More:

Singularity Meets Economy

Rodney Brooks:

The Seven Deadly Sins of Predicting the Future of AI

Eric Drexler:

Reframing Superintelligence: Comprehensive AI Services as General Intelligence

Reframing Superintelligence (video)

Richard Loosemore:

The Maverick Nanny with a Dopamine Drip

Ernest Davis:

Ethical Guidelines for A Superintelligence


Critique of Superintelligence (link to first post in a five-part series)

John Brockman (editor):

Possible Minds: Twenty-Five Ways of Looking at AI

Martin Ford:

Architects of Intelligence: The truth about AI from the people building it

AI Impacts:

Likelihood of discontinuous progress around the development of AGI

Philippe Aghion, Benjamin F. Jones, & Charles I. Jones:

Artificial Intelligence and Economic Growth

Ben Garfinkel:

How sure are we about this AI stuff? (video)

Ben Garfinkel on scrutinising classic AI risk arguments

Does economic history point toward a singularity?

Sebastian Benthall:

Don’t Fear the Reaper: Refuting Bostrom’s Superintelligence Argument

Paul Christiano:

Takeoff speeds

Scott Aaronson:

The Singularity Is Far

Jeff Hawkins:

The Future of Artificial Intelligence – Up Next (video)

The Terminator Is Not Coming. The Future Will Thank Us.

Ben Goertzel:

The Singularity Institute’s Scary Idea (and Why I Don’t Buy It)

Superintelligence: fears, promises, and potentials

Nicholas Agar:

Don’t Worry about Superintelligence

Steven Pinker:

We’re told to fear robots. But why do we think they’ll turn on us?

MIT AGI: AI in the Age of Reason (Steven Pinker) (video)

Maciej Cegłowski:

Superintelligence: The Idea That Eats Smart People

Timothy B. Lee:

Will artificial intelligence destroy humanity? Here are 5 reasons not to worry.

Neil Lawrence:

Future of AI 6. Discussion of ‘Superintelligence: Paths, Dangers, Strategies’

François Chollet:

The impossibility of intelligence explosion

Kevin Kelly:

The Myth of a Superhuman AI

Paul Allen:

The Singularity Isn’t Near

Alexander Kruel:

AI Risk Critiques: Index (links to many articles)

Alessio Plebe & Pietro Perconti:

The slowdown hypothesis (extended abstract)

Tim Tyler:

The Intelligence Explosion Is Happening Now

The Singularity Is Nonsense

Against the Singularity

Magnus Vinding:

Reflections on Intelligence

Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique

Chimps, Humans, and AI: A Deceptive Analogy

Short summary and review of Reflections on Intelligence by Kaj Sotala:

Disjunctive AI scenarios: Individual or collective takeoff?


Tech Luminaries Address Singularity

Perspectives on intelligence explosion

Not directly about the subject, but still relevant to read, in my opinion:

The Knowledge Illusion: Why We Never Think Alone

The Ascent of Man

The Evolution of Everything

The Secret of Our Success

Intellectuals and Society

The future of growth: near-zero growth rates
