First published: Dec. 2017. Last update: Jun. 2022.
It seems to me that there is a great asymmetry between the attention devoted to arguments in favor of the plausibility of artificial intelligence FOOM/hard take-off scenarios and the attention paid to counter-arguments. This is not so strange given that there are widely publicized full-length books emphasizing the arguments in favor, such as Nick Bostrom’s Superintelligence and James Barrat’s Our Final Invention, while there seems to be no such book emphasizing the opposite. And people who are skeptical of hard take-off scenarios, and who think other things are more important to focus on, will of course tend to write books on those other, in their view more important, things. Consequently, they tend to devote only an essay or a few blog posts, rather than full-length books, to presenting their arguments. The purpose of this reading list is to try to correct this asymmetry a bit by pointing people toward some of these blog posts and essays, as well as other resources that present reasons to be skeptical of a hard take-off scenario.
I think it is important to get these arguments out there, as it seems to me that we otherwise risk forming an overly one-sided view of this issue, and, not least, risk overlooking other things that may be more important to focus on.
I should note that I do not necessarily agree with all claims and arguments made in the resources below; for example, Steven Pinker has made some poor arguments that risk making FOOM skepticism appear ill-informed. Yet I do think that each of the entries below makes at least some good points. I should also note that not all of the following authors rule out the possibility of a hard take-off; many merely consider other scenarios more likely.
Robin Hanson:
The Hanson-Yudkowsky AI-Foom Debate (co-authored with Eliezer Yudkowsky)
Yudkowsky vs Hanson — Singularity Debate (video)
Setting The Stage
AI Go Foom
Emulations Go Foom
Two Visions Of Heritage
What Core Argument?
Is The City-ularity Near?
The Betterness Explosion
Meet the new conflict, same as the old conflict: Comment on David Chalmers’ “The Singularity: A Philosophical Analysis”
When Is “Soon”?
A History Of Foom
Foom Debate, Again
I Still Don’t Get Foom
This Time Isn’t Different
How Different AGI Software?
Hanson on intelligence explosion, from Age of Em
Brains Simpler Than Brain Cells?
This AI Boom Will Also Bust
Foom Justifies AI Risk Efforts Now
How Deviant Recent AI Progress Lumpiness?
Robin Hanson on AI Skepticism (video)
How Lumpy AI Services?
Robin Hanson on AI Takeoff Scenarios – AI Go Foom? (video)
Decentralized Approaches to AI Presentations (video; also features Eric Drexler and Mark Miller)
Conversation with Robin Hanson
Why Not Wait On AI Risk?
Top Five Reasons ‘The Singularity’ Is A Misnomer
The Singularity is Further Than it Appears
Why AIs Won’t Ascend in the Blink of an Eye – Some Math
How sure are we about this AI stuff? (video)
Ben Garfinkel on scrutinising classic AI risk arguments
Does economic history point toward a singularity?
The Intelligence Explosion Is Happening Now
The Singularity Is Nonsense
Against the Singularity
We’re told to fear robots. But why do we think they’ll turn on us?
MIT AGI: AI in the Age of Reason (Steven Pinker) (video)
Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI
Reframing Superintelligence: Comprehensive AI Services as General Intelligence
Reframing Superintelligence (video)
Artificial Intelligence and Its Implications for Future Suffering
Artificial general intelligence as a continuation of existing software and societal trends
Problem Solved: Unfriendly AI
Reduction Considered Harmful
The Future of Artificial Intelligence – Up Next (video)
The Terminator Is Not Coming. The Future Will Thank Us.
The Singularity Institute’s Scary Idea (and Why I Don’t Buy It)
Superintelligence: fears, promises, and potentials
The Biointelligence Explosion (Extended Abstract)
Humans and Intelligent Machines: Co-Evolution, Fusion or Replacement?
Why the Singularity Cannot Happen
The impossibility of intelligence explosion
The Seven Deadly Sins of Predicting the Future of AI
J Storrs Hall:
Self-improving AI: an Analysis
Philippe Aghion, Benjamin F. Jones, & Charles I. Jones:
Artificial Intelligence and Economic Growth
Alessio Plebe & Pietro Perconti:
The slowdown hypothesis (extended abstract)
Singularity Meets Economy
The Maverick Nanny with a Dopamine Drip
Ethical Guidelines for A Superintelligence
Critique of Superintelligence (link to first post in a five-part series)
Likelihood of discontinuous progress around the development of AGI
Don’t Fear the Reaper: Refuting Bostrom’s Superintelligence Argument
The Singularity Is Far
Don’t Worry about Superintelligence
Superintelligence: The Idea That Eats Smart People
Timothy B. Lee:
Will artificial intelligence destroy humanity? Here are 5 reasons not to worry.
Future of AI 6. Discussion of ‘Superintelligence: Paths, Dangers, Strategies’
The Myth of a Superhuman AI
The Singularity Isn’t Near
AI Risk Critiques: Index (links to many articles)
John Brockman (editor):
Possible Minds: Twenty-Five Ways of Looking at AI
Architects of Intelligence: The truth about AI from the people building it
Reflections on Intelligence
Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique
Chimps, Humans, and AI: A Deceptive Analogy
Two biases relevant to expected AI scenarios
Some reasons not to expect a growth explosion
Short summary and review of Reflections on Intelligence by Kaj Sotala:
Disjunctive AI scenarios: Individual or collective takeoff?
Tech Luminaries Address Singularity
Perspectives on intelligence explosion
Not directly about the subject, but still relevant to read, in my opinion:
The Knowledge Illusion: Why We Never Think Alone
The Ascent of Man
The Evolution of Everything
The Secret of Our Success
Intellectuals and Society
The future of growth: near-zero growth rates