First published: Dec. 2017. Last update: May 2023.
It seems to me that there is a great asymmetry between the attention devoted to arguments in favor of the plausibility of artificial intelligence FOOM/hard takeoff scenarios and the attention paid to counterarguments. This is not so strange given that there are widely publicized full-length books emphasizing the arguments in favor, such as Nick Bostrom’s Superintelligence and James Barrat’s Our Final Invention, while there seems to be no comparable book emphasizing the opposite. And people who are skeptical of hard takeoff scenarios, and who think other things are more important to focus on, will naturally tend to write books on those other, in their view more important, things. Consequently, they tend to devote only an essay or a few blog posts, not full-length books, to presenting their arguments. The purpose of this reading list is to correct this asymmetry a bit by pointing people toward some of these blog posts and essays, as well as other resources that present reasons to be skeptical of a hard takeoff scenario.
I think it is important to get these arguments out there, as it seems to me that we otherwise risk holding an overly one-sided view of this issue, and, not least, overlooking other things that may be more important to focus on.
I should note that I do not necessarily agree with all claims and arguments made in the resources below. For example, Steven Pinker has made some poor arguments that risk making FOOM skepticism as a whole appear ill-informed. Yet I still think that each of the entries below makes at least some good points. I should also note that not all of the authors below consider a hard takeoff extremely unlikely; some merely consider other scenarios more likely.
Robin Hanson:
The Hanson-Yudkowsky AI-Foom Debate (co-authored with Eliezer Yudkowsky)
Yudkowsky vs Hanson — Singularity Debate (video)
Hanson on intelligence explosion, from Age of Em
Brains Simpler Than Brain Cells?
Foom Justifies AI Risk Efforts Now
How Deviant Recent AI Progress Lumpiness?
Robin Hanson on AI Skepticism (video)
Robin Hanson on AI Takeoff Scenarios – AI Go Foom? (video)
Decentralized Approaches to AI Presentations (video; also features Eric Drexler and Mark Miller)
Conversation with Robin Hanson
Robin Hanson on Predicting the Future of Artificial Intelligence (video)
Waiting for the Betterness Explosion (video)
The Optimistic Hansonian Singularity (podcast)
Ramez Naam:
Top Five Reasons ‘The Singularity’ Is A Misnomer
The Singularity is Further Than it Appears
Why AIs Won’t Ascend in the Blink of an Eye – Some Math
Ben Garfinkel:
How sure are we about this AI stuff? (video)
Ben Garfinkel on scrutinising classic AI risk arguments
Does economic history point toward a singularity?
Tim Tyler:
The Intelligence Explosion Is Happening Now
Steven Pinker:
We’re told to fear robots. But why do we think they’ll turn on us?
MIT AGI: AI in the Age of Reason (Steven Pinker) (video)
Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI
Eric Drexler:
Reframing Superintelligence: Comprehensive AI Services as General Intelligence
Reframing Superintelligence (video)
Brian Tomasik:
Artificial Intelligence and Its Implications for Future Suffering
Artificial general intelligence as a continuation of existing software and societal trends
Jacob Buckman:
Recursively Self-Improving AI Is Already Here
We Aren’t Close To Creating A Rapidly Self-Improving AI
Jeff Hawkins:
The Future of Artificial Intelligence – Up Next (video)
The Terminator Is Not Coming. The Future Will Thank Us.
Ben Goertzel:
The Singularity Institute’s Scary Idea (and Why I Don’t Buy It)
Superintelligence: fears, promises, and potentials
David Pearce:
The Biointelligence Explosion (Extended Abstract)
Humans and Intelligent Machines: Co-Evolution, Fusion or Replacement?
J Storrs Hall:
Self-improving AI: an Analysis
Monica Anderson:
James Fodor:
Critique of Superintelligence (link to first post in a five-part series)
A Critique of AI Takeover Scenarios
Theodore Modis:
Why the Singularity Cannot Happen
François Chollet:
The implausibility of intelligence explosion
Rodney Brooks:
The Seven Deadly Sins of Predicting the Future of AI
Katja Grace:
Counterarguments to the basic AI x-risk case
Philippe Aghion, Benjamin F. Jones, & Charles I. Jones:
Artificial Intelligence and Economic Growth
Alessio Plebe & Pietro Perconti:
The slowdown hypothesis (extended abstract)
Max More:
Richard Loosemore:
The Maverick Nanny with a Dopamine Drip
Ernest Davis:
Ethical Guidelines for A Superintelligence
AI Impacts:
Likelihood of discontinuous progress around the development of AGI
Sebastian Benthall:
Don’t Fear the Reaper: Refuting Bostrom’s Superintelligence Argument
Paul Christiano:
Takeoff speeds
Scott Aaronson:
The Singularity Is Far
Nicholas Agar:
Don’t Worry about Superintelligence
Maciej Cegłowski:
Superintelligence: The Idea That Eats Smart People
Timothy B. Lee:
Will artificial intelligence destroy humanity? Here are 5 reasons not to worry.
Neil Lawrence:
Future of AI 6. Discussion of ‘Superintelligence: Paths, Dangers, Strategies’
Kevin Kelly:
The Myth of a Superhuman AI
Paul Allen:
The Singularity Isn’t Near (co-authored with Mark Greaves)
Alexander Kruel:
AI Risk Critiques: Index (links to many articles)
John Brockman (editor):
Possible Minds: Twenty-Five Ways of Looking at AI
Martin Ford:
Architects of Intelligence: The truth about AI from the people building it
Magnus Vinding (me):
Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique
Chimps, Humans, and AI: A Deceptive Analogy
Some reasons not to expect a growth explosion
Two contrasting models of “intelligence” and future growth
Short summary and review of Reflections on Intelligence by Kaj Sotala
Kaj Sotala:
Disjunctive AI scenarios: Individual or collective takeoff?
Various:
Tech Luminaries Address Singularity
Perspectives on intelligence explosion
Not directly about the subject but still relevant, in my opinion: