First published: Dec. 2017. Last update: Nov. 2019.
It seems to me that there is a great asymmetry between the attention devoted to arguments for the plausibility of artificial intelligence FOOM/hard take-off scenarios and the attention paid to counter-arguments. This is not so strange, given that there are widely publicized full-length books emphasizing the arguments in favor, such as Nick Bostrom's Superintelligence and James Barrat's Our Final Invention, while there seems to be no comparable book emphasizing the opposite. And people who are skeptical of hard take-off scenarios, and who think other things are more important to focus on, will naturally tend to write books on those other, in their view more important, things. Consequently, they devote only an essay or a few blog posts to presenting their arguments, not full-length books. The purpose of this reading list is to correct this asymmetry a bit by pointing people toward some of these blog posts and essays, as well as other resources that present reasons to be skeptical of a hard take-off scenario.
I think it is important to get these arguments out there, as it seems to me that we otherwise risk having an overly one-sided view of this issue, and, not least, risk overlooking other things that may be more important to focus on.
I should note that I do not necessarily agree with all the claims and arguments made in these resources, though I do think they all make at least some good points. I should also note that not all of the following authors rule out the possibility of a hard take-off; some merely consider other scenarios more likely.
Robin Hanson:
The Hanson-Yudkowsky AI-Foom Debate (co-authored with Eliezer Yudkowsky)
Yudkowsky vs Hanson — Singularity Debate (YouTube video)
Hanson on intelligence explosion, from Age of Em
Brains Simpler Than Brain Cells?
Foom Justifies AI Risk Efforts Now
How Deviant Recent AI Progress Lumpiness?
Robin Hanson on AI Skepticism (YouTube video)
Robin Hanson on AI Takeoff Scenarios – AI Go Foom? (YouTube video)
Decentralized Approaches to AI Presentations (YouTube video; also features Eric Drexler and Mark Miller)
Conversation with Robin Hanson
David Pearce:
The Biointelligence Explosion (Extended Abstract)
Humans and Intelligent Machines: Co-Evolution, Fusion or Replacement?
Ramez Naam:
Top Five Reasons ‘The Singularity’ Is A Misnomer
The Singularity Is Further Than It Appears
Why AIs Won’t Ascend in the Blink of an Eye – Some Math
Theodore Modis:
Why the Singularity Cannot Happen
Brian Tomasik:
Artificial Intelligence and Its Implications for Future Suffering
Artificial general intelligence as a continuation of existing software and societal trends
J Storrs Hall:
Self-improving AI: an Analysis
Monica Anderson:
Max More:
Rodney Brooks:
The Seven Deadly Sins of Predicting the Future of AI
Eric Drexler:
Reframing Superintelligence: Comprehensive AI Services as General Intelligence
Reframing Superintelligence (YouTube video)
Richard Loosemore:
The Maverick Nanny with a Dopamine Drip
Ernest Davis:
Ethical Guidelines for a Superintelligence
“Fods12”:
Critique of Superintelligence (link to the first post in a five-part series)
John Brockman (editor):
Possible Minds: Twenty-Five Ways of Looking at AI
Martin Ford:
Architects of Intelligence: The truth about AI from the people building it
AI Impacts:
Likelihood of discontinuous progress around the development of AGI
Philippe Aghion, Benjamin F. Jones, & Charles I. Jones:
Artificial Intelligence and Economic Growth
Ben Garfinkel:
How sure are we about this AI stuff?
Sebastian Benthall:
Don’t Fear the Reaper: Refuting Bostrom’s Superintelligence Argument
Paul Christiano:
Takeoff speeds
Scott Aaronson:
The Singularity Is Far
Jeff Hawkins:
The Future of Artificial Intelligence – Up Next (YouTube video)
The Terminator Is Not Coming. The Future Will Thank Us.
Ben Goertzel:
The Singularity Institute’s Scary Idea (and Why I Don’t Buy It)
Superintelligence: fears, promises, and potentials
Nicholas Agar:
Don’t Worry about Superintelligence
Steven Pinker:
We’re told to fear robots. But why do we think they’ll turn on us?
MIT AGI: AI in the Age of Reason (Steven Pinker) (YouTube video)
Maciej Cegłowski:
Superintelligence: The Idea That Eats Smart People
Timothy B. Lee:
Will artificial intelligence destroy humanity? Here are 5 reasons not to worry.
Neil Lawrence:
Future of AI 6. Discussion of ‘Superintelligence: Paths, Dangers, Strategies’
François Chollet:
The impossibility of intelligence explosion
Kevin Kelly:
The Myth of a Superhuman AI
Paul Allen:
The Singularity Isn't Near
Alexander Kruel:
AI Risk Critiques: Index (links to many articles)
Alessio Plebe & Pietro Perconti:
The slowdown hypothesis (extended abstract)
Tim Tyler:
The Intelligence Explosion Is Happening Now
Magnus Vinding:
Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique
Kaj Sotala's short summary and review of Reflections on Intelligence:
Disjunctive AI scenarios: Individual or collective takeoff?
Various:
Tech Luminaries Address Singularity
Perspectives on intelligence explosion
Not directly about the subject, but still worth reading, in my opinion: