It seems to me that there is a great asymmetry between the attention devoted to arguments in favor of the plausibility of artificial intelligence FOOM/hard take-off scenarios and the attention paid to counter-arguments. This is not so strange, given that there are widely publicized full-length books emphasizing the arguments in favor, such as Nick Bostrom’s Superintelligence and James Barrat’s Our Final Invention, while there seems to be no such book emphasizing the opposite. And people who are skeptical of hard take-off scenarios, and who think other things are more important to focus on, will naturally tend to write books on those other, in their view more important, things. Consequently, they tend to present their arguments in only an essay or a few blog posts, not full-length books. The purpose of this reading list is to help correct this asymmetry a bit by pointing people toward some of these blog posts and essays.
I think it is important to get these arguments out there, as it seems to me that we otherwise risk having too one-sided a view of this issue, and, not least, risk overlooking other things that may be more important to focus on.
I should note that I do not myself necessarily agree with all the claims and arguments made in these various resources, though I do think all the articles make at least some good points. I should also note that not all the following authors rule out the possibility of a hard take-off; some merely consider other scenarios more likely.
Robin Hanson:
The Hanson-Yudkowsky AI-Foom Debate (co-authored with Eliezer Yudkowsky)
Yudkowsky vs Hanson — Singularity Debate (YouTube video)
Timothy B. Lee:
AI Risk Critiques: Index (links to many articles)
My own book on the subject:
Short summary and review of my book by Kaj Sotala:
Not directly about the subject, but books that are, in my opinion, still relevant to read: