Can it be taken for granted that humans will remain in control in a situation where a breakthrough in artificial intelligence (AI) has left us no longer the foremost creatures on our planet in terms of general intelligence? This question lies at the heart of arguments put forth in recent years by philosopher Nick Bostrom, computer scientist Stuart Russell, physicist Max Tegmark and others, arguments that raise dire concerns about such scenarios. Others dismiss these concerns as a useless (or even dangerous) distraction. I will attempt a cool-headed and balanced evaluation of whether apocalyptic AI scenarios are worth paying attention to.