Should Apocalyptic AI Scenarios Be Taken Seriously? †

Olle Häggström

Department of Mathematical Sciences, Chalmers University of Technology, 412 58 Göteborg, Sweden

† Presented at the IS4SI 2017 Summit DIGITALISATION FOR A SUSTAINABLE SOCIETY, Gothenburg, Sweden, 12–16 June 2017.

Proceedings 2017, 1(3), 161; https://doi.org/10.3390/IS4SI-2017-04083
Published: 9 June 2017

Abstract
Can it be taken for granted that humans will remain in control in a situation where a breakthrough in artificial intelligence (AI) has led to our no longer being the foremost creatures on our planet in terms of general intelligence? This question lies at the heart of arguments put forth in recent years by philosopher Nick Bostrom, computer scientist Stuart Russell, physicist Max Tegmark and others—arguments that raise dire concerns about such scenarios. Others claim that such concerns are a useless (or even dangerous) distraction. I will attempt a cool-headed and balanced evaluation of whether apocalyptic AI scenarios are worth paying attention to.

