Reprint
Artificial Superintelligence
Coordination & Strategy
Edited by
April 2020
206 pages
- ISBN 978-3-03921-855-4 (Hardback)
- ISBN 978-3-03921-854-7 (PDF)
This is a Reprint of the Special Issue Artificial Superintelligence: Coordination & Strategy that was published in Computer Science & Mathematics.
Summary
In addition to the steadily growing body of technical work on building safe AI systems, attention in the AI safety community has increasingly turned to strategic considerations of coordination between the relevant actors in AI and AI safety. This shift has several causes: multiplier effects, pragmatism, and urgency. Given the benefits of coordination between those working towards safe superintelligence, this book surveys promising research in this emerging field of AI safety coordination. On a meta-level, the hope is that this book can serve as a map that informs those working on AI coordination about other promising efforts. While this book focuses on AI safety coordination, coordination is also important for most other known existential risks (e.g., biotechnology risks) and for future, human-made existential risks. Thus, while most of the coordination strategies in this book are specific to superintelligence, we hope that some insights yield “collateral benefits” for the reduction of other existential risks by creating an overall civilizational framework that increases robustness, resiliency, and antifragility.
Formats
- Hardback
License and Copyright
© 2020 by the authors; CC BY-NC-ND license
Keywords
AI welfare science; AI welfare policies; sentiocentrism; antispeciesism; AI safety; value sensitive design; VSD; design for values; safe for design; AI; ethics; existential risk; AI alignment; superintelligence; AI arms race; multi-agent systems; specification gaming; artificial intelligence safety; Goodhart’s Law; machine learning; moral and ethical behavior; artilects; supermorality; policymaking process; AI risk; typologies of AI policy; AI governance; autonomous distributed system; conflict; distributed goals management; terraforming; technological singularity; AI forecasting; technology forecasting; scenario analysis; scenario mapping; transformative AI; scenario network mapping; judgmental distillation mapping; holistic forecasting framework; artificial general intelligence; AGI; blockchain; distributed ledger; AI containment; AI value alignment; ASILOMAR; future-ready; strategic oversight; artificial superintelligence; artificial intelligence; forecasting AI behavior; predictive optimization; simulations; Bayesian networks; adaptive learning systems; pedagogical motif; explainable AI; AI Thinking; human-in-the-loop; human-centric reasoning; policy making on AI