Ethical Framework for Designing Autonomous Intelligent Systems
Abstract
1. Introduction
2. Attempts to Approach Ethical Issues in Design
3. A Framework to Discuss and Analyze Ethical Issues
3.1. Identification of Ethical Principles and Values Affected by AIS
3.2. Identification of Context-Specific Ethical Values
- Embody the highest ideals of human rights.
- Prioritize the maximum benefit to humanity and the natural environment.
- Mitigate risks and negative impacts as AI/AS evolve as socio-technical systems.
- Identify the norms and values of a specific community affected by AIS.
- Implement the norms and values of that community within AIS.
- Evaluate the alignment and compatibility of those norms and values between the humans and AIS within that community.
- Ethical Purpose: Ensuring respect for fundamental rights, principles and values when developing, deploying and using AI.
- Realization of Trustworthy AI: Ensuring implementation of ethical purpose, as well as technical robustness when developing, deploying and using AI.
- Requirements for Trustworthy AI: To be continuously evaluated, addressed and assessed in the design and use through technical and non-technical methods.
3.3. Analysis and Understanding of Ethical Issues within the Context
- How likely is it that the expected artifact will promote the expected values?
- To what extent are the promised values desirable for society?
- How likely is it that technology will instrumentally bring about a desirable consequence?
- Usage situation: Transport passengers between two pre-defined points across a river as a part of city public transportation; journey time—20 min.
- Design goals: (1) Enable a reliable, frequent service during operation hours; (2) reduce costs of public transport service and/or enable crossing in a location where a bridge can’t be used; and (3) increase the safety of passengers.
- Operational model: Guide passengers on-board using relevant automatic barriers, signage, and voice announcements; close the ramp when all passengers are on board; autonomously plan the route, considering other traffic and obstacles; make departure decision according to environmental conditions and technical systems status; detach from dock; cross the river, avoiding crossing traffic and obstacles; attach to opposite dock; open ramp, allow disembarkation of passengers; batteries are charged when docked; maintenance operations carried out during night when there is no service; remote operator monitors the operation in a Shore Control Center (SCC), with the possibility to intervene if needed.
- Stakeholders: Remote operator: In an SCC, with access to data provided by ship sensors. Monitors 3 similar vessels simultaneously; passengers (ticket needed to enter the boarding area), max 100 passengers per crossing; maintenance personnel; crossing boat traffic on the route; bystanders on the shore (not allowed to enter the boarding area); people living/having recreational cottages nearby; ship owner/service provider; shipbuilder, insurance company, classification society, traffic authorities.
- Environment: A river within a European city (EU regulations applicable); crossing traffic on the river; varying weather conditions (river does not freeze, but storms/snow etc. can be expected).
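To make the case description above concrete, the stakeholder and value lists can be recorded as structured data and cross-checked systematically. The sketch below is illustrative only: the data structure, the shortened value list, and the question template are assumptions for this example, not part of the published framework.

```python
# Illustrative sketch: record the autonomous ferry case as structured data,
# then generate one discussion question per (ethical value, stakeholder) pair.
# All names and the question wording are hypothetical choices for this example.

CASE = {
    "usage_situation": "Passenger crossing between two fixed docks, ~20 min",
    "stakeholders": [
        "remote operator", "passengers", "maintenance personnel",
        "crossing boat traffic", "bystanders on shore", "nearby residents",
        "ship owner/service provider", "shipbuilder", "insurance company",
        "classification society", "traffic authorities",
    ],
    # A subset of the values discussed in Section 3.2, chosen for illustration.
    "ethical_values": ["safety", "autonomy", "privacy", "responsibility",
                       "transparency", "justice"],
}

def analysis_checklist(case):
    """Generate one discussion question per (value, stakeholder) pair."""
    return [
        f"How does the design affect {value} for {stakeholder}?"
        for value in case["ethical_values"]
        for stakeholder in case["stakeholders"]
    ]

questions = analysis_checklist(CASE)
print(len(questions))   # 6 values x 11 stakeholders = 66 questions
print(questions[0])
```

Such a cross-product checklist does not resolve any ethical issue by itself, but it makes visible which value/stakeholder combinations the design discussion has not yet covered.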
4. Discussion
Author Contributions
Funding
Conflicts of Interest
References
- NFA Norwegian Society of Automatic Control. Autonomous Systems: Opportunities, and Challenges for the Oil & Gas Industry; NFA: Kristiansand, Norway, 2012.
- Montewka, J.; Wrobel, K.; Heikkila, E.; Valdez-Banda, O.; Goerlandt, F.; Haugen, S. Probabilistic Safety Assessment and Management. In Proceedings of the PSAM 14, Los Angeles, CA, USA, 16–21 September 2018.
- Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An Ethical Framework for a Good AI Society. Minds Mach. 2018, 28, 689–707.
- IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. Ethically Aligned Design, Version One, A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems. 2016. Available online: https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v1.pdf? (accessed on 7 March 2019).
- IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design, Version 2 for Public Discussion. A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems. 2017. Available online: https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf (accessed on 7 March 2019).
- Brynjolfsson, E.; Mcafee, A. The Business of Artificial Intelligence. What it Can—And Cannot—Do for your Organization. 2017. Available online: http://asiandatascience.com/wp-content/uploads/2017/12/Big-Idea_Artificial-Intelligence-For-Real_The-AI-World-Confernece-Expo-Decembe-11_13-2017.pdf (accessed on 11 March 2019).
- Floridi, L. (Ed.) Information and Computer Ethics; Cambridge University Press: Cambridge, UK, 2008.
- Dignum, V. Ethics in artificial intelligence: Introduction to the special issue. Ethics Inf. Technol. 2018, 20, 1–3.
- Anderson, M.; Anderson, S. (Eds.) Machine Ethics; Cambridge University Press: Cambridge, UK, 2011.
- Müller, V.C. (Ed.) Risks of Artificial Intelligence; CRC Press, Taylor & Francis Group: Boca Raton, FL, USA, 2016.
- Kitchin, R.; Dodge, M. Code/Space: Software and Everyday Life; MIT Press: Cambridge, MA, USA, 2011.
- Lucivero, F. Ethical Assessments of Emerging Technologies: Appraising the Moral Plausibility of Technological Visions; The International Library of Ethics, Law and Technology; Springer: Heidelberg, Germany, 2016; Volume 15.
- Anderson, M.; Anderson, S. The status of Machine Ethics: A Report from the AAAI Symposium. Minds Mach. 2007, 17, 1–10.
- Bynum, T. A Very Short History of Computer Ethics. 2000. Available online: http://www.cs.utexas.edu/~ear/cs349/Bynum_Short_History.html (accessed on 11 March 2019).
- Pierce, M.; Henry, J. Computer ethics: The role of personal, informal, and formal codes. J. Bus. Ethics 1996, 15, 425–437.
- Veruggio, G. The birth of roboethics. In Proceedings of the ICRA 2005, IEEE International Conference on Robotics and Automation, Workshop on Robo-Ethics, Barcelona, Spain, 8 April 2005.
- Anderson, S. The unacceptability of Asimov’s three laws of robotics as a basis for machine ethics. In Machine Ethics; Anderson, M., Anderson, S., Eds.; Oxford University Press: New York, NY, USA, 2011.
- Powers, T. Prospects for a Kantian Machine. In Machine Ethics; Anderson, M., Anderson, S., Eds.; Oxford University Press: New York, NY, USA, 2011; pp. 464–475.
- Anderson, M.; Anderson, S. Creating an ethical intelligent agent. AI Mag. 2007, 28, 15.
- Crawford, K. Artificial Intelligence’s White Guy Problem. The New York Times. 25 June 2016. Available online: https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html (accessed on 20 January 2019).
- Angwin, J.; Larson, J.; Mattu, S.; Kirchner, L. Machine Bias: There’s Software Used across the Country to Predict Future Criminals, and It’s Biased against Blacks; Pro Publica: New York, NY, USA, 2016.
- Friedman, B.; Kahn, P.H., Jr. Human values, ethics, and design. In The Human-Computer Interaction Handbook, Fundamentals, Evolving Technologies and Emerging Applications; Jacko, J.A., Sears, A., Eds.; Lawrence Erlbaum: Mahwah, NJ, USA, 2003; pp. 1177–1201.
- Friedman, B.; Kahn, P.H., Jr.; Borning, A. Value sensitive design and information systems. In Human-Computer Interaction in Management Information Systems: Applications; M.E. Sharpe, Inc.: New York, NY, USA, 2006; Volume 6, pp. 348–372.
- Leikas, J. Life-Based Design—A Holistic Approach to Designing Human-Technology Interaction; VTT Publications: Helsinki, Finland, 2009; p. 726.
- Saariluoma, P.; Cañas, J.J.; Leikas, J. Designing for Life—A Human Perspective on Technology Development; Palgrave MacMillan: London, UK, 2016.
- Von Schomberg, R. A Vision of Responsible Research and Innovation. In Responsible Innovation; Owen, R., Bessant, J., Heintz, M., Eds.; Wiley: Oxford, UK, 2013; pp. 51–74.
- European Commission. Options for Strengthening Responsible Research and Innovation, 2013. Available online: https://ec.europa.eu/research/science-society/document_library/pdf_06/options-for-strengthening_en.pdf (accessed on 20 January 2019).
- European Commission. Responsible Research and Innovation—Europe’s Ability to Respond to Societal Challenges, 2012. Available online: http://www.scientix.eu/resources/details?resourceId=4441 (accessed on 20 January 2019).
- Porcari, A.; Borsella, E.; Mantovani, E. (Eds.) Responsible-Industry: Executive Brief, Implementing Responsible Research and Innovation in ICT for an Ageing Society; Italian Association for Industrial Research: Rome, Italy, 2015; Available online: http://www.responsible-industry.eu/ (accessed on 20 January 2019).
- Jonsen, A.R.; Toulmin, S. The Abuse of Casuistry: A History of Moral Reasoning; University of California Press: Berkeley, CA, USA, 1988.
- Kuczewski, M. Casuistry and principlism: The convergence of method in biomedical ethics. Theor. Med. Bioethics 1998, 19, 509–524.
- Beauchamp, T.; Childress, J.F. Principles of Biomedical Ethics, 5th ed.; Oxford University Press: Oxford, UK; New York, NY, USA, 2001.
- Mazzucelli, C.; Visvizi, A. Querying the ethics of data collection as a community of research and practice: The movement toward the “Liberalism of Fear” to protect the vulnerable. Genocide Stud. Prev. 2017, 11, 4.
- Visvizi, A.; Mazzucelli, C.; Lytras, M. Irregular migratory flows: Towards an ICTs’ enabled integrated framework for resilient urban systems. J. Sci. Technol. Policy Manag. 2017, 8, 227–242.
- Riegel, J. Confucius. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; Stanford University: Stanford, CA, USA, 2013.
- Aristotle. Nicomachean ethics. In The Complete Works of Aristotle; Barnes, J., Ed.; The Revised Oxford Translation; Princeton University Press: Princeton, NJ, USA, 1984; Volume 2.
- Hansson, S.O. (Ed.) The Ethics of Technology: Methods and Approaches; Rowman & Littlefield: London, UK, 2017.
- Beauchamp, T. The Principle of Beneficence in Applied Ethics. In Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; Stanford University: Stanford, CA, USA, 2008.
- Miller, D. Justice. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; Stanford University: Stanford, CA, USA, 2017.
- Rawls, J. A Theory of Justice; Revised Edition; Harvard University Press: Cambridge, MA, USA, 1999.
- Kant, I. Grounding for the Metaphysics of Morals. In Ethical Philosophy; Kant, I., Ed.; Translated by Ellington, J.W.; Hackett Publishing Co.: Indianapolis, IN, USA, 1983. First published in 1785.
- Mill, J.S. On Liberty; Spitz, D., Ed.; Norton: New York, NY, USA, 1975. First published in 1859.
- Shrader-Frechette, K.S. Environmental Justice: Creating Equality, Reclaiming Democracy; Oxford University Press: Oxford, UK; New York, NY, USA, 2002.
- Rokeach, M. Understanding Human Values: Individual and Societal; The Free Press: New York, NY, USA, 1979.
- Schwartz, S. Universals in the content and structure of values: Theoretical advances and empirical tests in 20 countries. In Advances in Experimental Social Psychology; Zanna, M.P., Ed.; Elsevier Science Publishing Co Inc.: San Diego, CA, USA, 1992; Volume 25, pp. 1–65.
- Schwartz, S.; Melech, G.; Lehmann, A.; Burgess, S.; Harris, M.; Owens, V. Extending the cross-cultural validity of the theory of basic human values with a different method of measurement. J. Cross-Cult. Psychol. 2001, 32, 519–542.
- UN United Nations. Universal Declaration of Human Rights UDHR. Available online: http://www.un.org/en/universal-declaration-human-rights/ (accessed on 5 June 2018).
- EU Treaties. Available online: https://europa.eu/european-union/law/treaties_en (accessed on 5 June 2018).
- EU Charter of Fundamental Rights. Available online: http://www.europarl.europa.eu/charter/pdf/text_en.pdf (accessed on 5 June 2018).
- Borning, A.; Muller, M. Next steps for value sensitive design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; pp. 1125–1134, ISBN 978-1-4503-1015-4/12/05.
- Habermas, J. Discourse Ethics: Notes on a Program of Philosophical Justification. In Moral Consciousness and Communicative Action; Habermas, J., Ed.; Translated by Lenhardt, C. and Nicholsen, S.W.; Polity Press: Cambridge, UK, 1992; pp. 43–115. First published in 1983.
- European Committee for Standardization (CEN) Workshop Agreement. CWA Ref. No: 17145-1: 2017 E, 17145–2: 2017 E. Available online: http://satoriproject.eu/media/CWA_part_1.pdf (accessed on 7 March 2019).
- European Commission’s High-Level Expert Group on Artificial Intelligence. Draft Ethics Guidelines for Trustworthy AI. December 2018. Available online: https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai (accessed on 20 January 2019).
- Asilomar Conference 2017. Asilomar AI Principles. Available online: https://futureoflife.org/ai-principles/?cn-reloaded=1 (accessed on 15 October 2018).
- European Group on Ethics in Science and New Technologies. Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems. Available online: https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf (accessed on 16 June 2018).
- Dignum, V. The Art of AI–Accountability, Responsibility, Transparency. Available online: https://medium.com/@virginiadignum/the-art-of-ai-accountability-responsibility-transparency-48666ec92ea5 (accessed on 15 October 2018).
- Leikas, J.; Koivisto, R.; Gotcheva, N. Ethics in design of Autonomous Intelligent Systems. In Effective Autonomous Systems, VTT Framework for Developing Effective Autonomous Systems; Heikkilä, E., Ed.; VTT Technical Research Centre of Finland Ltd.: Espoo, Finland, 2018.
- Ermann, M.D.; Shauf, M.S. Computers, Ethics, and Society; Oxford University Press: New York, NY, USA, 2002.
- Carroll, J.M. Scenario-Based Design: Envisioning Work and Technology in System Development; John Wiley & Sons: New York, NY, USA, 1995.
- Carroll, J.M. Five reasons for scenario-based design. Interact. Comput. 2000, 13, 43–60.
- Rosson, M.B.; Carroll, J.M. Scenario-Based Design. In The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications; Jacko, J.A., Sears, A., Eds.; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 2002; pp. 1032–1050.
Expert Group/Publication | Ethical Value/Principle | Context | Technology |
---|---|---|---|
Friedman et al. (2003; 2006) [22,23] | Human welfare; Ownership and property; Freedom from bias; Universal usability; Courtesy; Identity; Calmness; Accountability; (Environmental) sustainability | Value-sensitive design | ICT |
Ethically Aligned Design (EAD) IEEE Global Initiative (2016, 2017) [4,5] | Human benefit; Responsibility; Transparency; Education and awareness | Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems: Insights and recommendations for the AI/AS technologists and for IEEE standards | AI/AS |
Asilomar AI Principles (2017) [54] | Safety; Failure and judicial transparency; Responsibility; Value alignment; Human values; Privacy and liberty; Shared benefit and prosperity; Human control; Non-subversion; Avoiding arms race | Beneficial AI to guide the development of AI | AI |
The European Group on Ethics in Science and New Technologies (EGE) (2017) [55] | Human dignity; Autonomy; Responsibility; Justice; Equality and solidarity; Democracy; Rule of law and accountability; Security; Safety; Bodily and mental integrity; Data protection and privacy; Sustainability | Statement on Artificial Intelligence, Robotics and Autonomous Systems | AI, Robotics, AS |
European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) (2018) [53] | Respect for human dignity; Freedom of the individual; Respect for democracy, justice and the rule of law; Equality, non-discrimination and solidarity; Citizens’ rights; Beneficence: “Do Good”; Non-maleficence: “Do no Harm”; Autonomy: “Preserve Human Agency”; Justice: “Be Fair”; Explicability: “Operate transparently” | Trustworthy AI made in Europe | AI |
AI4People (2018) [3] | Beneficence; Non-maleficence; Autonomy; Justice; Explicability | An ethical framework for a good AI society | AI |
Ethical Value | Tentative Topics for Discussion |
---|---|
Integrity and human dignity | Individuals should be respected, and AIS solutions should not violate their dignity as human beings, their rights, freedoms and cultural diversity. AIS should not threaten a user’s physical or mental health. |
Autonomy | Individual freedom and choice. Users should have the ability to control, cope with and make personal decisions about how to live on a day-to-day basis, according to their own rules and preferences. |
Human control | Humans should choose how or whether to delegate decisions to AIS, to accomplish human-chosen objectives.* |
Responsibility | Concerns the role of people and the capability of AIS to answer for decisions and to identify errors or unexpected results. AIS should be designed so that their effects align with a plurality of fundamental human values and rights. |
Justice, equality, fairness and solidarity | AIS should contribute to global justice and equal access. Services should be accessible to all user groups despite any physical or mental deficiencies. This principle of (social) justice goes hand in hand with the principle of beneficence: AIS should benefit and empower as many people as possible. |
Transparency | If an AIS causes harm, it should be possible to ascertain why. The mechanisms through which the AIS makes decisions and learns to adapt to its environment should be described, inspected and reproduced. Key decision processes should be transparent and decisions should be the result of democratic debate and public engagement. |
Privacy | People should have the right to access, manage and control the data they generate. |
Reliability | AIS solutions should be sufficiently reliable for the purposes for which they are being used. Users need to be confident that the collected data is reliable, and that the system does not forward the data to anyone who should not have it. |
Safety | Safety is an emergent property of a socio-technical system, created daily by decisions and activities. The safety of a system should be verified where applicable and feasible. Possible liability and insurance implications need to be considered. |
Security | AI should be secure in terms of malicious acts and intentional violations (unauthorized access, illegal transfer, sabotage, terrorism, etc.). Security of a system should be verified where applicable and feasible. |
Accountability | Decisions and actions should be explained and justified to users and other stakeholders with whom the system interacts. |
Explicability | Also ‘explainability’; necessary for building and maintaining citizens’ trust (captures the need for accountability and transparency), and a precondition for achieving informed consent from individuals. |
Sustainability | The risks of AIS misuse should be minimized through awareness and education. Note the “precautionary principle”: Scientific uncertainty about a risk or danger should not prevent action to protect the environment or to stop the use of harmful technology. |
Role of technology in society | Governance: Society should use AIS in a way that increases the quality of life and does not cause harm to anyone. Depending on what type of theory of justice a society is committed to, it may stress e.g., the principle of social justice (equality and solidarity), or the principle of autonomy (and values of individual freedom and choice). |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Leikas, J.; Koivisto, R.; Gotcheva, N. Ethical Framework for Designing Autonomous Intelligent Systems. J. Open Innov. Technol. Mark. Complex. 2019, 5, 18. https://doi.org/10.3390/joitmc5010018