A Practicable Operationalisation of Meaningful Human Control
Abstract
1. Introduction
2. Background
2.1. Historical Overview
2.2. The Need for a Working Framework of MHC
3. Methods
3.1. Finding Common Ground: A Demonstration
3.2. Methodological Process
by using “context-control” the predictability increases, by “understanding the weapon” its use will be more predictable, and by “understanding the environment” the autonomous weapon system’s interaction with that environment will be more predictable.
3.3. Data Collection and Processing
- Awareness
- Weaponeering
- Context Control
- Predictability
- Accountability
4. Results: The Integrated Framework
5. Facets
5.1. Awareness
Control is first and foremost based on knowledge of the weapon system. A thorough understanding by the operator and the commander of the selected weapon’s functions and effects, coupled with contextual information (such as awareness of the situation on the ground, what other objects are in the target area, etc.), contributes to the assessment of whether a weapon is appropriate for a particular attack.
- Ignorance of the system. The controller is poorly trained (Asaro 2012), has an incomplete understanding of the system (Cummings 2004), does not understand how it makes decisions (UNIDIR 2016), or insufficiently appreciates its capabilities (Santoni de Sio and van den Hoven 2018).
- Automation bias. The controller places too much trust in the AI, which leads to complacency (Chengeta 2016; Leveringhaus 2016), while in reality they unjustifiably overestimate, or are ignorant of, the AI’s actual accuracy or reliability rates (Parasuraman et al. 2000).
- Lack of situational awareness. The controller lacks sufficient awareness of the environment in which the system is operating, and of how that environment can affect the system, to make a correct intervention (Cummings 2004; ICRC 2018).
5.2. Weaponeering
5.3. Context Control
5.4. Predictability
5.5. Accountability
5.6. Assessment Awareness
6. Processes
6.1. Awareness Informs Weaponeering and Context Control
6.2. Awareness and Context Control Permit Prediction
6.3. Processes Leading to Accountability
[T]he limits of control over, or the unpredictability of, an autonomous weapon system could make it difficult to find individuals involved in the programming and deployment of the weapon liable for serious violations of IHL. They may not have the knowledge or intent required for such a finding, owing to the fact that the machine can select and attack targets independently.
7. Concluding Remarks
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Additional Protocol I. 1977. Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts. Adopted 8 June 1977, Entered into Force 7 December 1978, 1125 UNTS 3. Geneva: ICRC. [Google Scholar]
- Anderson, Kenneth, and Matthew C. Waxman. 2013. Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can. In American University Washington College of Law Research Paper No. 2013-11. Stanford: Stanford University. [Google Scholar]
- Article 36. 2013a. Killer Robots: UK Government Policy on Fully Autonomous Weapons. Technical Report. London: Article 36. [Google Scholar]
- Article 36. 2013b. Structuring debate on autonomous weapons systems. Paper presented at Technical Report Memorandum for delegates to the Convention on Certain Conventional Weapons (CCW), Geneva, Switzerland, November 14–15. [Google Scholar]
- Article 36. 2016. Key elements of meaningful human control, Background paper to comments prepared by Richard Moyes, Managing Partner, Article 36. Paper presented at Technical Report Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), Geneva, Switzerland, April 11–15. [Google Scholar]
- Article 36. 2017. Autonomous weapon systems: Evaluating the capacity for ‘meaningful human control’ in weapon review processes. Paper presented at Technical Report Discussion paper for the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts meeting on Lethal Autonomous Weapons Systems (LAWS), Geneva, Switzerland, November 13–17. [Google Scholar]
- Arya, Vijay, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, and et al. 2019. One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques. arXiv arXiv:1909.03012. [Google Scholar]
- Asaro, Peter. 2012. On banning autonomous weapon systems: Human rights, automation, and the dehumanization of lethal decision-making. International Review of the Red Cross 94: 687–709. [Google Scholar] [CrossRef] [Green Version]
- Beerli, Christine. 2014. The challenges raised by increasingly autonomous weapons. Paper presented at Statement at the ICRC Autonomous Weapons Panel, Geneva, Switzerland, June 24. [Google Scholar]
- Billings, Charles E. 1991. Human-Centered Aviation Automation: Principles and Guidelines. Technical Report NASA Technical Memorandum 110381. Moffett Field: NASA.
- Boardman, Michael, and Fiona Butcher. 2019. An Exploration of Maintaining Human Control in AI Enabled Systems and the Challenges of Achieving It. In Workshop on Big Data Challenge-Situation Awareness and Decision Support. Brussels: North Atlantic Treaty Organization Science and Technology Organization. Porton Down: Dstl Porton Down. [Google Scholar]
- Boddens Hosang, J. F. R. 2021. Control Through ROE in Military Operations: Autonomous Weapons and Cyber Operations as Reasons to Change the Classic ROE Concept? In Military Operations and the Notion of Control Under International Law. Edited by R. Bartels, J. C. van den Boogaard, P. A. L. Ducheine, E. Pouw and J. Voetelink. Berlin: Springer, pp. 393–420. [Google Scholar]
- Boer, Alexander, and Tom van Engers. 2013. Agile: A Problem-Based Model of Regulatory Policy Making. Artificial Intelligence and Law 21: 399–423. [Google Scholar] [CrossRef]
- Boothby, William. 2021. Control in Weapons Law. In Military Operations and the Notion of Control Under International Law. Edited by R. Bartels, J. C. van den Boogaard, P. A. L. Ducheine, E. Pouw and J. Voetelink. Berlin: Springer, pp. 369–92. [Google Scholar]
- Boothby, William H. 2018. Dehumanization: Is There a Legal Problem Under Article 36? In Dehumanization of Warfare: Legal Implications of New Weapon Technologies. Edited by W. H. von Heinegg, R. Frau and T. Singer. New York: Springer International Publishing AG, pp. 21–52. [Google Scholar]
- Boothby, William H. 2019. Highly Automated and Autonomous Technologies. In New Technologies and the Law of War and Peace. Edited by W. H. Boothby. Cambridge: Cambridge University Press, pp. 137–81. [Google Scholar]
- Bothe, Michael, Karl Josef Partsch, and Waldemar A. Solf, eds. 2013. New Rules for Victims of Armed Conflict: Commentary on the Two 1977 Protocols Additional to the Geneva Conventions of 1949, 2nd ed. Leiden: Martinus Nijhoff. [Google Scholar]
- Brazil. 2019. Statement by Brazil. Paper presented at Technical Report 2019 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS), Geneva, Switzerland, March 25–29. [Google Scholar]
- Campaign to Stop Killer Robots. 2012. About Us. Available online: https://www.stopkillerrobots.org/about-us/#about (accessed on 11 March 2020).
- Chengeta, Thompson. 2016. Defining the emerging notion of ‘Meaningful Human Control’ in autonomous weapon systems. International Law and Politics 49: 833–90. [Google Scholar]
- Corn, Geoffrey S. 2014. War, law, and the oft overlooked value of process as a precautionary measure. Pepperdine Law Review 42: 419–66. [Google Scholar]
- Crootof, Rebecca. 2015. The Killer Robots are here: Legal and Policy Implications. Cardozo Law Review 36: 1837–915. [Google Scholar]
- Crootof, Rebecca. 2016. A Meaningful Floor for “Meaningful Human Control”. Temple International & Comparative Law Journal 30: 53–62. [Google Scholar]
- Cummings, M. L. 2004. Automation Bias in Intelligent Time Critical Decision Support Systems. Paper presented at American Institute of Aeronautics and Astronautics 1st Intelligent Systems Technical Conference, Chicago, IL, USA, September 20–22. [Google Scholar]
- Curtis E. Lemay Center. 2019. Air Force Doctrine Publication 3-60—Targeting. Chicago: Curtis E. Lemay Center. [Google Scholar]
- de Jonogh, Sandra. 2019. Statement of the Netherlands delivered at the Group of Experts on LAWS, Geneva, Switzerland, April 26. Technical Report. [Google Scholar]
- Defense Innovation Board. 2019. AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense Defense Innovation Board. Technical Report. Washington, DC: Department of Defense. [Google Scholar]
- Defense Science Board. 2012. Memorandum. In The Role of Autonomy in DoD Systems. Washington, DC: Department of Defense. [Google Scholar]
- Department of the Army. 2019. The Operations Process. Technical Report ADP 5-0, Washington, DC, 31 July 2019. Replacing ADP 5-0, Dated 17 May 2012, and ADRP 5-0, Dated 17 May 2012. Washington, DC: Department of the Army. [Google Scholar]
- Ducheine, Paul, and Terry Gill. 2018. From Cyber Operations to Effects: Some Targeting Issues. Militair Rechtelijk Tijdschrift 111: 37–41. [Google Scholar]
- Ekelhof, Merel. 2016. Human control in the targeting process. In Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons. Versoix: ICRC, pp. 53–56. [Google Scholar]
- Ekelhof, Merel. 2019. Moving Beyond Semantics on Autonomous Weapons: Meaningful Human Control in Operation. Global Policy 10: 343–48. [Google Scholar] [CrossRef] [Green Version]
- Eklund, Amanda Musco. 2020. Meaningful Human Control of Autonomous Weapon Systems: Definitions and Key Elements in the Light of International Humanitarian Law and International Human Rights Law. Stockholm: Totalförsvarets Forskningsinstitut. [Google Scholar]
- Estreicher, Samuel. 2011. Privileging Asymmetric Warfare? Part I: Defender Duties under International Humanitarian Law. Chicago Journal of International Law 11: 425. [Google Scholar]
- Fleck, Dieter, ed. 2013. The Handbook of International Humanitarian Law, 3rd ed. Oxford: Oxford University Press. [Google Scholar]
- Fleming, Nic. 2009. Campaign Asks for International Treaty to Limit War Robots. Available online: https://www.newscientist.com/article/dn17887-campaign-asks-for-international-treaty-to-limit-war-robots (accessed on 11 March 2020).
- Frank, Andrew U., Steffen Bittner, and Martin Raubal. 2001. Spatial and Cognitive Simulation with Multi-agent Systems. In Spatial Information Theory, Foundations of Geographic Information Science International Conference, COSIT 2001, Morro Bay, CA, USA, 19–23 September 2001. Edited by Daniel R. Montello. Berlin: Springer. [Google Scholar]
- Freitas, Alex A. 2014. Comprehensible classification models. ACM SIGKDD Explorations Newsletter 15: 1–10. [Google Scholar] [CrossRef]
- Future of Life Institute. 2015. Autonomous Weapons: An Open Letter from AI & Robotics Researchers. Available online: https://futureoflife.org/open-letter-autonomous-weapons (accessed on 3 August 2017).
- Garcia, Denise. 2014. ICRAC Statement on Technical Issues to the 2014 UN CCW Expert Meeting, International Committee for Robot Arms Control. Technical Report. ICRAC. Available online: https://www.icrac.net/icrac-statement-on-technical-issues-to-the-2014-un-ccw-expert-meeting (accessed on 30 May 2021).
- Geiß, Robin, and Henning Lahmann. 2017. Autonomous weapons systems: A paradigm shift for the law of armed conflict? In Research Handbook on Remote Warfare. Edited by J. D. Ohlin. Cheltenham: Edward Elgar, pp. 371–404. [Google Scholar]
- Gill, Terry D., and Dieter Fleck, eds. 2010. The Handbook of the International Law of Military Operations. Oxford: Oxford University Press. [Google Scholar]
- Goussac, Netta. 2019. Safety Net or Tangled Web: Legal Reviews of AI in Weapons and War-Fighting. Available online: https://blogs.icrc.org/law-and-policy/2019/04/18/safety-net-tangled-web-legal-reviews-ai-weapons-war-fighting (accessed on 26 May 2021).
- Greece. 2019. Potential Challenges Posed by Emerging Technologies in the Area of Lethal Autonomous Weapons Systems to International Humanitarian Law. Paper presented at Technical Report Group of Governmental Experts on Lethal Autonomous Weapon Systems (LAWS), Geneva, Switzerland, March 25–29. [Google Scholar]
- Group of Governmental Experts. 2019. Group of Governmental Experts of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects: Draft Report of the 2019 session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons System. Technical Report CCW/GGE.1/2019/CRP.1/Rev.2. Geneva: GGE LAWS. [Google Scholar]
- Heyns, Christof. 2013. Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions. Technical Report A/HRC/23/47. New York: United Nations. [Google Scholar]
- Holland Michel, Arthur. 2020. The Black Box, Unlocked: Predictability and Understandability in Military AI. Technical Report. Geneva: United Nations Institute for Disarmament Research. [Google Scholar] [CrossRef]
- Horowitz, Michael C., and Paul Scharre. 2015. Meaningful Human Control in Weapon Systems: A Primer. Technical Report Center for a New American Security, Working Paper No. 031315. Washington, DC: Center for a New American Security. [Google Scholar]
- Huffman, Walter B. 2012. Margin of Error: Potential Pitfalls of the Ruling in The Prosecutor v. Ante Gotovina. Military Law Review 211: 1–56. [Google Scholar]
- Human Rights Watch. 2012. Losing Humanity: The Case Against Killer Robots. New York: Human Rights Watch. [Google Scholar]
- Human Rights Watch. 2015. Mind the Gap: The Lack of Accountability for Killer Robots. New York: Human Rights Watch. [Google Scholar]
- Human Rights Watch. 2016. Killer Robots and the Concept of Meaningful Human Control. Technical Report Memorandum to the Convention on Conventional Weapons (CCW), Delegates, April 2016. In Cooperation with the Harvard International Human Rights Clinic. Available online: https://www.hrw.org/news/2016/04/11/killer-robots-and-concept-meaningful-human-control (accessed on 29 April 2022).
- ICRC. 2013. Autonomous Weapons: States must Address Major Humanitarian, Ethical Challenges. Available online: https://www.icrc.org/eng/resources/documents/faq/q-and-a-autonomous-weapons.htm (accessed on 18 October 2014).
- ICRC. 2014. Report of the ICRC Expert Meeting on ‘Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects’, 26–28 March 2014, Geneva. Technical Report. Geneva: ICRC. [Google Scholar]
- ICRC. 2016a. Autonomous weapons: Decisions to kill and destroy are a human responsibility. Paper presented at Statement of the ICRC, Read at the Meeting of Experts on Lethal Autonomous Weapons Systems, Geneva, Switzerland, April 11–16. Technical Report. [Google Scholar]
- ICRC. 2016b. Background paper prepared by the International Committee of the Red Cross. In Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons. Versoix: ICRC, pp. 69–85. [Google Scholar]
- ICRC. 2016c. Views of the ICRC on autonomous weapon systems. Paper presented at ICRC prepared for the Meeting of Experts on Lethal Autonomous Weapons Systems, Geneva, Switzerland, April 11–16. [Google Scholar]
- ICRC. 2017. Expert Meeting on Lethal Autonomous Weapons Systems. Geneva: ICRC. [Google Scholar]
- ICRC. 2018. Ethics and Autonomous Weapon Systems: An Ethical Basis for Human Control? Technical Report Group of Governmental Experts of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, CCW/GGE.1/2018/WP. Geneva: GGE LAWS. [Google Scholar]
- ICRC. 2019a. Artificial Intelligence and Machine Learning in Armed Conflict: A Human-Centred Approach. Geneva: ICRC. [Google Scholar]
- ICRC. 2019b. Statement of the International Committee of the Red Cross (ICRC) under agenda item 5(b). Paper presented at Convention on Certain Conventional Weapons (CCW), Group of Governmental Experts on Lethal Autonomous Weapons Systems, Geneva, Switzerland, March 25–29. Technical Report. [Google Scholar]
- IPRAW. 2019a. Statement by iPRAW during the CCW GGE on LAWS: Human Control. Technical Report. Geneva: GGE LAWS. [Google Scholar]
- IPRAW. 2019b. Statement on Agenda 5c (Human Element). Technical Report Statement by iPRAW during the CCW GGE on LAWS: National Weapon Reviews; 23 September 2019. Geneva: GGE LAWS. [Google Scholar]
- Jensen, Eric Talbot. 2018. The Human Nature of International Humanitarian Law, Humanitarian Law & Policy. Available online: https://blogs.icrc.org/law-and-policy/2018/08/23/human-nature-international-humanitarian-law (accessed on 28 May 2021).
- Jensen, Eric Talbot. 2020. The (Erroneous) Requirement for Human Judgment (and Error) in the Law of Armed Conflict. SSRN Electronic Journal 96: 26–57. [Google Scholar] [CrossRef]
- Knight, Will. 2019. Military Artificial Intelligence can be Easily and Dangerously Fooled. Available online: https://www.technologyreview.com/2019/10/21/132277/military-artificial-intelligence-can-be-easily-and-dangerously-fooled (accessed on 25 December 2020).
- Kwik, Jonathan, and Tom Van Engers. 2021. Algorithmic fog of war: When lack of transparency violates the law of armed conflict. Journal of Future Robot Life 2: 43–66. [Google Scholar] [CrossRef]
- Leveringhaus, Alex. 2016. Ethics and Autonomous Weapons. London: Palgrave Macmillan UK. [Google Scholar] [CrossRef]
- Marauhn, Thilo. 2018. Meaningful Human Control—And the Politics of International Law. In Dehumanization of Warfare: Legal Implications of New Weapon Technologies. Edited by W. H. von Heinegg, R. Frau and T. Singer. New York: Springer International Publishing AG, pp. 207–18. [Google Scholar]
- Meier, Michael W. 2017. The strategic implications of lethal autonomous weapons. In Research Handbook on Remote Warfare. Edited by J. D. Ohlin. Cheltenham: Edward Elgar, pp. 443–78. [Google Scholar]
- Miller, Tim. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267: 1–38. [Google Scholar] [CrossRef]
- Ministère des Armées (France). 2019. L’intelligence Artificielle au Service de la défense. Paris: Ministère des Armées (France). [Google Scholar]
- Molnar, Christoph. 2019. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Lean Publishing. Available online: https://christophm.github.io/interpretable-ml-book/ (accessed on 29 April 2022).
- Moyes, Richard. 2016. Autonomous weapon systems and the alleged responsibility gap. In Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons. Versoix: ICRC, pp. 46–52. [Google Scholar]
- North Atlantic Treaty Organisation. 2016. Allied Joint Doctrine for Joint Targeting, Edition A Version 1 (April 2016). Technical Report AJP-3.9. Brussels: NATO. [Google Scholar]
- North Atlantic Treaty Organisation. 2019. Allied Joint Doctrine for the Conduct of Operations. In Technical Report NATO Standard, AJP-3. Edition C Version 1. Brussels: NATO. [Google Scholar]
- Office of the Chairman of the Joint Chiefs of Staff. 2020. DOD Dictionary of Military and Associated Terms, as amended. Washington, DC: The Joint Staff. [Google Scholar]
- Parasuraman, R., T. B. Sheridan, and C. D. Wickens. 2000. A Model for Types and Levels of Human Interaction with Automation. IEEE Transactions on Systems, Man, and Cybernetics 30: 286–97. [Google Scholar] [CrossRef] [Green Version]
- Puckett, Christopher B. 2004. In This Era of Smart Weapons, Is a State under an International Legal Obligation to Use Precision-Guided Technology in Armed Conflict. Emory International Law Review 18: 645–724. [Google Scholar]
- Righetti, Ludovic. 2016. Emerging technology and future autonomous weapons. In Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons. Versoix: ICRC, pp. 36–39. [Google Scholar]
- Roff, Heather M. 2016. Meaningful Human Control or Appropriate Human Judgment? The Necessary Limits on Autonomous Weapons. Paper presented at Technical Report Briefing Paper for the Delegates at the Review Conference on the Convention on Certain Conventional Weapons, Geneva, Switzerland, December 12–16. [Google Scholar]
- Roff, Heather M., and Richard Moyes. 2016. Meaningful Human Control, Artificial Intelligence and Autonomous Weapons. Technical Report Briefing Paper for the Delegates at the Convention on Certain Conventional Weapons Informal Meeting of Experts on Lethal Autonomous Weapons Systems. London: Article 36. [Google Scholar]
- Rome Statute. 1998. Rome Statute of the International Criminal Court, 2187 UNTS 90. The Hague: International Criminal Court. [Google Scholar]
- Roorda, Mark. 2015. NATO’s Targeting Process: Ensuring Human Control Over (and Lawful Use of) ‘Autonomous’ Weapons’. In Autonomous Systems: Issues for Defence Policymakers. Edited by A. P. Williams and P. D. Scharre. The Hague: NATO, pp. 152–68. [Google Scholar]
- Santoni de Sio, Filippo, and Jeroen van den Hoven. 2018. Meaningful Human Control over Autonomous Systems: A Philosophical Account. Frontiers in Robotics and AI 5: 1–14. [Google Scholar] [CrossRef] [Green Version]
- Sassoli, Marco. 2014. Autonomous Weapons and International Humanitarian Law: Advantages, Open Technical Questions and Legal Issues to be Clarified. International Law Studies 90: 308–40. [Google Scholar]
- Scharre, Paul, and Michael C. Horowitz. 2015. An Introduction to Autonomy in Weapon Systems. Technical Report Center for a New American Security, Working Paper, Feb. 2015. Washington, DC: Center for a New American Security. [Google Scholar]
- Scharre, Paul D. 2014. Autonomy, “Killer Robots,” and Human Control in the Use of Force. Available online: https://justsecurity.org/12708/autonomy-killer-robots-human-control-force-part and justsecurity.org/12712/autonomy-killer-robots-human-control-force-part-ii (accessed on 10 June 2021).
- Schmitt, Michael N. 2015. Regulating Autonomous Weapons Might be Smarter Than Banning Them. Available online: https://www.justsecurity.org/25333/regulating-autonomous-weapons-smarter-banning (accessed on 5 November 2017).
- Schmitt, Michael N., and Jeffrey S. Thurnher. 2013. “Out of the Loop”: Autonomous Weapon Systems and the Law of Armed Conflict. Harvard Law School National Security Journal 4: 231–81. [Google Scholar]
- Schuller, Alan. 2017. At the Crossroads of Control: The Intersection of Artificial Intelligence in Autonomous Weapon Systems with International Humanitarian Law. Harvard National Security Journal 8: 379. [Google Scholar]
- Simon-Michel, Jean-Hugues. 2014. Report of the 2014 Informal Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS). Technical Report. Geneva: UN Office. [Google Scholar]
- Solis, Gary D. 2010. The Law of Armed Conflict. Cambridge: Cambridge University Press. [Google Scholar]
- Sparrow, Robert. 2007. Killer Robots. Journal of Applied Philosophy 24: 62–77. [Google Scholar] [CrossRef]
- Sparrow, Robert. 2016. Robots and Respect: Assessing the Case Against Autonomous Weapon Systems. Ethics & International Affairs 30: 93–116. [Google Scholar]
- State of Israel. 2019. Statement by Mr. Asaf Segev, Arms Control Department, Ministry of Foreign Affairs, Israel. Paper presented at Technical Report Meeting of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Geneva, Switzerland, March 26. [Google Scholar]
- Stürchler, Nikolas, and Michael Siegrist. 2017. A “Compliance-Based” Approach to Autonomous Weapon Systems. Available online: https://www.ejiltalk.org/a-compliance-based-approach-to-autonomous-weapon-systems/ (accessed on 7 July 2021).
- Szpak, Agnieszka. 2020. Legality of Use and Challenges of New Technologies in Warfare – the Use of Autonomous Weapons in Contemporary or Future Wars. European Review 28: 118–31. [Google Scholar] [CrossRef]
- Thurnher, Jeffrey S. 2012. No One at the Controls: Legal Implications of Fully Autonomous Targeting. Joint Force Quarterly 67: 77–84. [Google Scholar]
- Thurnher, Jeffrey S. 2014. Examining Autonomous Weapon Systems from a Law of Armed Conflict Perspective. In New Technologies and the Law of Armed Conflict. Edited by H. Nasu and R. McLaughlin. The Hague: T.M.C. Asser Press, pp. 213–28. [Google Scholar]
- UNIDIR. 2014. The Weaponization of Increasingly Autonomous Technologies: Considering How Meaningful Human Control Might Move the Discussion Forward. Technical Report UNIDIR Resources, No. 2. Geneva: UNIDIR. [Google Scholar]
- UNIDIR. 2016. Safety, Unintentional Risk and Accidents in the Weaponization of Increasingly Autonomous Technologies. Technical Report UNIDIR Resources No. 5. Geneva: UNIDIR. [Google Scholar]
- United Kingdom. 2020. Expert Paper: The Human Role in Autonomous Warfare, Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons System Geneva, 21–25 September 2020 and 2–6 November 2020, Agenda Item 5. Technical Report CCW/GGE.1/2020/WP.6. Geneva: GGE LAWS. [Google Scholar]
- United States of America. 2017. Intervention on Appropriate Levels of Human Judgment over the Use of Force delivered by John Cherry. Paper presented at Technical Report Convention on Certain Conventional Weapons (CCW), Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), Geneva, Switzerland, November 15. [Google Scholar]
- van den Boogaard, Jeroen C., and Mark P. Roorda. 2021. ‘Autonomous’ Weapons and Human Control. In Military Operations and the Notion of Control Under International Law. Edited by R. Bartels, J. C. van den Boogaard, P. A. L. Ducheine, E. Pouw and J. Voetelink. Berlin: Springer, pp. 421–39. [Google Scholar]
- Winikoff, Michael, Lin Padgham, and James Harland. 2001. Simplifying the Development of Intelligent Agents. In AI 2001: Advances in Artificial Intelligence, Adelaide, Australia, December 10–14. Berlin/Heidelberg: Springer. [Google Scholar]
1. For summaries of the CCW discussions, see (Group of Governmental Experts 2019; Marauhn 2018; Meier 2017).
2.
3. Another feasible explanation, posited by Chengeta (2016), is that there are irreconcilable normative views between stakeholders and that, as a result, they (either deliberately or not) will indefinitely postpone agreeing on the criteria for MHC.
4. Eklund’s elements can be summarised as follows:
A seventh point, ‘Ethical considerations and the principle of human dignity’, is also given as a key element, but it is formulated more as a legal/moral basis for MHC than as an expression of MHC itself. As such, it is not included in the list presented above.
5. As not all sources use the complete term ‘meaningful human control’, the terms ‘human control’ or ‘control’ were deemed sufficient for inclusion, as long as it was clear that the paper/statement was written in the context of the MHC debate or that the author intended it to be read in that context.
6. In information technology, a percept refers to any input from the environment sensed by the agent, which it subsequently uses to make decisions (Winikoff et al. 2001).
7. Another actor is featured in the graph, however: the Operator. Their role is tied to the Context Control facet and limited to supervision and intervention. This is explained in more detail in Section 5.3. Under this framework, the primary cognitive burden, and thus responsibility for the deployment decision, remains with the Deployer.
8. This does not detract from the fact that these sources convincingly argue that human judgment is also exercised, often in decisive ways, prior to the targeting process.
9. This is often referred to as ‘narrow AI’ (Scharre 2014).
10. For example, if an AWS is not designed to make classifications between civilian persons and combatants, it is necessary for the Deployer to know if civilians will be present in the field (Geiß and Lahmann 2017).
11. This refers to a human always being able to veto a system’s decisions (Scharre and Horowitz 2015).
12. Of course, the choice could always be made to simply assign accountability to a person—e.g., the Deployer—even in the absence of any knowledge or intent, for example through strict liability. However, many have justifiably argued that this would be fundamentally unfair and contrary to common principles of law (Brazil 2019).
13. This is generally true for both legal and moral accountability, even though the exact scope of these elements may vary. See e.g., (Additional Protocol I 1977) (IHL); (Rome Statute 1998) (criminal law); (Marauhn 2018; Sparrow 2007) (moral/philosophical).
14. There generally is no problem of accountability for systems which apply [], as the Operator (the system pilot or supervisor) would be responsible for (and directly cause) the system’s final decision.
| Requirement | (A) Article 36 (Moyes 2016) | (B) ICRC (2016b) |
|---|---|---|
| Requirement 1 | Predictable, reliable and transparent technology | Predictability of the weapon system |
| Requirement 2 | Accurate information on the outcome sought, the technology, and the context of use | Reliability of the weapon system |
| Requirement 3 | Timely human judgment and action | Human intervention during development, deployment and use |
| Requirement 4 | The potential for timely intervention | Knowledge about the functioning of the weapon system and the context of use |
| Requirement 5 | A framework of accountability | Accountability for its use |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).