Meaningful Human Agency in Automated Weapon Systems: A Plea for Human-in-the-Loop Regulation

Abstract

This essay will critically assess how automated human-in-the-loop weapon systems affect human agency and which legal regulatory framework should be applied to such systems. Against this backdrop, I will first draw on the concept of human agency and its importance for international humanitarian law; next, I will show how human agency is affected by algorithmic (targeting) assistants; and, finally, I will conclude with a plea for comprehensive design- and use-based regulation of automated weapon systems.

I. Introduction

“Code is Law”1 is a famous quote by the American legal scholar Lessig, according to whom ‘architecture’, i.e. the code-design of a given online environment, is the predetermining factor regulating human behaviour in cyberspace. His idea seems particularly alarming in light of the rapid development of algorithmic decision-making assistants, whose architecture (code-design) and underlying biases may manifestly influence our behaviour.

One area susceptible to significant use of Artificial Intelligence (AI) is the military sector.2 Incontestably, the most critical employment of AI technology in this sector remains the use of Lethal Autonomous Weapon Systems (LAWS). Much has been discussed about this topic in the framework of the Convention on Certain Conventional Weapons (CCW) since 2014; however, no binding regulation has been achieved. Notwithstanding those discussions, the use of algorithmic decision-making assistants in the targeting process of automated weapon systems has seldom been examined, even though their use may have just as significant an impact on human agency, i.e. the capability of human agents to take their own decisions. In particular, if we take Lessig’s “Code is Law” idea into account, the very design of the automated system could crucially influence the human operator’s behaviour in one way or the other. As targeting is the most “critical function”3 of a weapon system, the focus of this paper lies solely on this most consequential topic.4

‘Algorithmic targeting assistant’ is a generic term used for different types of AI systems that can be used on the battlefield.5 Previous debates about automated weapon systems largely focused on distinguishing between human-in-the-loop systems, which require a human operator to engage in targeting, and human-out-of-the-loop systems, which do not presuppose a human operator to act, in order to differentiate lawful from unlawful uses. However, this binary view neglects that automated in-the-loop systems highly affect human agency, which can be diminished to the point where genuine human targeting decisions are no longer possible.

This essay will examine whether the CCW focus on autonomous human-out-of-the-loop systems in the military is sufficient and which role automated human-in-the-loop systems should play in a regulatory framework. Against this backdrop, I will first draw on the concept of human agency and its importance for international humanitarian law (Part II); next, I will show how it is affected by algorithmic (targeting) assistants (Part III); and, finally, I will conclude with a plea for comprehensive regulation of automated weapon systems (Part IV).

II. Human Agency and International Humanitarian Law

Human agency has occupied research across disciplines for decades. In 1998, the two sociologists Emirbayer and Mische defined the concept of human agency as a “temporally constructed engagement by actors of different structural environments […] which, through the interplay of habit, imagination, and judgment, both reproduces and transforms those structures in interactive response to the problems posed by changing historical situations.”6 For our purposes, it is crucial to note that human agency thus largely depends on “habit, imagination, and judgment,” reproducing and transforming structures in response to changing problems over time in non-deterministic processes.

Since this paper’s objects of discussion are automated weapon systems, the normative question surrounding the role of human agency should mainly focus on International Humanitarian Law (IHL), the body of law which establishes the rules of armed conflict. The title of an International Committee of the Red Cross (ICRC) position paper from 2019 – “Artificial intelligence and machine learning in armed conflict: A human-centred approach” – is already indicative of how crucial human agency is for compliance with IHL. It clearly states that “it is humans that comply with and implement the law, and it is humans who will be held accountable for violations.”7

Notwithstanding this position by the ICRC, the necessity of human agency is disputed by scholars in the context of accountability when enforcing the rules governing armed conflict.8 Yet, all agree that the possibility of attributing accountability is the prerequisite for any new weapon’s compliance with international humanitarian law,9 as required by Art 36 Additional Protocol I to the Geneva Conventions. The trouble with using machine learning-based technology while guaranteeing accountability is the so-called ‘black box’,10 which makes it impossible for the human actor to identify why the system came to any given outcome. Thus, any introduction of nonhuman agency on the battlefield raises significant legal accountability challenges when measured against the ICRC’s view.

Furthermore, weapon systems need to be able to comply with the rules of IHL, including the principle of distinction, the discrimination rule, and the precautionary principle.11 Some take the position that in order to comply with those rules, “there must be some human being, even if he is geographically removed from the target, who obtains information in real time and decides whether or not the target is legitimate.”12 Hence, human agency over weapon systems can be seen as a tenet of IHL.13

III. Algorithmic Targeting Assistants in the Military

This section questions the widely accepted assumption that the requirement of human agency in IHL is fulfilled with a human-in-the-loop,14 and focuses on algorithmic targeting assistants to assess their impact on human agency.

A. Algorithmic Assistants affecting Human Agency

As a reminder, algorithmic assistants, as opposed to autonomous systems, formally presuppose a human-in-the-loop: the human operator ‘controls’ the system at hand and takes the final decision based on an algorithmic recommendation.

The use of algorithmic assistants has apparent advantages: “they offer speed, lower transaction costs and efficiency in decision-making, thereby enabling the user to enjoy lower cost and higher quality products.”15 Furthermore, they may allow the user “to make more sophisticated choices”:16 an algorithm programmed to avoid human biases could, based on a predetermined set of rules, potentially know even better than we do what is good for us. However, it is important to highlight that this line of thought misses that, instead of eradicating error in decision-making, algorithmic assistants may reproduce errors of their own.17 Furthermore, the skills commonly assigned to algorithmic assistants influence users’ own capacity to act as human agents capable of making their own choices, on which we shall focus.

Yeung points out two major problems associated with algorithmic decision-making that cannot be overcome by the mere formal authority of the human operator in-the-loop.18 The first is the so-called “automation bias,” namely the human tendency to trust computational judgments even if logical reasoning would result in disobeying the system.19 Subsequent experiments have shown that humans indeed display such an “algorithm appreciation”, preferring algorithmic over human judgment.20 The second problem identified by Yeung elaborates on Lessig’s contribution about the significance of system architecture in controlling our behaviour. According to her, algorithmic recommender systems powerfully and subtly shape perceptions as well as behaviour, amounting to a so-called ‘hypernudge’ which undermines human judgment.21 Yeung’s idea of hypernudges builds upon the ‘nudging’ concept of Thaler and Sunstein,22 the idea of constructing a particular choice architecture with the intent of altering people’s behaviour. According to her, Big Data-driven algorithmic decision-guidance techniques constitute extreme forms of nudging, so-called ‘hypernudges’, that “are extremely powerful and potent due to their networked, continuously updated, dynamic and pervasive nature.”23 In addition to those two findings, one can add the problem of the so-called “paradox of choice”24 (or choice overload): overturning an algorithmic assistant’s recommendation itself entails an act of choice, which may be subject to decision-making paralysis, anxiety, and perpetual stress.25 Thus, the overabundance of options created by an algorithmic assistant might negatively impact its quick use, which is supposed to be one of its most significant advantages.

It is useful to link these challenges posed by algorithmic assistants back to our definition of human agency, which treats habit, imagination, and judgment as the criteria crucial for determining the degree of human agency. Automation bias affects our imagination, making us trust algorithms more than our own intuitions and thus eroding our own way of thinking. Likewise, hypernudges affect our judgment by pushing us towards a certain desired outcome. Automation bias and hypernudges, as well as the “paradox of choice” effect, might also change our habits as we develop a new human-technology relationship. Hence, we can conclude that habit, imagination, and judgment are indeed highly affected by algorithmic assistants.26

B. Impact of Algorithmic Targeting Assistants on Human Agency

One should disentangle the different degrees of autonomy that can be encoded in algorithmic targeting assistants, since they impact human agency with differing intensity.

Amoroso et al. developed a helpful classification of human supervisory control of weapons, distinguishing between five levels of autonomy, ranging from full human autonomy (1) to full machine autonomy (5).27 Human-out-of-the-loop systems for targeting purposes of human beings are largely considered illegal because an accountability gap would occur.28 Systems operating under Level 4 (“software selects target and human has restricted time to veto”) and Level 5 (“software selects target and initiates attack without human involvement”) would fall under this illegal and autonomous use, leaving Levels 1–3 for the discussion surrounding the human-in-the-loop.
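For ease of reference, the taxonomy can be rendered as a simple enumeration. The following is an illustrative sketch only: the identifier names are my own shorthand for the level descriptions quoted in this section, not part of Amoroso et al.'s report or of any real system.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative encoding of the five levels of human supervisory
    control discussed above (identifier names are this sketch's own)."""
    L1_HUMAN_DELIBERATES = 1  # human deliberates about a target before any attack
    L2_HUMAN_CHOOSES = 2      # software provides a list of targets; human chooses which to attack
    L3_HUMAN_APPROVES = 3     # software selects target; human must approve before the attack
    L4_HUMAN_VETO = 4         # software selects target; human has restricted time to veto
    L5_NO_HUMAN = 5           # software selects target and initiates attack without human involvement

# Per the accountability argument above, Levels 4 and 5 amount to
# out-of-the-loop targeting and are largely considered illegal.
OUT_OF_THE_LOOP = {AutonomyLevel.L4_HUMAN_VETO, AutonomyLevel.L5_NO_HUMAN}
```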

Systems operating under Level 3 (“software selects the target and a human must approve it before the attack”) are highly susceptible to automation bias, the tendency to trust computational judgment more than one’s own intuition. Carr’s automation-bias thesis has been borne out empirically in the context of automated weapon systems: in the case of incorrect algorithmic recommendations, operators at Level 3 relied significantly more often on those recommendations than operators working with Level 2 systems.29 Furthermore, a second empirical study, by Hawley, is of interest. He studied the case of the US Army’s Patriot air defense system, which has two operating modes: semi-automatic and automatic. The latter is an out-of-the-loop system under Level 4, which is outside the scope of human agency. The former is still an operator-in-the-loop system in which the human must authorise engagement, characterising a Level 3 system. Hawley shows that despite the implementation of both Level 4 (out-of-the-loop) and Level 3 (in-the-loop) options of human supervision, both generate essentially the same results: the human follows the computer.30 Hence, algorithmic assistants at Level 3 leave the human-in-the-loop with only “formal” decision-making power. Wagner calls this a process of “quasi-automation,” the “inclusion of humans as a basic rubber-stamping mechanism in an otherwise completely automated decision-making system.”31 Consequently, Level 3 systems cannot be considered sufficient for making a rational choice and leaving room for meaningful human agency.

Let us now turn to the less influential Level 2 systems (“software provides a list of targets and a human chooses which to attack”), which simplify data by offering the soldier a set of target options. If we recall the three criteria on which human agency is based (habit, imagination, and judgment), it seems unquestionable that algorithmic pre-selection of targets affects all three areas. Habit will simply change based on the use of the new weapon; this is, however, not idiosyncratic to automated weapon systems, because every new weapon leads to some change of habit. Imagination might be limited due to over-confidence in the system or automation bias,32 making the soldier potentially less capable of using the weapon in case of a breakdown of the algorithmic assistant. Judgment could be affected depending on how the pre-selection of targets is presented to the soldier: if the pre-selection amounts to a ‘hypernudge’33 – only theoretically leaving the soldier room for a decision – it seems overly optimistic to believe the human agent could make a rational judgment. Using, for instance, colour-coded threat levels (red, yellow, or green) for enemies34 could amount to such a hypernudge. It follows that habit, imagination, and judgment could be substantially affected by using algorithmic assistants under Level 2, threatening meaningful human agency in the use of weapons. However, it does not follow that any use of AI in the selection of targets is unable to safeguard human agency. The way in which the pre-selection of targets is employed seems crucial in determining whether human judgment is still given.

Gal’s model is valuable in this context for identifying four different ways of employing AI in the decision-making process: stated preferences algorithms (1), menu of preferences algorithms (2), predicted preferences algorithms (3) and paternalistic algorithms (4).35 The degree of choice is highest in the first category and decreases with the additional decision-making power given to the algorithm in categories (2) to (4).

In the context of a potentially lethal targeting process by an automated weapon system, it cannot be acceptable to use predicted preferences (3) or paternalistic algorithms (4), because such algorithms merely assume what is best for the operator and any such decision could therefore deviate from IHL requirements, such as the principles of distinction or proportionality. The problem with stated preferences algorithms (1) is that developing such a design for the battlefield context is exceptionally complicated and would probably not bring a considerable military advantage, given that the user’s and the algorithm’s choices “completely overlap”36 at this stage. It seems more feasible and desirable to design algorithms as menus of preferences (2), helping to distinguish combatants from civilians, a crucial requirement in IHL.37 Gal correctly emphasised that “while potentially limiting choice, these menus might make the decision easier for the user.”38 Such algorithms are not capable of identifying whether a combatant poses an immediate threat to the soldier. However, they can help the human operator in the identification of combatants, still leaving the soldier to decide whether the elimination of the targeted person is necessary and proportionate – two general principles of international law.39
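To illustrate what such a design constraint could look like in software, consider the following minimal sketch of a Level 2 “menu of preferences” assistant in the sense of Gal’s category (2). All names and data structures here are hypothetical illustrations, not any existing system’s interface: the software only narrows the set of options, while the engagement decision, including the option to abstain, remains with the human operator.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    """A hypothetical detected object as classified by the assistant."""
    identifier: str
    classification: str  # e.g. "combatant" or "civilian" (principle of distinction)
    confidence: float    # the model's confidence in that classification

def menu_of_preferences(detections: list[Candidate]) -> list[Candidate]:
    """Gal's category (2): the software merely narrows the menu of options,
    here by filtering out objects it classifies as civilian; it takes no
    engagement decision itself."""
    return [d for d in detections if d.classification == "combatant"]

def engage(menu: list[Candidate], operator_choice: Optional[str]) -> Optional[Candidate]:
    """Necessity and proportionality remain human judgments: nothing is
    engaged unless the operator explicitly selects a target, and declining
    to attack (None) is always available."""
    if operator_choice is None:
        return None  # the operator may always abstain
    return next((c for c in menu if c.identifier == operator_choice), None)
```

The design choice to make abstention the default return value reflects the argument above: the assistant supports identification, but the lethal decision is never taken in the operator’s absence.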

In addition to the design of algorithmic targeting assistants operating at Level 2, other, non-design-based factors need to be included in the evaluation of the impact on human agency. Hawley mentions in his findings on semi-automatic systems that the knowledge and training of the crew employing the system at hand play a vital role.40 This opens the floor to questions of human-machine interaction, or the requirements for a “dance of agency.”41 A recently published study analyses the role of trust in human-machine teaming:42 it finds that relatively little research has been conducted on the topic of trust, and that “[h]uman trust in technology is an attitude shaped by a confluence of rational and emotional factors, demographic attributes and personality traits, past experiences, and the situation at hand”.43 It can thus be concluded that reasonable use of Level 2 systems depends not only on control by design but also on control in use.

Finally, Level 1 systems (“human deliberates about a target before initiating any and every attack”) remain to be briefly analysed. Amoroso et al. themselves speak of an “ideal of adhering to the strict requirements specified in Level 1 whenever possible”.44 Level 1 systems can thus be seen as the gold standard of human agency in targeting processes. Nevertheless, such systems might not create the speed and efficiency advantages contractors wish for. Hence, systems under Level 2 should not necessarily be ruled out but rather regulated by design and in use.

In conclusion, this section provides us with three minimum requirements for the use of algorithmic assistants in automated weapon systems. First, “the software provides a list of targets and the human chooses which to attack” (Level 2 degree of autonomy). Second, the algorithm works as a “predefined menu of preferences” designed to assist the user in the identification process without making an actual decision. Third, human-machine interaction, as well as sufficient knowledge and trust in the system, need to be improved to enable users to spot mistakes and use such systems more effectively.

IV. Regulatory Framework for Meaningful Human Agency

General regulatory theory in the context of AI and machine learning suggests that, for technologies “which pose systemic risks or risks of ‘deep regret’, such as to life, a prior licensing, or approval stage is usually required”;45 this should undoubtedly apply to potentially lethal algorithmic targeting assistants as well.

A. Regulatory status-quo

Internationally, the issue of LAWS has primarily been discussed in the CCW framework since 2014.46 High Contracting Parties and other stakeholders convene annually to debate its legal, ethical, technological, and military facets. Notably, not even the scope of the regulated object is entirely clear, i.e., what ‘autonomous’ weapon systems are and whether semi-autonomous or automated systems are included.47 The ICRC used the early working definition of “any weapon system that has autonomy in the critical functions of selecting and attacking target[s],”48 which would not include further human-in-the-loop regulation.

In 2017, the formal Group of Governmental Experts (GGE) was established by the CCW Member States and tasked with drafting conclusions and recommendations for regulating LAWS. So far, the most relevant document is its set of guiding principles from 2019,49 subsequently endorsed by the CCW High Contracting Parties.50 The recommendation most relevant to human agency is the following:

“Human responsibility for decisions on the use of weapons systems must be retained since accountability cannot be transferred to machines. This should be considered across the entire life cycle of the weapons system.”51

It becomes clear, however, that these are only political principles which provide no precise guidance for the development of such weapon systems.

The GGE convened most recently between 3 and 13 August 2021, but it will publish its next report only after its second meeting, taking place in October 2021. This report will then be discussed at the Sixth Review Conference of the CCW in December 2021. In the meantime, it remains to be seen whether Member States will converge in their views on the topic of autonomous and automated weapon systems. Currently, the main divide is between those who wish to ban LAWS entirely and those who wish to regulate their use. The general tendency seems to be shifting from a negative obligation to prohibit LAWS entirely towards a positive obligation of “Meaningful Human Control” (MHC),52 which could – depending on its operationalisation – also include algorithmic targeting assistants. This concept will be discussed further in the next part.

B. Proposed Regulatory Framework of Meaningful Human Agency

In light of the weak regulation of AI-based targeting assistants, it is pertinent to set out a normatively desired framework, putting human agency at the heart of any regulation.

Considering the potential military and humanitarian benefits, this essay does not advocate for a blanket prohibition of targeting assistants.53 Instead, it attempts to specify requirements for ‘human-centred AI’ that ought to be put in place to reduce the harmful effects of algorithmic targeting assistants on human agency and intent – particularly the lack of accountability caused by hypernudges. This essay thus follows in general terms the positive obligation approach of MHC, advocated for in the CCW discussions by Belgium, Brazil, Germany, Ireland, Luxemburg, Mexico, New Zealand and Austria.54 However, it needs to be noted that “Meaningful Human Control” is not identical to meaningful human agency. The latter is the precondition for the former: only if the human agent is capable of making a meaningful choice can he or she exercise control over a given situation. Hence, without agency, MHC would amount to an empty “rubber-stamping” exercise.55

Roff and Moyes first developed requirements for MHC in 2016, again on a general, abstract basis, requiring control at the tactical, operational, and strategic levels.56 This model was further elaborated by the International Panel on the Regulation of Autonomous Weapons (iPRAW), requiring control in use (tactical level) and control by design (operational level) of automated weapon systems. This paper builds upon those models and takes both control by design (IV.B.i.) and control in use (IV.B.ii.) into account.

i. Control by Design

Control by design is a necessary element for safeguarding human agency in the use of algorithmic targeting assistants, in accordance with Lessig’s ‘Code is Law’ mantra. Specifically, the design of algorithms should restrict their nudging power over the human operator.

iPRAW points out that control by design should be constructed in a way “that allows human commanders the ability to monitor information about environment and system”.57 This understanding is in line with the Level 2 assistants discussed in Part III.B, where “the software provides a list of targets and the human chooses which to attack.” In addition, this essay has shown that there needs to be a “predefined menu of preferences” (see Part III for details), designed to assist the user in the identification process without making an actual decision. Doing so is essential for accountability purposes, leaving the human operator as the agent making circumstantial decisions based on his or her own intent. This idea ought to be implemented across the entire targeting cycle: this includes finding, fixing, tracking, selecting, and engaging the target, as well as assessing the effects afterwards.58 A recent quantitative study found that “selecting and targeting” are the two functions most often referred to in the literature as requiring regulation.59 Nevertheless, it is still important to take the entire targeting cycle into account,60 which needs to be constructed as a predefined menu of preferences. This is imperative in order to provide the human operator with transparent information before the actual targeting process, so that a reasonable decision can be made without manipulation in the previous stages.
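The following sketch illustrates, under stated assumptions, how such a design requirement could be expressed across the targeting cycle: every step only presents a menu of options, the cycle halts without an explicit human decision, and both options and decisions are written to an audit log for accountability purposes. The step names follow the cycle cited above; everything else (function names, data shapes) is hypothetical.

```python
import logging
from datetime import datetime, timezone

# The six steps of the targeting cycle referred to above.
TARGETING_CYCLE = ["find", "fix", "track", "select", "engage", "assess"]

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("targeting-audit")

def run_cycle(assistant_menus: dict[str, list[str]],
              operator_decisions: dict[str, str]) -> None:
    """At every step the assistant may only present options (a menu of
    preferences); the cycle halts unless the human operator supplies an
    explicit decision for that step. Options and decisions are logged with
    timestamps so accountability can later be traced to a human choice."""
    for step in TARGETING_CYCLE:
        options = assistant_menus.get(step, [])
        stamp = datetime.now(timezone.utc).isoformat()
        log.info("%s | step=%s | options presented: %s", stamp, step, options)
        decision = operator_decisions.get(step)
        if decision is None:
            log.info("no human decision at step '%s'; cycle halted", step)
            return  # design default is inaction, never autonomous progression
        log.info("human decision at step '%s': %s", step, decision)
```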

ii. Control in Use

The ICRC argues that control by design will not suffice to ensure a human-centred use of AI on the battlefield; a “proper consideration of human-machine interaction issues”61 is also necessary.

In a paper on Human-AI Cooperation published by NATO, the authors invoke the notion of “intelligent team players,” which can substantially boost human-machine performance in military decision-making.62 The paper mainly addresses the importance of mutual trust, for which the authors put forward three criteria. The partners must: be mutually predictable in their actions, be mutually directable, and maintain common ground.63 What complicates achieving this outcome is the way in which artificial neural networks work: how the algorithm arrives at a given outcome is concealed in a ‘black box’, obstructing the development of trust. Such systems occupy the lowest step of human-AI collaboration, functioning only unidirectionally: from the AI to humans.64 The field of explainable AI is an attempt to make AI technologies more transparent in order to increase mutual cooperation. For van den Bosch and Bronkhorst, this is, however, only the second step towards a bi-directional level of cooperation. They advocate for a third, collaborative step in which both humans and technology “are full-fledged adaptive team members, being aware of each other’s perspective and states.”65 Taking this thinking into account, algorithmic targeting assistants still have a long way to go before reaching step three in cooperation with their human operators. Until we get there, soldiers also need to be educated on how AI systems work, where their biases lie and what their limits are. In the meantime, AI for targeting purposes should not be prematurely employed, with potentially devastating effects not only on lethal operations but also, more broadly, on human-technology trust, with long-term repercussions.

V. Conclusion

This essay started with Lessig’s quote “Code is Law” and shall end on the same note. It has attempted to show that his idea – originally applied to cyberspace – can also be transferred to the use of automated weapon systems in the military. Previous debates about automated weapon systems largely focused on distinguishing between human-in-the-loop systems, which require a human operator to engage in targeting, and human-out-of-the-loop systems, which do not presuppose a human operator to act, in order to differentiate lawful from unlawful uses. However, this binary view neglects that automated in-the-loop systems highly affect human agency, as this essay has shown. Hence, regulatory frameworks also need to take human-in-the-loop regulation throughout the entire targeting process into account. In addition, a second necessary condition for safeguarding human agency is control in use: developing human-machine trust in order to improve the habit, imagination and judgment of the operator in compliance with IHL. Thus, soldiers need to be taught to understand the grounds on which algorithmic decisions are taken, how to deal with pre-selection, and how to spot errors, in order to make well-founded decisions as human agents. The GGE in the CCW framework will play a crucial role in achieving such responsible regulation of automated weapon systems. It seems unlikely that a pre-emptive ban on LAWS will be achieved. However, a shift towards a positive obligation of MHC might be achieved, which should be based on meaningful human agency and include human-in-the-loop regulation.

I would like to thank Professor Andrew Murray for inspiring discussions about this essay’s topic and his advice.


[1] Lawrence Lessig, Code: Version 2.0 (Second edn, Basic Books 2007), 110.

[2] For an overview of the various ways AI could be used from a European perspective see: Ulrike Franke, ‘Not Smart Enough: The Poverty of European Military Thinking on Artificial Intelligence’ (2019) 311 European Council on Foreign Relations Policy Brief.

[3] The International Committee of the Red Cross (ICRC), ‘Towards limits on autonomy in weapon systems’ (ICRC, 9 April 2018) https://www.icrc.org/en/document/towards-limits-autonomous-weapons accessed 13 April 2021.

[4] It should be noted, however, that AI can be used for various, less contested purposes in the military, such as tactical algorithms to identify locations for military operations. See for example: Ashley S Deeks, ‘Predicting Enemies’ (2018) 104(8) Virginia Law Review 1529, 1557 et seqq.

[5] The norm is of customary standard, applying to international and non-international armed conflict: ICRC, Customary International Humanitarian Law Database, ‘Rule 1. The Principle of Distinction between Civilians and Combatants’ <https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_cha_chapter1_rule1> accessed 13 April 2021.

[6] Mustafa Emirbayer and Ann Mische, ‘What is Agency?’ (1998) 103 American Journal of Sociology 962, 970.

[7] ICRC, ‘Artificial Intelligence and Machine Learning in Armed Conflict: A Human-Centred Approach’ (ICRC, 2019), 7.

[8] Most authors believe that human agency is pivotal to prevent any gaps in the law, see for example: Thompson Chengeta, ‘Accountability Gap: Autonomous Weapon Systems and Modes of Responsibility in International Law’ (2016) 45(1) Denver Journal of International Law and Policy 1. Some reckon that less human agency does not hinder sufficient compliance with the law: Charles J Dunlap Jr, ‘Accountability and Autonomous Weapons: Much Ado about Nothing’ (2016) 30 Temple International and Comparative Law Journal 63. Others acknowledge that the question has not fully been answered yet: Eric T J Jensen, ‘The (Erroneous) Requirement for Human Judgment (and Error) in the Law of Armed Conflict’ (2020) 96 International Law Studies 26, 37-48.

[9] ibid.

[10] See for example: Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press 2015).

[11] To read more: William H Boothby, The Law of Targeting (First edn, Oxford University Press 2012), Part IV Weapons and Technologies.

[12] Judith Miller, ‘Comments on the Use of Force in Afghanistan’ (2002) 35(3) Cornell International Law Journal 605, 609.

[13] International Panel on the Regulation of Autonomous Weapons (iPRAW), ‘Focus on Human Control’ (2019) 5 “Focus on” Report, 5.

[14] See for example: Boothby (n 11), 282–287. He does not go further and demands requirements to the human-in-the-loop control of autonomous (in this case of unmanned aerial vehicles) systems.

[15] Michal S Gal, ‘Algorithmic Challenges to Autonomous Choice’ (2018) 25 Michigan Technology Law Review 59, 61.

[16] ibid.

[17] Linda J Skitka, Kathleen L Mosier and Mark Burdick, ‘Does Automation Bias Decision-Making?’ (1999) 51 International Journal of Human-Computer Studies 991.

[18] Karen Yeung, ‘Algorithmic Regulation: A Critical Interrogation’ (2018) 12 Regulation & Governance 505, 516.

[19] Nicholas Carr, The Glass Cage: Where Automation is Taking Us (The Bodley Head 2015).

[20] Jennifer M Logg, Julia A Minson and Don A Moore, ‘Algorithm Appreciation: People Prefer Algorithmic to Human Judgment’ (2019) 151 Organizational Behavior and Human Decision Processes 90.

[21] Karen Yeung, ‘Algorithmic Regulation’ (n 18) 516, building upon her previous paper: Karen Yeung, ‘“Hypernudge”: Big Data as a Mode of Regulation by Design’ (2017) 20(1) Information, Communication & Society 118.

[22] Richard H Thaler and Cass R Sunstein, Nudge: Improving Decisions about Health, Wealth, and Happiness (Yale University Press 2008).

[23] Yeung, ‘“Hypernudge”’ (n 21) 118.

[24] Gal (n 15), 63.

[25] Barry Schwartz, The Paradox of Choice: Why More is Less: How the Culture of Abundance Robs Us of Satisfaction (Harper Collins 2005).

[26] See in a similar way: Andrew Murray, Almost Human: Law and Human Agency in the Time of Artificial Intelligence (forthcoming, copy on file with author). He builds his argument upon an argument made by Raz that “choice must be free from coercion and manipulation by others”, which is not given in algorithmic decision-making processes. 

[27] Daniele Amoroso and others, ‘Autonomy in Weapon Systems: The Military Application of Artificial Intelligence as a Litmus Test for Germany’s New Foreign and Security Policy: A Report’ (Series on Democracy Volume 49, Heinrich-Böll-Stiftung 2018) 41 et seqq.

[28] For a comprehensive analysis under international law see: Daniele Amoroso, Autonomous Weapons Systems and International Law: A Study on Human-Machine Interactions in Ethically and Legally Sensitive Domains (Cultura giuridica e scambi internazionali vol 4, Edizioni Scientifiche Italiane; Nomos 2020), 31–120.

[29] M L Cummings, ‘The Need for Command and Control Instant Message Adaptive Interfaces: Lessons Learned from Tactical Tomahawk Human-in-the-Loop Simulations’ (2004) 7(6) CyberPsychology & Behavior 653.

[30] John K Hawley, ‘Patriot Wars: Automation and the Patriot Air and Missile Defense System’ (Center for a New American Security, 25 January 2017) <https://www.cnas.org/publications/reports/patriot-wars> accessed 7 October 2021.

[31] Ben Wagner, ‘Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems’ (2019) 11 Policy & Internet 104, 113.

[32] Logg, Minson and Moore (n 20).

[33] Yeung, ‘“Hypernudge”’ (n 21) 118.

[34] Deeks (n 4) 1560.

[35] Gal (n 15).

[36] ibid 67.

[37] Nils Melzer, ‘The Principle of Distinction Between Civilians and Combatants’ in Andrew Clapham and Paola Gaeta (eds), The Oxford Handbook of International Law in Armed Conflict (vol 1. Oxford University Press 2014).

[38] Gal (n 15) 67.

[39] Melzer (n 37) 328 et seq.

[40] Hawley (n 30) 9.

[41] Andrew Pickering, ‘The Robustness of Science and the Dance of Agency’ in Léna Soler and others (eds), Characterizing the Robustness of Science (Boston Studies in the Philosophy of Science, Springer Netherlands 2012) 317.

[42] Karel van den Bosch and Adelbert Bronkhorst, ‘Human-AI Cooperation to Benefit Military Decision Making’ (NATO Science and Technology Organization, May 2018) <https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-IST-160/MP-IST-160-S3-1.pdf> accessed 7 October 2021.

[43] ibid 3.

[44] Amoroso and others (n 27), 43.

[45] Andrew Murray and Julia Black, ‘Regulating AI and Machine Learning: Setting the Regulatory Agenda’ (2019) 10(3) European Journal of Law and Technology 1, 2.

[46] The purpose of the convention is “to ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately”; it already prohibits Blinding Laser Weapons (Protocol IV, in force 1998) as well as the use of Mines, Booby-Traps and Other Devices (Amended Protocol II, in force 1998).

[47] See for a distinction of those systems: Jean-François Caron, ‘Defining Semi-Autonomous, Automated and Autonomous Weapon Systems in Order to Understand Their Ethical Challenges’ (2020) 1 Digital War 173.

[48] ICRC, Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons (ICRC, 2016), 8.

[49] ‘Report of the 2019 session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems’ Group of Governmental Experts of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (Geneva, 25–29 March 2019 and 20–21 August 2019) (25 September 2019) UN Doc CCW/GGE.1/2019/3.

[50] ‘Meeting of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects’ CCW High Contracting Parties (Geneva, 13–15 November 2019) (13 December 2019) UN Doc CCW/MSP/2019/9, para 31.

[51] Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons System (n 49) 13.

[52] Frank Sauer, ‘Stepping Back from the Brink: Why Multilateral Regulation of Autonomy in Weapons Systems is Difficult, Yet Imperative and Feasible’ (2020) 102(913) International Review of the Red Cross 235.

[53] Gal (n 15) 92 also believes that a full prohibition is not the right attempt, instead, she says it needs “effective control” over the algorithm’s choices.

[54] See for the most recent analysis about the operationalisation of meaningful human control from Germany: Anja Dahlmann, Elisabeth Hoffberger-Pippan and Lydia Wachs, ‘Autonome Waffensysteme und menschliche Kontrolle’ (2021) 31 SWP-Aktuell 1.

[55] Wagner (n 31) makes this point very clear in arguing that such an empty form of meaningful human control would amount to “quasi-automation” and is thus not at all identical to meaningful human agency.

[56] Heather Roff and Richard Moyes, ‘Meaningful Human Control, Artificial Intelligence and Autonomous Weapons’ Briefing paper prepared for the Informal Meeting of Experts on Lethal Autonomous Weapons Systems, UN Convention on Certain Conventional Weapons, April 2016.

[57] iPRAW (n 13) 12.

[58] Anja Dahlmann and Marcel Dickow, ‘Preventive Regulation of Autonomous Weapon Systems’ (SWP Research Paper 3, March 2019).

[59] Thea Riebe, Stefka Schmid and Christian Reuter, ‘Meaningful Human Control of Lethal Autonomous Weapon Systems: The CCW-Debate and Its Implications for VSD’ (2020) 39(4) IEEE Technology and Society Magazine 36, 42.

[60] iPRAW (n 13) 12. iPRAW argues in a similar way for systems that “allow human intervention and require their input in specific steps of the targeting cycle based on their situational understanding”.

[61] ICRC, ‘Artificial Intelligence and Machine Learning in Armed Conflict’ (n 7) 10.

[62] Karel van den Bosch and Adelbert Bronkhorst, ‘Human-AI Cooperation to Benefit Military Decision Making’ (NATO Science and Technology Organization, May 2018) <https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-IST-160/MP-IST-160-S3-1.pdf> accessed 7 October 2021.

[63] ibid 6.

[64] ibid 8.

[65] ibid.

Vanessa Vohs

BA International Relations (TU Dresden) ’20, LLM Public International Law (LSE) ’21, and Public International Law Notes Editor of the LSE Law Review Summer Board 2021
