Often described as the third revolution in warfare, after gunpowder and nuclear arms, lethal autonomous weapon systems (LAWS) pose many challenges and raise many questions for state institutions and civil society alike. The international community has not yet reached a consensus definition of LAWS, largely because there is no agreement on what autonomy actually entails.

The International Committee of the Red Cross has opted for a broad, all-encompassing definition of autonomous weapon systems, described as: “Any weapon system with autonomy in its critical functions. That is, a weapon system that can select (i.e. search for, detect, identify, track, select) and attack (i.e. use force against, neutralize, damage, destroy) targets without human intervention” (Iaria, 2017). The ICRC’s 2014 report on autonomous weapon systems identifies three main categories of such weapons: tele-operated systems (“directly controlled by a remote operator”), semi-autonomous systems (“they can act independently of external control but only according to a predefined set of programmed rules”) and genuinely autonomous systems (“they can act without external control and define their own action albeit within the broad constraints or bounds of their programming and software”). For the time being, only tele-operated and semi-autonomous systems are in use, but the militaries of various countries – in particular the United States, the United Kingdom, France, Israel, Russia, South Korea and China – are working to increase the autonomy of these systems. For instance, the US military is developing the Sea Hunter, a self-piloting vessel intended for anti-submarine warfare that would be able to respect the laws of the sea according to its geographic location. Russia, through its Kalashnikov group, is building the Platform-M, “a universal combat platform” endowed with an autonomous targeting mechanism (Gady, 2015). Finally, South Korea possesses the weapon that most closely meets the definition of LAWS: the SGR-A1, a sentry robot deployed in the demilitarized zone between the two Koreas.

The use of LAWS is deeply controversial. At one end of the debate, it is argued that autonomous technology can improve performance and therefore limit harm; at the other, that no machine should ever be given the discretion to take a human life. Discussions at the global level started in Geneva in 2014 with the first Expert Meeting on Autonomous Weapon Systems, held under the auspices of the International Committee of the Red Cross and the United Nations. The discussion was formalized in 2016 with the creation of a Group of Governmental Experts mandated to examine the issues raised by the increasing autonomy of emerging weapons. Nevertheless, states hold divergent positions on the use of LAWS in warfare. This is particularly salient within the European Union. The European Parliament has called for “[banning] the development, production and use of fully autonomous weapons which enable strikes to be carried out without human intervention” (Weizmann, 2014). This position is endorsed in particular by states such as Austria, the only European state that is part of the NGO Campaign to Stop Killer Robots. Other European states, notably France and the United Kingdom, take more ambiguous attitudes. France underlined that “it is necessary to bear in mind that the technologies in question are of a dual nature, and that they may have many civil, peaceful, legitimate and useful applications”, and that states should therefore not try to limit research in this field (Weizmann, 2014). Globally, a handful of states [1] are reluctant to start negotiations on LAWS, mainly because they fear committing to a binding treaty that their opponents would not respect (Bordron, 2018). By contrast, most member states of the Non-Aligned Movement are calling for a legally binding instrument to prohibit LAWS. Twenty-eight countries have taken part in the Campaign to Stop Killer Robots, including regional powers such as Egypt and Pakistan and virtually all Latin American countries [2]. China is in favour of banning the use of fully autonomous weapons, but not their development or production.

Within this debate, the ICRC has stated that “major concerns persist over whether a fully autonomous weapon could make the complex, context-dependent judgements required by international humanitarian law” and that this “represents a monumental programming challenge that may well prove impossible to achieve” (ICRC, 2014). It has called for a legal review of any new weapon with autonomous features, to ensure that all use of such systems can comply with international humanitarian law (IHL).

IHL provides, in art. 36 of the 1977 Additional Protocol I to the Geneva Conventions, that

in the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.  

Modern societies face the emergence of LAWS in a legal vacuum: no instrument specifically governs such weapons. Article 36 is thus paramount to ensure that the use of any new weapon complies with existing international obligations.

Among the obligations that art. 36 requires states to respect is the Principle of Humanity. The latter is enshrined both in the St. Petersburg Declaration of 1868 and, via the so-called Martens Clause, in the Hague Convention II of 1899. The former affirms, in its preamble, that “the employment of arms which uselessly aggravate the sufferings of disabled men or render their death inevitable would be contrary to the laws of humanity”. Similarly, the Martens Clause, in its modern formulation in Additional Protocol I, states that “In cases not covered by this Protocol or by any other international agreements, civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from dictates of public conscience.” From an ethical point of view, some legitimately ask whether lethal autonomous weapon systems, which carry the adjective lethal in their very name, can respect these provisions, and whether the dictates of public conscience can meaningfully apply to machine warfare (ICRC, 2016).

Moreover, in order to be lawful, LAWS must respect the Principle of Distinction, embedded in arts. 50, 51 and 52 of the 1977 Additional Protocol I. Firstly, LAWS should be able to discriminate between a civilian object and a military objective. A military objective is one that makes an effective contribution to the military action of one party to the conflict and whose destruction offers a definite military advantage to the other belligerent. Current semi-autonomous weapon systems can be programmed to distinguish simple objects that are military objectives by nature (e.g. tanks) in non-complex, static environments. Difficulties arise when machines face objects that are civilian in nature but become military objectives because of the dynamic environment of war. Expressing both an ‘effective contribution to military action’ and a ‘definite military advantage’ in algorithmic form might prove challenging, since it requires assessing contextual elements that vary with circumstances (ICRC, 2014). Secondly, LAWS should be able to distinguish between a civilian and a combatant, and engage only the latter. Current technologies can recognize soldiers who wear signs (e.g. uniforms) marking their membership of the armed forces. But could an autonomous weapon differentiate between a uniformed combatant and a uniformed civilian? Moreover, military operations are increasingly shifting into civilian-populated areas, and civilians are increasingly involved in hostilities. With fighters often wearing no distinctive attire or emblem, LAWS could face major difficulties in assessing who may lawfully be targeted. A third critical distinction is that between an active combatant and one who is hors de combat. Recognizing whether a person is hors de combat requires perceiving that person’s intentions and behavior. IHL imposes responsibilities on the party to which a combatant surrenders, or which faces a person hors de combat (i.e. treating the wounded, protecting them from danger, removing them from the battlefield, etc.). With current technological means, would it be practically possible to program an autonomous weapon system able to discharge such responsibilities? Would the machine recognize a will to capitulate?
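
The point about context-dependence can be made concrete with a small sketch. The following is a purely hypothetical illustration, not a description of any fielded system: the class names, the MILITARY_BY_NATURE list and the ObservedObject fields are assumptions introduced here. It shows that matching objects against a fixed list is straightforward, whereas the legal test depends on contextual inputs that are not stable features of the object itself.

```python
# Hypothetical sketch (illustrative names and values, not any real system's logic)
# contrasting a static lookup with the context-dependent test of art. 52(2) AP I.
from dataclasses import dataclass
from typing import Optional

# Objects that are military objectives by nature can be matched against a fixed list,
# which is roughly what current systems manage in simple, static environments.
MILITARY_BY_NATURE = {"tank", "self-propelled artillery", "warship"}

@dataclass
class ObservedObject:
    object_class: str                            # output of a recognition model, e.g. "tank", "bridge"
    current_use: Optional[str] = None            # how the object is being used at this moment, if known
    anticipated_advantage: Optional[str] = None  # the concrete military advantage expected from attacking it

def is_military_objective(obj: ObservedObject) -> bool:
    # Step 1: military by nature -> a static list suffices.
    if obj.object_class in MILITARY_BY_NATURE:
        return True
    # Step 2: for everything else, the test requires an "effective contribution to
    # military action" AND a "definite military advantage" in the circumstances
    # ruling at the time: contextual judgements, not fixed attributes of the object.
    return obj.current_use == "military" and obj.anticipated_advantage is not None

# A bridge is civilian by nature; whether it may be attacked depends entirely on
# contextual inputs that the system would somehow have to assess reliably.
print(is_military_objective(ObservedObject("bridge")))  # False
```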

Furthermore, Additional Protocol I introduces the notion of doubt: in case of doubt, both objects and persons are presumed to be civilian. In theory, algorithms could allow an autonomous weapon system to quantify doubt using probabilities. What remains difficult is determining the threshold of uncertainty at which the system must refrain from attacking, and whether last-minute doubt can be acted upon at all, given the high speed of these machines and the short interval between the selection and the attack of a target (ICRC, 2014).
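
As a minimal sketch of this point, assuming a purely illustrative threshold of 0.95 (not drawn from any legal text or existing system), the rule itself is trivial to compute once a threshold is fixed; the open question is which threshold, if any, the law and public conscience would accept, and whether it could be applied in the fraction of a second available before an attack.

```python
# Minimal, hypothetical sketch of the presumption of civilian status in case of doubt,
# expressed as a threshold on a probability estimate. The 0.95 figure and all names
# are illustrative assumptions, not legal standards or parameters of any real system.
from dataclasses import dataclass

@dataclass
class TargetAssessment:
    p_military_objective: float  # estimated probability that the object or person is a lawful target
    threshold: float = 0.95      # illustrative doubt threshold

    def must_refrain(self) -> bool:
        # In case of doubt (confidence below the threshold), the presumption of
        # civilian status applies and the attack must be withheld.
        return self.p_military_objective < self.threshold

# Example: an 80% estimate still counts as doubt under this illustrative threshold.
print(TargetAssessment(p_military_objective=0.80).must_refrain())  # True -> refrain from attack
```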

Finally, the ultimate legal question related to LAWS is that of accountability: who is to be held responsible for wrongful acts carried out by a self-governing machine? There will always be a degree of uncertainty about the way autonomous systems interact with their environment, because their adaptive nature makes their performance difficult to predict. This, in turn, makes it challenging to effectively control a weapon system’s actions or to hold anyone accountable for its unpredictable behavior. In addition, it is uncertain whether commanders or operators would have the knowledge and understanding needed to grasp how an autonomous weapon system functions. “Under IHL and international criminal law, individuals are criminally responsible for war crimes they commit. They may also be held responsible for attempting, assisting, facilitating, planning or instigating the commission of a war crime” (Weizmann, 2014). Individuals who deploy an autonomous weapon system that carries out acts amounting to crimes under domestic or international law could therefore be criminally liable. “It would nevertheless be hard to prosecute such individuals successfully, because it would be necessary to prove, under the mens rea arguments, that they intended to commit the crimes in question, or knew that they would be committed” (Weizmann, 2014). Another argument holds that IHL was originally designed as part of a system governing relations between states. Acts by a state’s military or police forces are therefore attributable to the state, and any violation of the state’s international obligations engages its international responsibility. Thus, a state that deploys an autonomous weapon system that violates international law could be held responsible for that violation (Weizmann, 2014).

To avoid any responsibility gap, it has been proposed that accountability be assigned in advance, together with a requirement to install recording devices on autonomous weapon systems so that footage of lethal uses can be reviewed (ICRC, 2014). The transparency enabled by such an electronic trail would be useful before a court and could help prove the lawfulness or unlawfulness of a weapon system’s operations, which would in turn reinforce the credibility of IHL. Another option would be to distribute responsibility among the different actors along the chain from programming to deployment. Such an approach may, however, violate the customary IHL rule that no penalty may be inflicted on a person for an act he or she has not personally committed.

States need to update international humanitarian law in light of the emergence of such autonomous weapons, so as to govern the situation already described in 1959 by Günther Anders: “the technification of our being has changed the very foundations of our existence. Thus, we can become ‘guiltlessly guilty’, a condition which had not existed in the technically less advanced times of our fathers” (Anders, 1959).

Footnotes

[1] Australia, Belgium, France, Germany, Israel, Republic of Korea, Russia, Spain, Sweden, Turkey, United Kingdom, United States (Campaign to Stop Killer Robots).  

[2] Algeria, Argentina, Austria, Bolivia, Brazil, Chile, China, Colombia, Costa Rica, Cuba, Djibouti, Ecuador, Egypt, El Salvador, Ghana, Guatemala, Holy See, Iraq, Mexico, Morocco, Nicaragua, Pakistan, Panama, Peru, State of Palestine, Uganda, Venezuela, Zimbabwe (Campaign to Stop Killer Robots).  

References

G. Anders, L’ultima vittima di Hiroshima, Mimesis Edizioni, 2016 (first edition in 1959)

M. Bordron, “Armes Léthales Autonomes: la doctrine des Etats”, France Culture, 2018 (online).

F.S. Gady, “Meet Russia’s New Killer Robot”, The Diplomat, 2015 (online).

A. Iaria, “Lethal Autonomous Weapon Systems and the Future of Warfare”, IAI Commentaries, 2017 (online).

ICRC, “Autonomous Weapon Systems - Technical, Military, Legal and Humanitarian Aspects”, Expert Meeting, 2014 (online).

ICRC, “Autonomous Weapons: What role for humans?”, 2014 (online).

ICRC, “Convention on Certain Conventional Weapons - Meeting of Experts on Lethal Autonomous Weapon Systems”, 2016 (online).

N. Weizmann, “Academy Briefing No. 8 - Autonomous Weapon Systems under International Law”, Geneva Academy of International Humanitarian Law and Human Rights, 2014 (online).

Acknowledgements

Cover image: from Alexander Nevsky, by Sergei Eisenstein, 1938.