Yale Journal of International Affairs


When Robots Go To War: The Future of Unmanned Conflict


Unmanned U.S. Navy X-47B aboard the aircraft carrier USS George H.W. Bush. Wikimedia Commons.

By Aiden Warren and Alek Hillas


The landscape of war, and how it is conducted, is changing at an extraordinary pace. For the first time in history, humankind is confronted with the prospect that autonomous robots may join the battlefield. They will come in all shapes and sizes. Some of these machines will fight under the same flag as a nation-state, while others may be enemy combatants with no fixed address or fear of death. Indeed, the advent of lethal autonomous weapons systems (LAWS) is an emerging game changer for the next generation of soldiers, sailors, airmen, marines, and politicians. The likely disruptive effect of the singularity, which may broadly be defined as a technology-driven revolution that will impact almost all aspects of society including law, science, and even philosophy, will raise questions about the capacity of machines with artificial intelligence (AI) or artificial consciousness (AC) to exercise moral agency.

In unpacking this evolving area in security, this article begins by outlining what policymakers and others will be confronted with in response to the next emerging Revolution in Military Affairs (RMA).[1] It also assesses what they need to consider in response to the rise of lethal robotics, which has pushed the U.S. military toward its “Third Offset Strategy.”[2] The article then proceeds to examine the oft-stated but under-researched proposition that robots could one day obtain moral agency, and considers whether reform within the U.S. military justice system could regulate robot personhood. This finding is compared with the current status of Military Working Dogs (MWDs), which are clearly alive and conscious, but do not have individual responsibilities that are subject to punishment. Overall, while these areas of concern do not exhaustively cover the discipline of lethal robotics, they nonetheless represent a meaningful contribution to the debate at what can be seen as a defining juncture in international security.

Use-of-Force Complexities Presented by LAWS

Controlled remotely from virtually anywhere on the planet, drones have come to the forefront of the military suite and contribute significantly to what has been termed the further “dehumanization of death.”[3] Indeed, whether referred to as unmanned aerial vehicles (UAVs), remotely piloted vehicles (RPVs), unmanned military systems (UMS) or simply “drones,” these new forms of technology illustrate that “we are currently in the midst of an epochal transformation” in violence,[4] one that has given rise to a “new species of war.”[5] However, existing armed drones are only the precursors to what is termed autonomous robotics: devices that could choose targets without further human intervention once they are programmed and activated. While this may appear somewhat exaggerated, the Pentagon is already planning for them, “envisioning a gradual reduction by 2036 of the degree of human control over such unmanned weapons and systems, until humans are completely out of the loop.”[6]

Proponents claim that LAWS will save lives. As machines acting under the control of a computer program, LAWS would not exhibit any desire to rape civilians or torture prisoners of war, nor shoot an unidentified “target” out of fear, only to discover it is, in fact, an innocent non-combatant. Proponents also believe that LAWS will give the state a greater capacity to defend itself through force projection and force multiplication, and thereby enable politicians to justify the loss of a robot, rather than a soldier, to the public and media.[7] Conversely, opponents warn that the introduction of LAWS would make it easier for states to enter into war, lowering the threshold for armed conflict and perhaps normalizing war as an alternative to diplomacy. Additionally, critics contend that machines lack moral agency, meaning that the standards of criminal accountability and “justness” of punishment expected of humans would be difficult to apply to robots. Moreover, unlike drones, LAWS would not be under the direct control of a human, which could make war less humane and less dignified. While no one is exactly sure how these robots of the future will emerge, or what actions they might be able to perform, roboticists, soldiers, politicians, lawyers and philosophers must ask some very complex and interdisciplinary questions regarding the future use of LAWS.[8]

Regulating LAWS in the U.S. Military Justice System

As a new and untested RMA, LAWS will present immense challenges to those involved in the U.S. security and policymaking domain, particularly regarding the extent to which the United States is capable of regulating robots with strong AI or AC. If the latest chip from IBM, developed with funding from the U.S. Defense Advanced Research Projects Agency (DARPA), is any indication, computers are changing from being essentially calculators constrained by the von Neumann architecture to having artificial “brains,” with neurons and synapses working in parallel to feed off real-time sensory information such as hearing and vision.[9] Could these computer “brains” think and act like ours? Several scenarios are possible, each of which can be mapped to a timeline of projected technological developments. In the short term, the law will continue to regard robots as weapons like any other (the status quo). As the complexity of human-machine teaming increases and humans begin to delegate more decisions to robots on the battlefield (corresponding with the “Third Offset Strategy” scenario), accountability would remain with humans but may become distributed among multiple people working across weapon systems. In the distant future, humans may cede responsibility to robots for certain actions where robots have demonstrated moral agency. At present, U.S. military law reform is limited in its capacity to regulate robots as moral agents, meaning that humans will retain responsibility for both human and machine actions in the foreseeable future.
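To make the architectural contrast concrete, consider the toy sketch below (in Python). It is a minimal illustration of the event-driven, parallel style of computation that neuromorphic chips implement in hardware, not a description of the IBM design itself; every parameter in it is invented for demonstration.

```python
# A minimal, illustrative sketch of the neuromorphic idea referenced above:
# many simple "neurons" accumulate charge from sensory events and fire in
# parallel, rather than a single processor stepping through instructions.
# This is a toy leaky integrate-and-fire model, not IBM's TrueNorth design;
# the thresholds, leak rate, and inputs are invented for demonstration.

import random

class Neuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0        # membrane potential
        self.threshold = threshold  # firing threshold
        self.leak = leak            # decay factor per time step

    def step(self, stimulus):
        """Integrate input, leak stored charge, and fire if the threshold is crossed."""
        self.potential = self.potential * self.leak + stimulus
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True             # spike
        return False

# A small "layer" of neurons reacting to noisy sensory input; a neuromorphic
# chip updates such units in parallel, though this toy loop is sequential.
layer = [Neuron() for _ in range(8)]
for t in range(20):
    stimuli = [random.uniform(0.0, 0.5) for _ in layer]
    spikes = [n.step(s) for n, s in zip(layer, stimuli)]
    if any(spikes):
        print(f"t={t:2d}: spikes at neurons {[i for i, s in enumerate(spikes) if s]}")
```

The point of the sketch is that computation is distributed across many simple, stateful units driven by sensory events, rather than a single processor fetching and executing instructions one at a time.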

Given the rise of drones and the advent of technology in the form of robots, managing their operations or potential contraventions of law will require some new form of regulation to ensure that legislation does not become outdated. If, for instance, robots were to be tried by court-martial, either the text of the Uniform Code of Military Justice (UCMJ) would need to change through legislation, or the president would need to issue an Executive Order reinterpreting the UCMJ in the Manual for Courts-Martial (MCM). One possible pathway for reform would be for the Joint Service Committee (JSC) on Military Justice to follow changes in the jurisprudence of the civilian legal system. DOD Directive 5500.17 states that the JSC’s review of the MCM “applies, to the extent practicable, the principles of law and the rules of evidence generally recognized in the trial of criminal cases in U.S. [civilian] district courts, but which are not contrary to or inconsistent with the UCMJ.”[10] In effect, changes regarding the status of robots in U.S. district courts could precipitate a reinterpretation of the meaning of “person” within the military justice system. Aside from its own internal review mechanism, the JSC can also receive amendment proposals from the public.[11] However, the overall effectiveness of the committee is “difficult to judge,” because its recommendations remain largely inaccessible to public scrutiny.[12] An alternative pathway to review and reform an aspect of the military justice system would be for “congressional pressure” to lead to the creation of an ad-hoc panel, as has been done in the past.[13]

However, changes to the status of robots in civil law, such as endowing them with the right to enter into contracts, would not alter their status in criminal law, because the two are wholly separate, with the military criminal justice system being the purview of the JSC.[14] It is highly likely that, at some stage, robots will be entitled to legal personhood when their actions affect certain (existing) legal persons but are not deemed a transgression against the whole of society. Just as people eventually became accustomed to the notion of a business having its own legal personality several centuries ago,[15] we may grow to accept the notion of robotic contractual rights and responsibilities in the form of a peculium, as has been suggested by Ugo Pagallo.[16] This goes much further than treating robots as mere objects of property that can stand in court on their own but have no obligations toward others in society and are not entitled to constitutional protections.[17] While it will no doubt take a generational shift for policymakers and ordinary people to become accustomed to the idea of robots having their own responsibilities, entailing legal guarantees and consumer protection, the public is likely to encounter court cases involving self-driving cars in the coming years,[18] without necessarily placing the issue into its larger theoretical and historical context.

In the short term, lethal robots will most likely be considered ‘soulless’ machines without moral agency, a status that will present its own set of problems and complexities. Indeed, if autonomy is distributed between a human and a machine, then responsibility could likewise be divisible. According to Andrea Omicini’s presentation at the 2015 Meeting of Experts on LAWS, in socio-technical systems “the agent abstraction typically accounts for both human and software agents.”[19] As such, a lack of understanding surrounding distributed autonomy could lead to “uncertain responsibility,” and therefore unclear liability in the international legal system.[20]

In considering the above discussion, it is worth emphasizing what the paradigm shift in thinking would require at a policy level, particularly the complexities associated with the JSC or an ad hoc committee arguing in favour of robot personhood. Lawful permanent residents who serve in the armed forces of the United States today can be deported if they are found guilty of criminal misconduct in a court-martial.[21] The compulsory expulsion of non-citizens would, presumably, place autonomous robots in a precarious position. Japan’s nascent and largely experimental practice of granting special residency permits to robots is hard to imagine gaining a foothold in Western societies.[22] If a robot were found guilty of criminal misconduct and did not hold citizenship in another country, to where could it be deported, and how would this process differ (if at all) from the current treatment of stateless persons?[23] The execution of a robot with moral agency would potentially be illegal for non-capital offenses. Moreover, consider the issues surrounding International Humanitarian Law (IHL) that would be bound to arise in a war where one belligerent country recognised robot rights but the other did not. These and many other issues that may appear somewhat trivial at first actually reveal the depth of change that reform-minded policymakers would need to consider before a robot could be subject to the court-martial process, or before a human soldier could be court-martialled for mistreating a robot.

Case Study: Considering the Status of Military Working Dogs (MWDs) Today

Although it is difficult to theorise the status of autonomous robots as legal persons when none currently exist, MWDs present an interesting case study, to which this section now turns, due to their similarities with unmanned weapons. This discussion builds on Pagallo’s canvassing of whether a robot most resembles a corporation, a pet animal, a child, or a household appliance, and attempts to place the question in a military context.[24] MWDs and LAWS share philosophical questions regarding moral agency and appropriate forms of regulation.[25] MWDs are also force multipliers.[26] Further, legal experts recognise the utility of MWDs as autonomous or semi-autonomous weapons,[27] and the requirement for handlers to keep oversight of their dog at all times bears similarity to the need for meaningful human control over LAWS. For example, U.S. Army Regulation 190-12 states that canines “will not be used for crowd control or direct confrontation with demonstrators, unless the responsible commander determines this use is absolutely necessary. When used for crowd control or direct confrontation, dogs will be kept on a short leash to minimize the danger to innocent people. Dogs will not be released into a crowd.”[28] Such strict regulations could come under threat if autonomous systems were utilised for crowd control purposes instead, which raises the question of whether it is the human control of MWDs that prevents canines from being recognised as members of the armed forces in their own right,[29] or, rather, whether it is their species’ lack of intelligent thought that prevents MWDs from upholding responsibilities and obligations like humans.

MWDs make large contributions to the work of the armed forces, but it is only their actions, rather than their species, that receive recognition. On average, each dog saves the lives of 150 human soldiers.[30] These days, the respect accorded to MWDs is best illustrated by the fact that dogs are always given a higher rank than their handler, and it was no exaggeration when General Petraeus claimed in 2008 that “the capability that military working dogs bring to the fight cannot be replicated by man or machine.”[31] Under U.S. law, MWDs can be recognised for their service when they perform an “exceptionally meritorious or courageous act.”[32] However, none of these canine achievements have translated into formal recognition of the dogs themselves. Only one dog has ever been awarded a medal, and even that was later revoked.[33] Memorials to MWDs are not officially recognised, either.

Further, an attempt to officially reclassify MWDs as “Canine Members of the Armed Forces” was stifled in Congress,[34] and MWDs are still considered “equipment” under the law.[35] This is despite growing recognition within the veterinary community that MWDs can experience psychological issues similar to those of people; disturbingly, more than five percent of MWDs suffer from Canine Post-Traumatic Stress Disorder.[36] In Britain, some police dogs even receive a pension after retirement, which pays for three years’ worth of medical care.[37] Indeed, millennia of evolution in social groups alongside humans have endowed dogs with a higher level of intelligence than most other animals, even to the extent that they might “have a level of sentience comparable to that of a human child.”[38] In sum, while dogs are alive and conscious, and much beloved for their lifesaving work, the standard for moral agency is so high that MWDs do not even come close to having the capacity to share in legal responsibility for their actions.

While MWDs obviously cannot stand accused at courts-martial, their handlers can. One of the best-known cases involved Army Sergeant Michael J. Smith, a military police officer stationed at Abu Ghraib. Smith was found guilty on six counts, having forced his dog to bite a detainee, remove a bag from a detainee’s head, and lick peanut butter from the genitals of other military police officers (even so, the two dog handlers convicted at Abu Ghraib received light sentences).[39] Despite Smith’s belief that he was acting with the authorisation of military intelligence,[40] the presence of an unmuzzled dog during interrogations was illegal under Operation Iraqi Freedom Combined Joint Task Force-7 policy.[41] In United States v. Smith, the Court of Appeals for the Armed Forces upheld the ruling, finding that the detainees under Smith’s guard were entitled to the appellant’s protection but had been subjected to “cruelty and maltreatment.” In the ruling, Judge Baker wrote, “We hold that Article 93, UCMJ, applies to detainees in U.S. custody or under U.S. control, whether they are members of the U.S. armed forces or not.”[42]

Perhaps with a view to correcting this ambiguity, the U.S. Department of Defense issued Directive 5200.31E, “DoD Military Working Dog (MWD) Program,” in 2011. The Directive states, “within the context of the lawful use of a MWD, appropriate rules regarding the use of force shall be promulgated for each specific use of a MWD … [Personnel must] not use any MWD as part of an interrogation approach or to harass, intimidate, threaten, or coerce a detainee for interrogation purposes.”[43] The text goes on to reference DOD Directive 3115.09, which has almost identical wording.[44] Although these changes were reasonably foreseeable without the need for a trial, they took time and bitter experience for the DOD to implement. Given the amount of training, bonding, trust, and control that handlers have with their dogs, and the regulations preventing others from working with them, each handler is personally responsible for the actions of his or her MWD.[45] The policy link with autonomous systems is that an MHO (the human operator charged with maintaining meaningful human control over LAWS) should likewise not use robots for the purposes of torture, maltreatment, committing an indecent act, and so on. It may also be necessary for someone to oversee the overseer in real time, rather than review their actions after an incident, when it is already too late. Therefore, there should be an extra level in the command structure allowing for continuous oversight of MHOs, something that has not been possible with MWD handlers so far but will become available with the advent of trackable robotics. In this respect, the military can and must learn from the prior experiences of MWD teaming when making policy decisions and designing training manuals for human-machine teaming in the future.
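One way to picture this extra level of oversight is as an append-only, timestamped record of every command an MHO issues, monitored live by a higher echelon. The sketch below is purely illustrative; the class and field names are hypothetical and do not describe any actual DOD system.

```python
# A purely illustrative sketch of continuous oversight: every command an MHO
# issues to a robotic system is written to an append-only, timestamped log
# that a higher-level supervisor can watch in real time. All class and field
# names here are hypothetical and do not describe any actual DOD system.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass(frozen=True)
class CommandRecord:
    timestamp: str
    operator_id: str  # the MHO issuing the command
    system_id: str    # the robotic system receiving it
    command: str      # e.g. "track", "engage", "cease"

@dataclass
class OversightLog:
    records: List[CommandRecord] = field(default_factory=list)

    def log(self, operator_id: str, system_id: str, command: str) -> CommandRecord:
        """Record a command; entries are only ever appended, never altered."""
        rec = CommandRecord(datetime.now(timezone.utc).isoformat(),
                            operator_id, system_id, command)
        self.records.append(rec)
        return rec

    def live_feed(self):
        """A supervisor reviews commands as they arrive, not after an incident."""
        yield from self.records

log = OversightLog()
log.log("MHO-01", "UGV-7", "track")
log.log("MHO-01", "UGV-7", "engage")
for rec in log.live_feed():  # the overseer of the overseer watches this feed
    print(rec)
```

The salient design choice is that the record is append-only and reviewed as it is generated, so the overseer can intervene during, rather than after, the action.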

MWDs are an appropriate case study to demonstrate that being alive and conscious is, quite simply, not sufficient for a non-human to have moral agency under the law. Indeed, a detailed analysis of U.S. military justice law reveals the extreme unlikelihood that robots could be treated as moral agents in our lifetime, and that it would be inappropriate to “put a robot in jail until the battery runs flat,” as Heyns has caustically stated.[46] Not all of the policies surrounding Military Working Dog Teams are directly transferable to LAWS, but, perhaps, some of the lessons learned are. MWDs require a handler, just as LAWS would require an MHO. In order to minimise criminal activity within the military, it will be necessary to learn from past mistakes in MWD policy and ensure that MHOs, as the overseers of LAWS, are themselves subject to oversight from someone else. More likely than not, doing so would actually benefit the MHOs by raising the expected standard of conduct and ensuring that poorly trained staff are not allowed to remain derelict in the performance of their duties. This would not necessarily absolve manufacturers, military strategists or politicians from blame, but they are shielded to some extent compared to the MHO, who will watch the action unfold in real time.

In considering the mechanism for the MHO to observe and control the actions of LAWS, policymakers must ensure the mission commander is at all times capable of maintaining Meaningful Human Control over the actions of multiple systems; the MHO must be able to order LAWS to cease any action, just as soldiers today can be contacted by radio and told not to proceed with their mission. This will require close engagement with engineers, particularly as one of the most advantageous uses of autonomous systems would involve sending UMS and robotic ground vehicles (RGVs) into remote areas, and UAVs into GPS-denied environments, with little possibility of communication with the MHO. While policymakers believe it will be technologically possible, and therefore desirable from a force multiplication perspective, for a large number of LAWS to swarm together,[47] the human tasked with oversight may struggle to keep up with all systems simultaneously; one research project envisions a machine-to-human ratio of at least six to one.[48]
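A back-of-envelope model suggests why. The sketch below estimates operator workload under the six-to-one machine-to-human ratio mentioned above; the request rate and per-decision time are assumptions chosen purely for illustration, but they show how utilization above 100 percent leaves approval requests queuing faster than a human can clear them.

```python
# A back-of-envelope sketch of the oversight burden implied by a 6:1
# machine-to-human ratio. The request rate and decision time below are
# invented for illustration only; the point is that utilization above
# 100% means approval requests queue up faster than a human can clear them.

def operator_utilization(num_systems: int,
                         requests_per_min_per_system: float,
                         seconds_per_decision: float) -> float:
    """Fraction of each minute the operator needs to handle all requests."""
    decisions_per_min = num_systems * requests_per_min_per_system
    return decisions_per_min * seconds_per_decision / 60.0

for n in (1, 3, 6, 12):
    u = operator_utilization(n, requests_per_min_per_system=1.0,
                             seconds_per_decision=15.0)
    status = "overloaded" if u > 1.0 else "ok"
    print(f"{n:2d} systems -> utilization {u:4.0%} ({status})")

# With these assumed numbers, six systems already demand 150% of the
# operator's time, leaving no margin for seconds-scale engagement decisions.
```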

Rather than allowing programmers and engineers to push technological possibilities in machine autonomy to the extreme limits of everything except legal accountability, policymakers should be far more concerned with maintaining a threshold level of human oversight over LAWS. After all, an MHO will not be able to read a robot’s mind. While military strategists will no doubt be tempted by the allure of being on the cutting edge, they must also remember not to push their staff over the edge of the proverbial cliff. Danièle Bourcier told the 2016 United Nations Convention on Certain Conventional Weapons (CCW) Meeting of Experts that “researchers must ensure that robotic decisions are not made without operator’s knowledge, in order to avoid gaps in the operator’s situational awareness.”[49] Otherwise, as Pablo Kalmanovitz noted during the same conference, “while in principle possible to attribute responsibility fairly to human beings, in practice it may require defining specific roles within a complex web of industrial-military production and use. Unless deploying states are willing to define certain roles as those responsible for impact of LAWS, the nightmare of unaccountable robots killing innocent civilians may become reality.”[50]

In summary, even though the creation of criminal robot personhood is extremely unlikely, placing an MHO in a position to monitor multiple LAWS in real time could fulfil the ethical requirement of having a “human in the loop” whilst taking full advantage of force multiplication-enabling technologies. However, with each group of systems potentially operating across different kill boxes, the mission commander could feel overwhelmed by the different scenarios and may face pressure to approve engagements in short timeframes, perhaps even within seconds. At the moment the human feels more “on” than “in” the loop, the responsibilities of human-machine teaming will have become more evenly distributed; the human is no longer in full control, will trust the robots only to the extent of their training and prior experiences, and will remain far more exposed to the unpredictability of machine error (in both an operational and a legal accountability sense) than anything witnessed so far.

Conclusion

At one of the most challenging junctures in global security, increasingly autonomous systems threaten to take humans out of the loop completely. This presents a potential RMA, initially linking the state’s engineering and computing prowess with force projection and force multiplication capabilities. Yet, as the proliferation of armed drones has demonstrated, combat UAVs that belonged to only a handful of powerful states, including the U.S., a decade ago have since spread rapidly. There is no doubt that, if used, LAWS will take unmanned warfare to a whole new level. Given the spread of uninhabited technologies thus far, it is foreseeable that lethal robotics will pose new security challenges and greater complexities. In responding to this threat, states must consider approaches that would implement new forms of regulation to manage the uses of LAWS.

While there is no “legally-binding” definition of LAWS to date,[51] nascent experiences with Automatic Weapon Defence Systems (AWDS), which are capable of firing without express human consent but remain in fixed locations, illustrate just how far-thinking policymakers and manufacturers must become to catch up with the state of technology today.[52] The view of the United States delegation to the CCW Meeting of Experts is that LAWS are “future weapons … not current weapon systems.”[53] Although seeking to differentiate between the various ranges of the autonomy spectrum is useful in a scientific sense, the U.S. position is diplomatically far less nuanced, insofar as a ban limited to future weapons would potentially “lock in” any weapons currently under development or in the military’s arsenal, while excluding others from developing more sophisticated technology in the long term.

Moreover, the development of lethal autonomy places MHOs in the difficult position of potentially needing to approve multiple engagements within a short timeframe; mission commanders may find themselves unlucky enough to be used as scapegoats in the distribution of legal accountability, because robots are not moral agents. While unfair, this is actually the simplest scenario. It would be far more complicated to grant robots moral agency; this delegation of individual responsibility to machines would have far-reaching consequences outside the battlefield, too. For example, Heyns has warned that if robots are intelligent enough to stand trial one day, there would be little to prevent them from finding employment as judges the next.[54] Some have considered whether it is possible to conduct an empirical test for these purposes.[55] Yet this is simply not the case under the present application of U.S. military law; the UCMJ does not allow for the courts-martial of robots. As a general rule, it will be difficult to argue that a robot is a moral agent in some cases but not others, unless an entirely new set of criminal codes is created to distinguish between human rights and robot rights. No matter how impressive it would be to create and prove the existence of non-organic ‘life’ in the form of a robotic system, this is quite simply irrelevant when assigning moral agency. As a clear and applicable example, Military Working Dogs fulfil both criteria of being alive and conscious, but fall short of exercising moral agency because they are incapable of intelligent, human thought. Consequently, MWDs are not defined as “persons” under military law and cannot be court-martialled for their actions. Further, past uses of MWDs in criminal activity and violations of IHL highlight the need for mission commanders of autonomous and semi-autonomous weapons systems to be subject to oversight. This process of constantly reviewing the MHO’s actions would also raise the standard of conduct, thereby putting measures in place to limit unforeseen mismanagement, abuses and human rights violations.

Overall, this article has sought to address the research question: how do lawmakers and policymakers in the United States envisage responding to the advent of LAWS? In examining the difficulty of ensuring that an autonomous system will observe IHL, or the judicial process, it is evident that policymakers could face one of their biggest challenges yet. As an academic field, lethal robotics encompasses the disciplines of law, philosophy, psychology, engineering, military strategy and international relations. In the previous RMA, nuclear weapons almost led (and could still lead) to the indiscriminate and disproportionate deaths of billions of people. Despite the inherently abhorrent nature of these weapons, the Nuclear Non-Proliferation Treaty (NPT) is still a long way from having the nuclear states disarm and move to “zero.” Perhaps this is the real lesson for LAWS. In response to the challenges identified in this article, policymakers in the most powerful and technologically advanced states must judge whether their collective and competing national interests are best served in the long term by placing certain limitations on lethal autonomy. In doing so, they must exercise their own version of meaningful human control over the allure of technological determinism and actually define the future of warfare so as to mitigate conflict, rather than making it an easier and more tempting option.


About the Authors

Dr Aiden Warren is a Senior Lecturer and Researcher in the School of Global, Urban, and Social Studies at RMIT. His teaching and research are in the areas of International Security, U.S. national security and foreign policy, U.S. politics (ideas, institutions, contemporary and historical), international relations (especially great power politics), and issues associated with weapons of mass destruction (WMD) proliferation, non-proliferation and arms control.

Alek Hillas is a research assistant in the School of Global, Urban, and Social Studies at RMIT, where he graduated with a first class honours degree in International Studies. His research interests are in global security, international political economy, lethal robotics, and cross-cultural communication.


Endnotes

  1. Frank Kelley, Deputy Assistant Secretary of the U.S. Navy for Unmanned Systems, has recently identified the next RMA as encompassing more than just technological change: “Fully integrating human and unmanned systems is as much a military cultural evolution as a technological evolution.” Frank Kelley, “Realizing the Robotic & Autonomous Systems Vision,” in 2016 Ground Robotics Capabilities Conference, March 02-03 (Springfield, VA: Defense Technical Information Center, 2016), 13.

  2. According to Melissa Flagg, Deputy Assistant Secretary of Defense for Research, the “First Offset Strategy” placed an emphasis on nuclear deterrence; the “Second,” on technology to overcome the enemy’s numerical advantages; and the “Third” will involve human-machine teaming and the delegation of decisions to machines in time-critical situations. Melissa Flagg, “DoD Research and Engineering,” in 2016 Ground Robotics Capabilities Conference, March 02-03 (Springfield, VA: Defense Technical Information Center, 2016), 5-7.

  3. Bradley Jay Strawser, “Introduction: The Moral Landscape of Unmanned Weapons,” in Killing by Remote Control: The Ethics of an Unmanned Military, ed. Bradley Jay Strawser (Oxford: Oxford University Press, 2013), 3.

  4. Andrew A. Latham and James Christenson, “Historicizing the ‘New Wars’: The Case of Jihad in the early years of Islam,” European Journal of International Relations 20, no. 3 (2014): 767.

  5. Jacob Mundy, “Deconstructing civil wars: Beyond the new wars debate,” Security Dialogue 42, no. 3 (2011): 280.

  6. Garcia, “The Case Against Killer Robots.”

  7. For an historical analysis of support for the United States’ involvement in wars based on polling trends, see Adam J. Berinsky, In Time of War: Understanding American Public Opinion from World War II to Iraq (Chicago, IL: University of Chicago Press, 2009).

  8. Kendall Haven, cited in Peter W. Singer, “The Ethics of Killer Applications: Why Is It So Hard To Talk About Morality When It Comes to New Military Technology?” Journal of Military Ethics 9, No. 4 (2010): 301.

  9. Judith Hurwitz, Marcia Kaufman, and Adrian Bowles, Cognitive Computing and Big Data Analytics (Indianapolis, IN: John Wiley & Sons, 2015), 247-248. See also, Filipp Akopyan et al, “TrueNorth: Design and Tool Flow of a 65 mW 1 Million Neuron Programmable Neurosynaptic Chip,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 34, no. 10 (2015): 1537-1557.

  10. U.S. Department of Defense, Directive Number 5500.17 – Role and Responsibilities of the Joint Service Committee (JSC) on Military Justice (Washington, DC: U.S. Department of Defense, 2003), E2.1.1.3.

  11. Joint Service Committee on Military Justice, “Homepage,” U.S. Department of Defense, http://jsc.defense.gov/ (accessed September 28, 2015).

  12. Brooker, “Improving Uniform Code of Military Justice (UCMJ) Reform,” 33.

  13. Ibid., 45.

  14. Ugo Pagallo, The Laws of Robots: Crimes, Contracts, and Torts (Dordrecht, Netherlands: Springer, 2013).

  15. See David McBride, “General Corporation Laws: History and Economics,” Law and Contemporary Problems 74, no. 1 (2011): 3.

  16. Ugo Pagallo, “Robotrust and Legal Responsibility,” Knowledge, Technology & Policy 23, no. 3 (2010): 375. See also, Pagallo, The Laws of Robots.

  17. “Civil forfeiture is a legal fiction that enables law enforcement to take legal action against inanimate objects … Civil forfeiture actions are in rem proceedings, which means literally ‘against a thing’—the property itself is charged with a crime. That is why civil forfeiture proceedings have bizarre titles, such as United States v. $10,500 in U.S. Currency or People v. Certain Real and Personal Property. And because they are civil proceedings, most of the constitutional protections afforded criminal defendants do not apply to property owners in civil forfeiture cases.” Scott Bullock, “Foreword,” in Policing for Profit: The Abuse of Civil Asset Forfeiture, Marian R. Williams et al, 9-10 (Arlington, VA: Institute for Justice, 2010).

  18. From the perspective of a tech company based in Silicon Valley (itself a legal person), the goal to minimise programmers’ liability could lead to lobbying around legislation to permit the incorporation of self-driving cars. That would mean that the ‘car’ buys insurance in case it needs to make payments after an accident. Given that many states in the United States are withholding legislation on self-driving vehicles until the establishment of more rigorous liability schemes, it will be reasonable for people to question whether autonomous vehicles should have their own legal entity, or merely constitute products that fall under the manufacturer’s guarantee. Regardless of the outcome, placing artificial legal persons into the national conversation would instil a degree of normality to the idea. See David C. Vladeck, “Machines Without Principals: Liability Rules and Artificial Intelligence,” Washington Law Review 89, no. 1 (2014): 125, 129. See also, Jeffrey K. Gurney, “Sue My Car Not Me: Products Liability and Accidents Involving Autonomous Vehicles,” University of Illinois Journal of Law, Technology & Policy 13, No. 2 (2013): 249-250, 252. Recent crashes involving Google’s self-driving car hitting a bus, and the fatal collision of a Tesla vehicle set on Autopilot with a truck, both indicate that these technologies are imperfect.

  19. Andrea Omicini, “The Distributed Autonomy: Software Abstractions and Technologies for Autonomous Systems,” in 2015 Meeting of Experts on LAWS at the United Nations Office in Geneva, 13-17 April (Geneva: United Nations, 2015), 4.

  20. Omicini, “The Distributed Autonomy,” 10.

  21. Some 30,000 permanent residents were on active duty in 2004, and, strangely, the citizenship of a further 9,000 people was listed as “unknown.” Richard D. Belliss, “Consequences of a Court-Martial Conviction for United States Service Members Who Are Not United States Citizens,” Naval Law Review 51 (2005): 53-54.

  22. The Japanese government seems to conflate robot citizenship with ethnic nationalism in the form of granting robots ‘special residency permits’ that are unavailable to ‘foreigners’ who would be classified as either real citizens or permanent residents under the laws of most other countries. Jennifer Robertson, “Human Rights vs. Robot Rights: Forecasts from Japan,” Critical Asian Studies 46, no. 4 (2014): 592.

  23. Little research exists on the intersection between robots and statelessness. As a starting point for comparison, stateless persons cannot be held under indefinite detention due to the Supreme Court ruling in Zadvydas v. Davis (2001).

  24. Pagallo, “Robots of Just War,” Philosophy & Technology 24, no. 3 (2011): 308.

  25. Karsten Nowrot, Der Einsatz von Tieren in bewaffneten Konflikten und das Humanitäre Völkerrecht (Halle Saale, Saxony-Anhalt: Institut für Wirtschaftsrecht der Martin-Luther-Universität Halle-Wittenberg, 2014), 21-22.

  26. “The MWD teams are force multipliers.” U.S. Army, Army Regulation 190-12 – Military Working Dogs (Washington, DC: Department of the Army, 2013), 4-5c.

  27. “After conducting a thorough analysis of the relevant LOW principles and binding treaty law, the JA [Judge Advocate] should be able to recommend using a MWD as a lawful means of non-lethal force to apprehend an enemy combatant.” Charles T. Kirchmaier, “Unleashing the Dogs of War: Using Military Working Dogs to Apprehend Enemy Combatants,” The Army Lawyer 36, no. 10 (2006): 8, 10-12.

  28. U.S. Army, Army Regulation 190-12, 4-7f.(2).

  29. Christof Heyns, “Autonomous weapons systems and human rights law.” In 2014 Meeting of Experts on LAWS at the United Nations Office in Geneva, 13-16 May (Geneva: United Nations, 2014), 3, 13-14.

  30. Michael J. Kranzler, “Don’t Let Slip the Dogs of War: An Argument for Reclassifying Military Working Dogs as ‘Canine Members of the Armed Forces’,” University of Miami National Security & Armed Conflict Law Review 4 (2013): 293.

  31. Linda Crippen, “Military Working Dogs: Guardians of the Night,” United States Army News, May 23, 2011, http://www.army.mil/article/56965/Military_Working_Dogs__Guardians_of_the_Night/

  32. Catherine A. Theohary et al, FY2013 National Defense Authorization Act: Selected Military Personnel Policy Issues (Washington, DC: Congressional Research Service, 2013), 7.

  33. In World War II, ‘Chips’ was decorated with the Purple Heart and the Silver Star after storming a machine gun nest and forcing the surrender of fourteen Italians. Janet M. Alger and Steven F. Alger, “Canine Soldiers, Mascots, and Stray Dogs in U.S. War: Ethical Considerations,” in Animals and War: Studies of Europe and North America, ed. Ryan Hediger, 83 (Leiden: Brill, 2013).

  34. Kranzler, “Don’t Let Slip the Dogs of War,” 271-273, 291.

  35. Theohary et al, FY2013 National Defense Authorization Act, 7. See also, David H. Lee, ed., Operational Law Handbook 2015, 12-V.B.2.c.

  36. James Dao, “After Duty, Dogs Suffer Like Soldiers,” New York Times, December 01, 2011, http://www.nytimes.com/2011/12/02/us/more-military-dogs-show-signs-of-combat-stress.html?pagewanted=all&_r=0 (accessed September 29, 2015).

  37. Kranzler, “Don’t Let Slip the Dogs of War,” 288.

  38. Ibid., 271-273, 291.

  39. George R. Mastroianni, “Looking Back: Understanding Abu Ghraib,” Parameters 43, no. 2 (2013): 57, 60.

  40. Poorly designed policy that poorly trained staff did not fully understand was one of the contributing factors to the horrors of Abu Ghraib. See Douglas A. Pryer, “The Fight for the High Ground: The U.S. Army and Interrogation During Operation Iraqi Freedom, May 2003 – April 2004” (master’s thesis, U.S. Army Command and General Staff College, Fort Leavenworth, KS, 2009), 89-95, 109.

  41. Christopher T. Fredrikson, Wendy D. Daknis, and James L. Varley, “Annual Review of Developments in Instructions,” The Army Lawyer 41, no. 5 (2011): 26.

  42. 68 M.J. 316 (C.A.A.F. 2010). Available at http://www.law.yale.edu/U.S._v._Smith.pdf (accessed September 29, 2015).

  43. U.S. Department of Defense, Directive Number 5200.31E – DoD Military Working Dog (MWD) Program (Washington, DC: U.S. Department of Defense, 2011), 4a-b. Italics added.

  44. “No dog shall be used as part of an interrogation approach or to harass, intimidate, threaten, or coerce a detainee for interrogation purposes.” U.S. Department of Defense, Directive Number 3115.09 – DoD Intelligence Interrogations, Detainee Debriefings, and Tactical Questioning (Washington, DC: U.S. Department of Defense, 2013), Enclosure 4-15.

  45. Curiously, Deputy Assistant Secretary of the Navy (Unmanned Systems) Frank Kelley has compared the trust within Military Working Dog Teams with what will be required of “human-machine team[s].” Kelley, “Realizing the Robotic,” 14-17.

  46. Heyns, “Death by Algorithm.”

  47. “Swarms of unmanned aircraft may be used to quickly provide unprecedented amounts of surveillance data on a particular problem, to provide wide-area internet or telecoms access, or to overwhelm even modern air defence systems (if only due to the fact that such systems have a finite number of rounds).” UK Ministry of Defence, Joint Doctrine Note 2/11, 3-10, 6-8–6-9.

  48. “CODE [Collaborative Operations in Denied Environments] intends to focus in particular on developing and demonstrating improvements in collaborative autonomy: the capability for groups of UAS to work together under a single human commander’s supervision. … CODE’s envisioned improvements to collaborative autonomy would help transform UAS operations from requiring multiple people to operate each UAS to having one person who is able to command and control six or more unmanned vehicles simultaneously. Commanders could mix and match different systems with specific capabilities that suit individual missions instead of depending on a single UAS that integrates all needed capabilities but whose loss would be potentially catastrophic. This flexibility could significantly increase the mission- and cost-effectiveness of legacy assets as well as reduce the development times and costs of future systems.” Defense Advanced Research Projects Agency, “Establishing the CODE for Unmanned Aircraft to Fly as Collaborative Teams,” U.S. Department of Defense, January 21, 2015, http://www.darpa.mil/news-events/2015-01-21 (accessed August 06, 2016).

  49. Danièle Bourcier, “Artificial intelligence & autonomous decisions: From judgelike robot to soldier robot,” in 2016 Meeting of Experts on LAWS at the United Nations Office in Geneva, 11-15 April (Geneva: United Nations, 2016), 28.

  50. Pablo Kalmanovitz, “LAWS and the Risks of IHL extension,” in 2016 Meeting of Experts on LAWS at the United Nations Office in Geneva, 11-15 April (Geneva: United Nations, 2016), 6.

  51. Which is why, as Steve Goose has said, definitions are “inevitably the last thing agreed to in negotiations on a weapon-related, legally-binding international instrument.” Steve Goose, “Statement by Human Rights Watch to the Convention on Conventional Weapons Experts Meeting on Lethal Autonomous Weapons Systems, General Exchange of Views, Geneva,” Human Rights Watch, April 13, 2015. https://www.hrw.org/news/2015/04/13/statement-human-rights-watch-convention-conventional-weapons-experts-meeting-lethal (accessed October 09, 2015).

  52. Simon Parkin, “Killer Robots: The Soldiers that Never Sleep,” BBC, July 16, 2015, http://www.bbc.com/future/story/20150715-killer-robots-the-soldiers-that-never-sleep (accessed August 25, 2015). See also, Tara Cleary, “South Korean ‘Super Gun’ Packs Hi-Tech Power,” Reuters, February 14, 2011, http://www.reuters.com/video/2011/02/14/south-korean-super-gun-packs-hi-tech-kil?videoId=187406842 (accessed August 25, 2015).

  53. Michael W. Meier, “U.S. Delegation Closing Statement,” U.S. Mission to Geneva, April 17, 2015, https://geneva.usmission.gov/2015/05/08/ccw-laws-meeting-u-s-closing-statement-and-the-way-ahead/ (accessed August 07, 2016).

  54. Christof Heyns, “Death by Algorithm.”

  55. Robert Sparrow, “Can Machines Be People?” in Robot Ethics: The Ethical and Social Implications of Robotics, ed. Patrick Lin, Keith Abney and George A. Bekey (Cambridge, MA: MIT Press, 2012), 301-316.