Chapter 5

Legal issues

Introduction

5.1        This chapter considers several of the legal issues raised regarding ADF use of unmanned platforms. These include: compliance with the law of armed conflict, international humanitarian law and international human rights law; shared intelligence; legal review of weapons; training; civilian operation of unmanned platforms; autonomous weapons systems; and other legal and regulatory issues.

Law of armed conflict, international humanitarian law and human rights

5.2        Contributors to the inquiry agreed that any use of unmanned platforms by the ADF must comply with Australia's international law obligations, including the law of armed conflict (LOAC), international humanitarian law (IHL) and international human rights law (IHRL). These obligations arise from customary international law as well as the treaty commitments made by the Australian Government. In particular, contributors emphasised the principles applicable to the use of force in armed conflict: distinction, proportionality and precaution. For example, the Programme on the Regulation of Emerging Military Technology (PREMT) commented that the fact that 'a platform is controlled by a remote operator rather than an on-board pilot does not reduce the applicability of existing international or domestic law to the operations of the ADF':[1]

The key IHL rules pertaining to the conduct of hostilities require attacks to be directed only against military personnel and objects (principle of distinction), prohibit launching attacks against legitimate targets where incidental damage to civilians would be disproportionate to the military advantage anticipated (principle of proportionality), and demand that constant care be taken to spare the civilian population (principle of precaution).[2]

5.3        A number of legal issues were raised regarding the use of unmanned platforms, particularly if they were armed. Frequently, these issues were raised in the context of operations conducted by other countries using armed unmanned platforms. For example, Professor Ben Saul noted that the UN General Assembly had urged States in countering terrorism:

To ensure that any measures taken or means employed to counter terrorism, including the use of remotely piloted aircraft, comply with their obligations under international law, including the Charter of the United Nations, human rights law and international humanitarian law, in particular the principles of distinction and proportionality.[3]

5.4        The Human Rights Law Centre (HRLC) considered that 'weaponised drones open up a Pandora's box of legal and ethical considerations'. It noted that, while the use of drones is not per se illegal under international law, 'the use of drones is subject to the rule of law, in particular [IHL] and [IHRL]':

Both of these systems of law require the protection of human life. The right to life requires that lethal force only be used where strictly necessary and proportionate, whilst the humanitarian law strives to protect the life of civilians in armed conflict.[4]

5.5        The HRLC drew the committee's attention to two significant studies undertaken by UN officials to develop state legal and accountability standards in relation to 'drones'. These were conducted by the UN Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, Mr Ben Emmerson, and the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, Mr Christof Heyns. The HRLC stated:

In short, these two experts have explained that:

  1. all States need to comply with international law when using, or involved in the use of, drones;
  2. the targeted killing of individuals by drones will be lawful only in very limited circumstances;
  3. use or involvement with drones should be transparent so that there is accountability to the people and international community;
  4. where there have been, or appear to have been, civilian casualties that were not anticipated when a drone attack was planned, a prompt, independent and impartial fact-finding inquiry should be conducted and a public and detailed explanation of the results provided; and
  5. victims of a violation of international law caused by drones should be provided with an effective remedy.[5]

5.6        One of the HRLC's key recommendations was that the 'Australian government not procure armed drones unless it has a system of transparency and accountability for their use that is consistent with Australia's legal obligations, including under international human rights law and international humanitarian law'.[6]

5.7        However, others did not consider that the expanded use of unmanned platforms (including armed UAVs) would have significant legal ramifications for Australia. Some argued that the concerns raised by 'drone strikes' conducted by other countries were not applicable to the use of unmanned platforms by the ADF due to differing legal regimes for the use of force. For example, the Intelligence Services Act 2001 prohibits the Australian Secret Intelligence Service from undertaking paramilitary activities, violence against the person or the use of weapons.[7] Dr Andrew Davies from ASPI observed that 'the use of armed drones by [Australian] civilian agencies would be a dramatic departure from current practice requiring legislative change'.[8]

5.8        Others considered that existing legal frameworks adequately covered unmanned platforms. For example, Northrop Grumman thought that unmanned platforms would bring few new international legal considerations 'into play'. It commented that 'most defence and "warlike" capabilities are already governed by the laws of war and the conventions of conflict, such as: just cause; proportionality of response; minimisation of collateral damage; avoidance of civilian casualties'.[9] Similarly, Dr Ian Henderson argued that the 'resort to the use of force and the regulation of particular instances of the use of force is comprehensively addressed in international law'. He cautioned:

Great care should be taken before identifying limitations or restrictions on the employment and use of unmanned systems. This is because any such limitation or restriction would prima facie apply equally to manned systems as there is no legally significant difference between the two.[10]

5.9        Professor Tim McCormack from PREMT at Melbourne Law School emphasised that, while the applicable legal frameworks will depend on the operational context, 'the law is adequate and capable of regulating the dramatically increasing use of [unmanned platforms]'.[11] He stated:

The existing law is adequate to deal with the existing ADF assets which are used not only for intelligence gathering, surveillance and reconnaissance but also for remotely piloted, armed unmanned vehicle systems. It just has to be applied...[W]e have a track record of compliance with the law. It is something we should be really proud about and eternally vigilant to ensure it continues to be the case.[12]

5.10      An argument was also made that unmanned systems might facilitate greater compliance with legal requirements in armed conflict. For example, Dr Henderson argued that 'the greater intelligence, surveillance and reconnaissance persistence that can be provided by current unmanned systems can facilitate better target discrimination and lead to less incidental injury to civilians and damage to civilian property'. He also argued that remote operators of unmanned platforms 'may be less likely (when compared to operators who are personally at risk) "to resort to greater force to address threats"'.[13] Along the same lines, the Defence submission argued that the 'heightened level of situational awareness of the environment and threat warning, with the ability to further discriminate between combatants, non-combatants and friendly forces' provided by unmanned platforms 'promotes adherence to the Law of Armed Conflict (LOAC)'.[14]

5.11      Defence stated that the ADF's use of unmanned platforms and systems satisfies domestic and international legal obligations, in particular under the Geneva Conventions and the LOAC. It emphasised:

Unmanned ADF air, maritime and land platforms and their control systems, are subject to, and employed under the same legal framework as manned ADF platforms. Specifically, these platforms and associated systems are subject to the same legal considerations and constraints under LOAC as manned ADF platforms.[15]

5.12      Further, Rear Admiral Peter Quinn stated that the use of any unmanned platform in the application of force would be subject to the same robust targeting procedures applicable to manned platforms.[16]

Shared intelligence

5.13      The Centre for Military and Security Law (CMSL) highlighted that multinational military operations, such as those conducted in Afghanistan and Iraq, often involved the sharing and pooling of intelligence with operational partners. This could mean that intelligence generated by an ADF unmanned platform may 'then be used by an operational partner to, for example, facilitate a specific targeting operation'. It stated:

This possibility raises two specifically legal issues for Australia and the ADF: (1) the international law issue of Australia's state responsibility for the outcome perpetrated by the operational partner using ADF [unmanned platform] generated intelligence as a component or enabler of that operation; and (2) the Australian domestic criminal law issue of individual criminal responsibility of ADF personnel for aiding or abetting that outcome (complicity).[17]

5.14      The CMSL, after considering potential scenarios in relation to shared intelligence, recommended 'the development of a clear policy establishing the parameters for ADF [unmanned platform] operations which contribute intelligence to a shared operational pool...'.[18]

Review

5.15      The Australian Red Cross urged that unmanned platforms only be deployed if respect for IHL can be guaranteed. It made several recommendations, including that 'unmanned platform systems as either weapons, means, or method of warfare must be thoroughly tested to ensure that they are capable of complying with IHL at all times'.[19] At the April hearing, Rear Admiral Quinn stated that ADF air, maritime and land unmanned platforms are subject to the same legal considerations and constraints under the LOAC as manned ADF platforms. This included review to determine whether the employment of the unmanned platforms would, in some or all circumstances, be prohibited or restricted by any rule of international law applicable to Australia.[20]

Training

5.16      The Australian Red Cross recommended that 'unmanned platform systems should not be used, controlled, programmed or operated by individuals who are not fully conversant with and understand the principles of IHL'.[21] Further, Dr Phoebe Wynn-Pope observed:

[T]he Australian government has provided support to Australian Red Cross for the purposes of providing dissemination of IHL to the Australian population since the ratification of additional protocol 1 and its enactment into domestic legislation in 1991...However, further outreach would be required to an entirely new sector if unmanned or semiautonomous weapons were to be used during armed conflict. Those involved may not be apparent or easily identifiable. The Australian government may like to carefully consider whether the current dissemination program offered through Australian Red Cross and the training provided by the Australian Defence Force to their own personnel would be adequate to discharge the government's responsibilities with respect to this dissemination.[22]

5.17      At the April hearing, Air Commodore Chris Hanna told the committee that all ADF personnel would receive training on the basic laws of armed conflict and international humanitarian law. If ADF personnel were to be deployed overseas, this would be supplemented by pre-deployment training and specific training on the rules of engagement, which would take into account the LOAC and IHL.[23]

Civilian operation of unmanned platforms

5.18      Civilian operation of military unmanned platforms was an area of policy where conflicting views were expressed during the inquiry. Dr Davies from ASPI observed that the ADF had already accepted civilian contractors, even in front-end support roles, and considered that 'the ADF could not do what it does if it were not for civilian contract support'. However, he distinguished between a civilian supporting an unmanned platform and 'commanding it and controlling it'.[24] He stated:

If they are demonstrably in support of military operations and there is a military chain of command responsible for the targeting decisions, I suspect that there is not a problem if there are civilians actually flying the drones or dealing with some of the intelligence feeds that come from them. That would be my anticipation—that it is how they are used, not the workforce that employs them...[25]

5.19      However, the CMSL identified civilian involvement in the operation of unmanned platforms as a 'potential area of concern'. It stated:

Civilian involvement in warfare is not prohibited under international law; however, those who directly participate in hostilities will be deprived of the legal protection that is accorded to them as civilians and can lawfully be targeted. This is regardless of whether the civilian is operating as a member of a government organisation such as a civilian intelligence organisation or as a civilian contractor.[26]

5.20      Dr Rain Liivoja from the PREMT explained that there was no established test for when a civilian was considered to be taking part in hostilities:

There is still a grey area between the clear situation where a civilian is directly participating in hostilities—say, for instance, launching a Hellfire missile from an unmanned system—to the point where it is a civilian who, say, in Australia is providing basic maintenance for an unmanned platform. The test is unclear, but there are circumstances where civilians have been used as drone operators and where they have clearly crossed the line into direct participation in hostilities.[27]

5.21      The CMSL recommended the ADF, in cooperation with other relevant government departments and agencies, develop comprehensive guidelines on civilian engagement in the operation of unmanned platforms for military purposes.[28] Similarly, PREMT commented:

[C]are should be taken when assigning civilians – for example, contractors or civilian staff members of government agencies – to operate [unmanned platforms] in armed conflict. While international law does not prohibit such practice, the operators run the risk of taking a direct part in hostilities, which makes them legitimate military targets and renders them liable to arrest and prosecution if, for example, travelling abroad after the end of the conflict. Also, facilities from which UVs are operated may become targetable as lawful military objectives.[29]

5.22      The Australian Red Cross similarly recommended that 'unmanned platform systems should not be used, controlled, programmed or operated by individuals whose accountability lies outside military mechanisms of control in relation to potential breaches of IHL'.[30]

Autonomous weapons systems and unmanned platforms

5.23      Many military, civilian and recreational unmanned platforms currently available have a degree of automated functionality designed to reduce operator workload or errors. These functionalities could include automated take-off and landing, height keeping, and route following/planning. However, Mr Ken Crowe from Northrop Grumman observed that in terms of true autonomy 'unmanned systems still have a long way to go':

The aircraft, the ground systems and the underwater systems follow various pre-programmed rules either to repatriate themselves to an area of safety and land or to avoid impacting adversely on their environment. So true autonomy I do not think has arrived in unmanned systems, but they exhibit elements of autonomy. To the untrained observer it may look as if the systems are thinking for themselves, but of course they are not. They are acting under pre-programmed rules and they are following the direction of their pilots or mission commanders back at base.[31]
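
To make the distinction concrete, the sketch below illustrates the kind of fixed, human-authored failsafe rules described above. It is a minimal illustration only: every name, threshold and behaviour is hypothetical and does not represent any real platform's control logic. The point is that each response is pre-programmed in advance by the operators; nothing is 'decided' by the machine.

```python
# Illustrative sketch only: hypothetical pre-programmed failsafe rules of the
# kind described above. All names and thresholds are invented; no real
# platform's control logic is represented.

from dataclasses import dataclass

@dataclass
class PlatformState:
    link_ok: bool          # datalink to the remote pilot is healthy
    inside_geofence: bool  # platform is within its approved operating area
    battery_pct: float     # remaining energy, 0-100

def next_action(state: PlatformState) -> str:
    """Return the pre-programmed action for the current state.

    Each branch is a fixed, human-authored rule; the ordering encodes the
    operators' priorities. Nothing here is learned or self-modifying.
    """
    if not state.inside_geofence:
        return "TURN_BACK"             # avoid impacting adversely on the environment
    if state.battery_pct < 15.0:
        return "RETURN_AND_LAND"       # repatriate to an area of safety
    if not state.link_ok:
        return "LOITER_AND_REACQUIRE"  # hold position and try to regain the pilot
    return "FOLLOW_PILOT_COMMANDS"     # normal remotely piloted operation

# Example: a lost-link platform inside its geofence simply loiters.
print(next_action(PlatformState(link_ok=False, inside_geofence=True, battery_pct=60.0)))
```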

5.24      Fully autonomous unmanned platforms capable of using lethal force do not currently exist. However, so-called autonomous weapons systems (AWS) are being developed. For example, a number of air defence systems have human-supervised autonomous modes which detect and track targets and guide weapons to destroy them, such as the Israeli Iron Dome system or the Aegis Combat System, which will be operated on the RAN Air Warfare Destroyers. Active protection systems which can autonomously detect and intercept incoming munitions are also being deployed on armoured vehicles.[32]
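
The 'human-supervised autonomous mode' described above can be sketched in simplified form as a veto-based pattern (sometimes called 'human on the loop'): the system detects, tracks and selects engagements autonomously, and the supervisor can only interrupt. The sketch below is a hypothetical toy with invented names, not a representation of Iron Dome, Aegis or any other real system.

```python
# Illustrative sketch only: a generic veto-based ("human on the loop")
# engagement pattern. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    threat: bool  # classified as an incoming threat by the sensor chain

def supervised_engagements(tracks: list[Track], vetoed_ids: set[str]) -> list[str]:
    """Engage autonomously detected threats unless the supervisor vetoes them.

    The machine selects engagements by itself; the human can only interrupt.
    Contrast the positive-authorisation pattern sketched later in the chapter.
    """
    actions = []
    for track in tracks:
        if not track.threat:
            continue                                    # autonomous detect/track filters targets
        if track.track_id in vetoed_ids:
            actions.append(f"HOLD {track.track_id}")    # supervisor overrode the system
        else:
            actions.append(f"ENGAGE {track.track_id}")  # no veto received within the window
    return actions

print(supervised_engagements(
    [Track("inbound-1", True), Track("inbound-2", True), Track("airliner-3", False)],
    vetoed_ids={"inbound-2"},
))
```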

5.25      Recent research and development has included a focus on increasing the level of autonomy of unmanned platforms for both civilian and military applications. The International Committee of the Red Cross (ICRC) noted that 'a truly autonomous system capable of operating in a dynamic environment against a range of targets has not yet been developed...[h]owever, there is considerable interest in (and funding of) relevant research'.[33] For example, the US Defense Advanced Research Projects Agency (DARPA) Collaborative Operations in Denied Environment (CODE) project aims to improve the collaborative autonomy of unmanned platforms, including the capability for groups of UAVs to work together under limited human supervision. The program manager for the CODE project stated:

Just as wolves hunt in coordinated packs with minimal communication, multiple CODE-enabled unmanned aircraft would collaborate to find, track, identify and engage targets, all under the command of a single human mission supervisor.[34]
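
As a rough illustration of what 'collaborative autonomy under limited human supervision' can mean in practice, the toy sketch below divides search tasks across a group of platforms with a simple greedy rule; a single human supervisor would approve the resulting plan as a whole rather than steering each aircraft. This is a generic, assumption-laden example, not DARPA's CODE design.

```python
# Illustrative sketch only: a toy greedy task-allocation scheme for a group of
# unmanned platforms under a single supervisor. Not DARPA's CODE design.

from math import dist

def allocate_tasks(uav_positions: dict[str, tuple[float, float]],
                   tasks: list[tuple[float, float]]) -> dict[str, list[tuple[float, float]]]:
    """Greedily assign each task, preferring lightly loaded then nearby UAVs."""
    plan: dict[str, list[tuple[float, float]]] = {name: [] for name in uav_positions}
    for task in tasks:
        # Balance the load across the group first; break ties by distance.
        best = min(uav_positions,
                   key=lambda u: (len(plan[u]), dist(uav_positions[u], task)))
        plan[best].append(task)
    return plan

# The human mission supervisor reviews and approves the plan as a whole,
# rather than directing each aircraft individually.
plan = allocate_tasks({"uav1": (0.0, 0.0), "uav2": (10.0, 0.0)},
                      [(1.0, 1.0), (9.0, 2.0), (5.0, 5.0)])
print(plan)
```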

5.26      Concerns were raised regarding the legal implications of unmanned platforms capable of autonomously using lethal force. There were doubts that an AWS would be capable of adequately complying with fundamental principles of IHL such as proportionality. Uncertainties were also highlighted in relation to accountability for acts performed by AWS which amounted to violations of IHL, including individual criminal responsibility and State responsibility.

5.27      Dr Brendan Gogarty from the University of Tasmania urged the committee to consider the issue of full autonomy of unmanned platforms as a long-term concern requiring 'immediate and wide ranging action':

A computer without human restraints will always be faster than one with some form of human control and therefore, realistically, once one nation has fully autonomous weaponised [unmanned platforms] the others will follow. That situation may be fifty years away, or it may be five, but ultimately, now is the best time to have the debate about whether the community is willing to accept such a future. If it is determined that full weapons autonomy is not an acceptable path then Australia will have to participate in, or even lead, international dialogue towards effective regulation and restriction of such technology.[35]

5.28      The moral and ethical issues regarding the use of AWS were also raised with the committee. The ICRC commented:

Even if technology could one day allow an autonomous weapon system to be fully compliant with IHL in a dynamic environment, there remain some fundamental questions...Additional Protocol I of the Geneva Conventions provides that the acceptability of such systems should be examined according to the principles of humanity and the dictates of public conscience.

Would the dictates of public conscience be prepared to yield to a machine the decision to take human life on a battlefield? And if it is agreed that some human control or oversight is required in such life and death situations, what kind and degree of human control would be meaningful?[36]

5.29      Dr Christian Enemark identified the critical issue as 'whether or how technology can overcome ethical shortcomings in the use of force while preserving the moral influence of human responsibility'. He considered that 'there is little scope for optimism that robotics engineers could program autonomous drones to exercise better ethical judgement than on-board pilots or ground-based operators, and a more serious concern is that these machines might be deployed before achieving even a roughly equal standard'.[37]

5.30      On 27 February 2014, the European Parliament adopted a non-binding resolution on the use of armed drones that included support for a ban on 'the development, production and use of fully autonomous weapons which enable strikes to be carried out without human intervention'.[38] Several human rights and other civil society groups have also commenced a campaign for international action against the development of AWS. The Campaign to Stop Killer Robots has called for a comprehensive, pre-emptive prohibition on the development, production and use of fully autonomous weapons, achieved through an international treaty as well as through national laws and other measures. It has also urged all countries to consider and publicly elaborate their policy on fully autonomous weapons.[39] A range of other possible measures have been suggested to regulate AWS, including controls to slow their proliferation, requirements that they be defensive in nature, limitations on their firepower, or compulsory neutralising mechanisms.[40]

5.31      The Australian Red Cross outlined that States Parties to the Convention on Certain Conventional Weapons (CCW) have convened a number of meetings to discuss issues surrounding AWS together with observer States, UN agencies, the ICRC, NGOs and subject matter experts.[41] Australia, as a party to the CCW, has participated in these discussions. At the first informal meeting of experts, the Australian representative, former Ambassador Mr Peter Woolcott, stated:

For us, this topic has raised many more questions than answers. Consistent with Australia's approach to other emerging technologies, like in the cyber context, Lethal Autonomous Weapon Systems, if they are to be used, should only be used in accordance with existing international law. How international law, including the use of force, international humanitarian law and international human rights law, applies to Lethal Autonomous Weapon Systems will need to be addressed as the technology continues to develop...

Like any other weapon, Australia notes that a Lethal Autonomous Weapon System might be employed in a defensive mode or an offensive mode. As such, Australia would like to eventually see a definition of a Lethal Autonomous Weapon System which identifies its key distinguishing aspects to enable further discussion on this topic.[42]

5.32      Australia did not make a statement at the next meeting of experts held in Geneva on 13–17 April 2015. Many countries and organisations participating at that meeting identified the concept of 'meaningful human control' as important to potential future regulation of AWS.[43] However, other countries urged caution, highlighting definitional issues and arguing that it was premature to consider specific action to regulate AWS. For example, the UK stated:

To legislate now, without a clear understanding of the potential opportunities as well as the dangers of a technology that we cannot fully appreciate, would risk leading to the use of generalised and unclear language which would be counterproductive. IHL has successfully accommodated previous evolutions of military technology...There is no reason to believe that IHL will not be capable of dealing with an evolution in automation.[44]

5.33      In 2012, the US Department of Defense (US DoD) issued a policy statement on autonomy in weapons systems. The directive appeared to be the first policy statement by any country on AWS. In particular, the directive states that it is US DoD policy that '[a]utonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force'.[45] The US delegation to the informal meeting of experts on AWS in April 2015 described the framework established by the directive:

The framework establishes a deliberative approval process by senior officials, sets out the technical criteria that would need to be satisfied in order to develop autonomous weapon systems, and then assigns responsibility within our Defense Department for overseeing the development of autonomous weapons systems. The Directive imposes additional requirements beyond what is normally required during our weapons acquisition process. These additional requirements are designed to minimize the probability and consequences of failure in autonomous and semi-autonomous weapons systems that could lead to unintended engagements and ensure appropriate levels of human judgment over the use of force.[46]

5.34      Defence stated that its approach was that 'where lethal force is involved a trained operator will remain responsible for the application of that force'.[47] It noted:

It is theoretically possible that an unmanned system with sufficient processing power and a library of threat signatures could be armed and programmed to apply lethal force autonomously. The ADF will embrace semi-autonomous systems where that capacity can save lives or reduce exposure – for example, by replacing truck drivers in some vehicles of a resupply convoy with autonomous systems that can follow the vehicle ahead – but where lethal force is involved a trained operator will remain responsible for the application of that force.[48]

5.35      At the April hearing, Rear Admiral Peter Quinn stated:

Australian unmanned systems retain a human in the loop, meaning that, while some basic functions are conducted autonomously, ultimate control is retained by a system operator. This will remain the case if in the future the ADF asks the government to consider the benefits of arming unmanned systems.[49]
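
Rear Admiral Quinn's 'human in the loop' arrangement can be contrasted with the veto-based pattern sketched earlier in this chapter: here the system may find and track a target autonomously, but lethal force requires an explicit, positive decision by a trained operator, and the default is always to hold fire. The sketch below is hypothetical and does not represent any actual ADF system or procedure.

```python
# Illustrative sketch only: a "human in the loop" authorisation gate.
# Hypothetical names; no actual ADF system or procedure is represented.

from enum import Enum

class OperatorDecision(Enum):
    AUTHORISE = "authorise"
    REFUSE = "refuse"
    NO_RESPONSE = "no_response"

def apply_force(track_id: str, decision: OperatorDecision) -> str:
    """Release a weapon only on an explicit, positive operator authorisation.

    Unlike the veto-based pattern, silence here means no engagement: absent a
    trained operator's decision, the default is always to hold fire.
    """
    if decision is OperatorDecision.AUTHORISE:
        return f"ENGAGE {track_id}"      # the operator remains responsible for the force used
    return f"HOLD FIRE on {track_id}"    # refusal or no response: no force applied

print(apply_force("contact-7", OperatorDecision.NO_RESPONSE))  # -> HOLD FIRE on contact-7
```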

Other legal and regulatory issues

5.36      A broad range of other legal issues was raised in relation to the use of unmanned platforms. The majority of these issues were also applicable to civilian or government use of unmanned platforms. These included negligent use, traffic regulation considerations, use of evidence gathered, privacy regulations and regulation of no-fly zones.[50]
