Chapter 5 - Artificial Intelligence and Autonomous Weapons related issues

Overview

5.1 Artificial Intelligence (AI) and Autonomous Weapon Systems (AWS) have an ever-increasing role within militaries worldwide. From systems capable of predictive analysis to platforms that can prosecute targets without human intervention, careful consideration is warranted as Defence (comprising the Department of Defence and the Australian Defence Force (ADF)) continues to evolve its capabilities by integrating emerging technologies.

5.2 The Subcommittee was interested in examining the role of AI and AWS within the ADF and the requisite policy settings that need to account for and address relevant moral, legal, ethical, and regulatory matters within Australia and internationally.

5.3 As defined by the Macquarie Dictionary, AI is:

1. the ability of a computer or other device or application to function as if possessing human intelligence.

2. the branch of computer science which deals with the design and use of machines that have this ability.

5.4 The use of AI in the Defence context is varied, with applications ranging from automating office processes through to analysing images.[1]

5.5 There is no internationally agreed definition of AWS. This chapter adopts the meaning given in the submission from the Australian Human Rights Commission (AHRC), in which AWS are broadly understood using the following definitions:

[Where] [a]utomation refers to systems which perform tasks that ordinarily involve human input…

[and] [a] weapon system is ‘[a] combination of one or more weapons with all related equipment, materials, services, personnel, and means of delivery and deployment (if applicable) required for self-sufficiency.’[2]

5.6 The Subcommittee understands that, generally, AWS require a human to be ‘in the loop’ or ‘on the loop’ to make a decision to inflict harm.[3] However, most of the evidence the Subcommittee received during the inquiry specifically discussed Lethal Autonomous Weapon Systems (LAWS), which are generally understood as ‘weapons that independently select and attack targets’.[4]

5.7 Using the above definitions, this chapter examines the evidence received on AI and AWS policy and regulation, employment, and legal considerations, discussing AWS generally and LAWS specifically.

Asymmetric advantage

5.8 The National Defence: Defence Strategic Review 2023 (2023 DSR) emphasised the need for Defence to transition from a balanced force to a focused force, capable of utilising advanced and emerging technologies to achieve asymmetric advantage against its adversaries.[5] An essential element of this transition is AUKUS Pillar II (Advanced Capabilities) and the development of ‘advanced capabilities in areas such as [AI], hypersonics and maritime domain awareness’.[6]

5.9 In response to the 2023 DSR recommendations, the Australian Government agreed that

… new technology and asymmetric advantage is a priority and will ensure the ADF has the capacity to engage in impactful projection across the full spectrum of proportionate response.[7]

5.10 Defence advised the Subcommittee that its priority is to capitalise on AI and autonomy in support of logistics, intelligence, surveillance and reconnaissance, targeting, tracking and strike applications.[8]

Australian Government policy and regulation

5.11 Domestically, the Department of Industry, Science and Resources has developed, and is responsible for, Australia’s voluntary AI Ethics Principles, which aim to ‘ensure AI is safe, secure and reliable’. The principles include:

  • Human, societal and environmental wellbeing: AI systems should benefit individuals, society and the environment.
  • Human-centred values: AI systems should respect human rights, diversity, and the autonomy of individuals.
  • Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
  • Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
  • Reliability and safety: AI systems should reliably operate in accordance with their intended purpose.
  • Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.
  • Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.
  • Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.[9]
5.12 Internationally, Australia contributes to the AI policy development discussion through various organisations such as the Global Partnership on AI, currently hosted by the Organisation for Economic Co-operation and Development.[10] Further, Australia contributed to the Bletchley Declaration[11] from the AI Safety Summit in 2023 and to the United Nations Group of Governmental Experts on Lethal Autonomous Weapons Systems.[12]
5.13 RAND Australia argued that while these policies and frameworks continue to be developed, the technologies themselves will ‘advance at lightning pace, but in the absence of effective international regulation or united policy response’.[13]
5.14 In the meantime, Defence has declared a commitment to the responsible use of AI, ‘with careful consideration of the opportunities and risks, consistent with our international legal obligations’.[14] Defence advised that its responsible AI policy is informed by multiple sources, including the Australian AI Ethics Framework, the Australian Signals Directorate (ASD) Ethical AI in ASD policy, and the policies of likeminded partners including the United Kingdom (UK) and the United States of America (US).[15] Australia has also considered the US-led Political Declaration on Responsible Military Use of AI and Autonomy.[16]
5.15 In its submission, RAND outlined that the Australian Army’s Robotics and Autonomous Systems framework provides four levels of autonomy within the spectrum of autonomous systems:
  • ‘remote operated’;
  • ‘automatic’ (where a human remains in the loop);
  • ‘autonomic’ (where a human supervises or tasks a machine remaining within the decision loop); and
  • ‘autonomous’ (a human starts the decision loop with the system then acting independently).[17]
5.16 RAND stated that similar definitions have been adopted by other militaries, such as those of the US, China and the UK; however, each ‘differs to some legally significant degree’.[18]
5.17 However, the Law and the Future of War Research Group submission was critical of Defence’s management of its AI policy, stating that ‘from a policy perspective [it] has been confused and convoluted’.[19] Further, it stated that greater transparency is needed around who is responsible for the policy, and that greater public consultation is required as AI is increasingly utilised within the military.[20] The group also proposed that Defence could take a more proactive stance, in conjunction with Australia’s allies, on the regulation of autonomy and AWS more generally.[21]

Employing Artificial Intelligence and Autonomous Weapon Systems

5.18 As noted above, the Defence submission highlighted its desire to capitalise on AI and AWS in support of ‘logistics, intelligence, surveillance and reconnaissance, targeting, tracking and strike applications’.[22]

5.19 The Australian Strategic Policy Institute (ASPI) suggested that AI could ‘replace the human management of mundane tasks’, freeing up human capability for tasks where it is ‘ethically required or operationally advantageous or necessary’.[23]

5.20 ASPI stated that seeking to automate as many tasks as possible would allow Defence to optimise its available trained human capital, with the added benefit that human-machine teaming will deliver a ‘force multiplier effect’.[24] Dr Malcolm Davis, Senior Analyst, Defence Strategy and Capability at ASPI, explained that:

… the effective application of AI could support militarily relevant human machine teaming, as the second key area, that could accelerate and enhance ADF military effectiveness in areas as diverse as logistics and supply, as well as the more traditional areas such as command and control and the delivery of effects.[25]

5.21 In its submission, ASPI also argued that further utilisation of AI and autonomous systems would enable personnel to ‘operate at greater distances from risk in a vast suite of scenarios’, such as medical evacuation and combat.[26]

5.22 ASPI highlighted the importance of AUKUS Pillar II to the development of AWS and AI. Dr Davis advised the Subcommittee that, coupled with the Advanced Strategic Capabilities Accelerator, there is an opportunity to ‘deliver AI and autonomous systems to the ADF in a way that is consistent with ethical requirements’.[27]

5.23 RAND noted the ADF’s commitment to ensuring AWS are only ever employed in a ‘manner that is ethical and compliant with Australia’s obligations under international humanitarian law’ (IHL).[28] RAND emphasised that, fundamentally:

It is easy to make the argument that such commitment places the ADF at a disadvantage compared to less scrupulous strategic threat actors, but it is important that Defence leadership continues to disregard such an argument.[29]

5.24 Aside from Australia’s obligation to ensure that weapon use is consistent with its international law obligations, RAND reiterated that the ADF has an obligation to Australian soldiers, sailors and aircrew to prevent their exposure to ‘the kind of moral injury that would result from a co-deployed AWS engaging an illegitimate target’.[30] RAND reflected:

… this position does present Australia with a wicked problem: how should the ADF secure a competitive advantage whilst ensuring that its use of AWS, and other military applications of AI, remains ethical and legal?[31]

5.25 The AHRC was concerned about the potential impact of what it identified as ‘algorithmic bias’.[32] It believes this issue may arise when AI ‘produces outputs that result in unfairness or discrimination’, potentially risking unlawful discrimination when AI capabilities are used in civilian contexts.[33]

5.26 ASPI stated that, to avoid irresponsible and dangerous weapons development, Defence’s aim should be to:

Develop AI-enabled autonomous systems that exploit the systemic human advantage that our highly trained, trusted and experienced military personnel offer by keeping humans ‘in’ or ‘on’ the loop for lethal capabilities. This is especially critical for command and control systems at both the headquarters and tactical level. Keeping humans ‘in’ or ‘on’ the loop can provide tailored levels of human control—neither wholly relegates operational decision-making or the firing of weapons to a machine.[34]

5.27 ASPI’s standpoint appears consistent with the AHRC’s statement that:

Where military technology is not capable of functioning with an informed human in the loop, or for practical reasons particular lethal military technology is designed to be free of human interaction or oversight, it should be prohibited.[35]

5.28 The AHRC emphasised that it is not just a human that is required in the loop, but an informed human:

Where AI-technologies, such as LAWS, are making critical decisions or recommendations – there must be an informed human in the loop (as distinct from existing literature which only refers to a ‘human in the loop’) to actively scrutinise outcomes. These human overseers must also be aware of any unconscious biases e.g. (automation bias).[36]

5.29 RAND maintained the position that a human should be in, or on, the loop, but argued that additional system risks need to be considered. It also raised the issue of ‘automation bias’:

Humans have a tendency to overly trust in technology once it diffuses and matures; however this automation bias can lead trained and experienced humans to trust in the technology rather than their own judgement, amplifying the risk of unanticipated consequences, for example via malicious actor interference or training data bias.[37]

5.30 RAND also discussed the importance of an effective Test, Evaluation, Verification and Validation (TEV&V) model in maintaining meaningful human control of AI.[38] RAND stated that the process can become complex, as AI-enabled systems can:

… act in undesirable or unpredicted ways due to the complex, and often opaque, interactions between system elements, behaviour and performance.[39]

5.31 Dr Lauren Sanders, who gave evidence in a private capacity, reinforced this point and stated:

If we are nesting capabilities in our targeting cycle, those capabilities all need to have independent assurance that they can meet the standards of the requisite level before we rely on them. That also requires commanders and decision-makers to be appropriately trained to understand the limitations of those capabilities. A lot of these issues are mitigated in the same way as other new technology risks are mitigated when introducing new military capabilities. It is a question of assurance testing, understanding limitations and not overly relying on machines in circumstances they are not designed to be used for.[40]

Lethal Autonomous Weapon Systems and International Humanitarian Law

5.32 As stated above, the AHRC submission, while pointing out that there is no internationally agreed definition, submitted that LAWS could be understood as weapons that ‘independently select and attack targets’,[41] without a human ‘in the loop’ or ‘on the loop’, as distinct from AWS.

5.33 The AHRC stated that ‘this removing, or reducing, human overseers in the deployment and operation of AI and autonomous systems is a key concern about the use of LAWS’.[42]

5.34 However, the AHRC also noted that ‘[m]ost LAWS, in their current form, are not truly fully autonomous weapons systems’, as there is usually ‘some form of human intervention, even if only to activate it’.[43]

5.35 The AHRC emphasised its position that, while it recognises there are significant arguments in favour of LAWS, ‘it is unlikely the technology can ever comply with international humanitarian law’.[44]

5.36 The AHRC also commented that Australia has previously appeared reluctant to regulate LAWS due to autonomous technologies having ‘distinct benefits for the promotion of humanitarian outcomes and avoidance of casualties’.[45]

5.37 RAND submitted that there is a growing sense of concern among the international community around how to ‘integrate [LAWS]… into international humanitarian law’, a discussion in which Australia has been an ‘active participant’.[46]

5.38 RAND was emphatic in its evidence to the Subcommittee that:

Among the few principles that is almost universally agreed to amongst the international community in relation to AWS (and military applications of AI more broadly) is that the decision to end human life should remain under ‘Meaningful Human Control.’[47]

5.39 RAND pointed to the passage of UN General Assembly Resolution 78/241 in December 2023, which it said:

…acknowledged “serious concerns” stemming from LAWS and requested that the Secretary General seek the views of member states toward a report (due September 2024) on how to address ethical, legal and humanitarian risks stemming from LAWS. Although a strong signal in moving the issue to the agenda of the General Assembly, this resolution still simply calls for more discussion and review. As at the time of writing, therefore, we remain far from specific international humanitarian law prohibitions on any military application of AI beyond those that Australia is already obligated to follow.[48]

5.40 The International Committee of the Red Cross (ICRC) states that international humanitarian law is:

…a set of rules which seek, for humanitarian reasons, to limit the effects of armed conflict. It protects persons who are not or are no longer participating in the hostilities and restricts the means and methods of warfare. International humanitarian law is also known as the law of war or the law of armed conflict…

[It] prohibits all means and methods of warfare which: fail to discriminate between those taking part in the fighting and those, such as civilians, who are not, the purpose being to protect the civilian population, individual civilians and civilian property; cause superfluous injury or unnecessary suffering; cause severe or long-term damage to the environment. Humanitarian law has therefore banned the use of many weapons, including exploding bullets, chemical and biological weapons, blinding laser weapons and anti-personnel mines.[49]

5.41 In its submission, the AHRC posited that

Australia’s past reluctance to regulate LAWS appeared to be predicated on three key issues:

  • A lack of an internationally agreed definition of LAWS.
  • Benefits of the technology.
  • Existing review mechanisms being considered a suitable safeguard.[50]
5.42 On the third point, the AHRC pointed out that ‘Australia has previously expressed the view that existing international humanitarian law frameworks are a sufficient regulatory approach’, for example Article 36 of the Protocol Additional to the Geneva Conventions of 12 August 1949 relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977 (Article 36).[51]
5.43 The Subcommittee received evidence of differing views on whether Article 36 adequately governs AI and AWS.
5.44 Defence submitted that AWS have ‘the potential to save lives by minimising casualties and reducing risk to Defence personnel, but must be used in accordance with international law’.[52] Defence explained that:

Existing international law regulates the development, acquisition, deployment and use of new and emerging technologies, including autonomous weapon systems.[53]

5.45 Defence further stated that:

All ADF weapons, means and methods of warfare must comply with Australia’s international and domestic legal obligations, and are subject to legal review under Article 36 of Additional Protocol I of the Geneva Conventions. Australia upholds its Article 36 obligation by conducting legal reviews of new and materially modified weapons, means and methods of warfare prior to acquisition and operational use to ensure they are capable of use in accordance with Australia’s international legal obligations. If a weapon system cannot be used in accordance with Australia’s legal obligations, Defence will not deploy it.[54]

5.46 Additionally, Defence explained that a ‘system of controls’ approach, comprising the layering of governance architecture, policies and procedures, applies to all technologies, including autonomous weapons and AI.[55] This ‘system of controls’ approach ‘enables effective management of risks in using capabilities at the operational and technical levels’.[56]

5.47 However, the AHRC’s view was that:

While a critical safeguard, article 36 processes lack both transparency and accountability. There is no mechanism to ensure compliance should a state fail to conduct an art 36 review. Additionally, the process is also predicated on good faith reviews, as states are also not obliged to disclose the outcome of these reviews.[57]

5.48 The AHRC pointed to an abundance of research which ‘directly considers the inability of autonomous weapons systems to comply with IHL’.[58] The recent UK House of Lords inquiry by the AI in Weapon Systems Committee also heard significant evidence reinforcing this conclusion.[59] The AHRC submitted that it is:

… arguably impossible for AI to comply with the proportionality rule, because AI is unable to understand the intrinsic value of human life, thus making it unable to undertake any weighing exercise in relation to proportionality – irrespective of any future developments in the technology.[60]

5.49 The AHRC also expressed concern that it is ‘unclear where legal liability would lie when LAWS violates IHL’.[61] The AHRC listed already-identified candidates for such liability, including software programmers, those who build or sell the hardware, military commanders, subordinates who deploy LAWS, and/or political leaders. The AHRC concluded that:

Without an individual being held to account for the conduct of LAWS, it is unlikely that IHL sufficiently protects human rights by ensuring accountability … leading to a ‘responsibility vacuum’ providing impunity for all uses of LAWS.[62]

5.50 Professor Rain Liivoja, Research Lead at the Law and the Future of War Research Group at the University of Queensland, provided a contrasting view, proposing that IHL is a ‘sufficient foundational basis’ for AI. He advised that:

The existing rules of international humanitarian law are in principle, capable of dealing with all kinds of new technologies that states may adopt. However, many of those rules may require renewed interpretation or a more advanced understanding of how they apply, in light of new technologies.[63]

5.51 Professor Liivoja added that:

International humanitarian law and international criminal law often look for the person who acted with intent and knowledge in relation to a particular breach of the law. That can cause complications when the system operates with a degree of autonomy. It might be that no individual person has the requisite intent for criminal responsibility…

In this circumstance, though, the state responsibility kicks in. If it is the state that has decided to utilise the system, its responsibility steps in.[64]

Committee comment

5.52 Australia is facing a changing strategic environment and an elevated risk of state-on-state conflict. The Australian Government, through Defence and AUKUS Pillar II, is prioritising the development and acquisition of asymmetric capabilities such as AI and AWS. This supports the Australian Government’s intent that the ADF have the capacity to engage in impactful projection across the full spectrum of proportionate responses.

5.53 The Subcommittee understands that, to acquire asymmetric capabilities, Defence is required to consider and assess the full spectrum of AI and AWS and their applicability to Defence capability requirements. However, Defence knows that it must continue to ensure that proposed capability developments or acquisitions, and their intended use, are compliant with international humanitarian law.

5.54 The Subcommittee considers it inevitable that rapid technological advancements in computing and AI will continue to be integrated into current and future weapon systems and capabilities. To provide surety of system operation, functionality and output, Defence will need to develop test and evaluation frameworks that can verify and validate these systems prior to their introduction into service. Defence will also need to ensure these systems are designed to reduce automation bias, and that personnel are appropriately trained to understand the limitations of the capabilities.

5.55 The Subcommittee reviewed with interest the evidence on whether current international humanitarian law, in particular Article 36 of the Protocol Additional to the Geneva Conventions of 12 August 1949 relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977, is up to the task of regulating modern automation and AI in current and future weapon systems.

5.56 The Subcommittee is of the view that it is, having been persuaded by the evidence in support of this position. Further, the Subcommittee believes that state responsibility will continue to apply in the event of an ‘individual responsibility’ vacuum.

5.57 The Subcommittee acknowledges Australia’s ongoing involvement in, and contribution to, AI policy development in international fora. This may lead to future developments in international law and policy, of which Defence must keep abreast.

5.58 The Subcommittee strongly urges Defence to ensure that it continues to review and meet the requirements of Article 36 of Additional Protocol I of the Geneva Conventions, and any other relevant international legal obligations, when developing, modifying, acquiring and deploying Defence capabilities.

Footnotes

[1]Department of Defence (Defence), ‘Collaboration on the next generation of AI’, 21 December 2023, https://www.defence.gov.au/news-events/news/2023-12-21/collaborating-next-generation-ai, viewed 17 September 2024.

[2]Australian Human Rights Commission (AHRC), Submission 8, p. 2, drawing on the Australian Army, Robotics and Autonomous Systems Strategy v2.0, 2022, p. 5; and the National Institute of Standards and Technology Computer Security Resource Center, ‘Weapons Systems’, https://csrc.nist.gov/glossary/term/weapons_system, viewed 18 September 2024.

[3]RAND Australia, Submission 1, p. 6.

[4]AHRC, Submission 8, p. 2, drawing on the International Committee of the Red Cross, ‘Autonomous Weapons Systems: Technical, Military, Legal and Humanitarian Aspects’, Expert Meeting Report, 2014, p. 7.

[5]Defence, National Defence: Defence Strategic Review 2023, p. 53.

[6]Defence, National Defence: Defence Strategic Review 2023, p. 72.

[7]Defence, National Defence: Defence Strategic Review 2023, p. 106.

[8]Department of Defence (Defence), Submission 10, p. 30.

[9]Department of Industry, Science and Resources (DISR), ‘Australia’s Artificial Intelligence Ethics Framework’, https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles, viewed 19 July 2024.

[10]Organisation for Economic Co-operation and Development, ‘The Global Partnership on AI’ (GPAI), https://oecd.ai/en/gpai, viewed 19 July 2024.

[11]DISR, ‘The Bletchley Declaration by Countries Attending the AI Safety Summit 1-2 November 2023’, https://www.industry.gov.au/publications/bletchley-declaration-countries-attending-ai-safety-summit-1-2-november-2023, viewed 19 July 2024.

[12]United Nations, Office for Disarmament Affairs, ‘Convention on Certain Conventional Weapons – Group of Governmental Experts on Lethal Autonomous Weapons Systems’, https://meetings.unoda.org/ccw-/convention-on-certain-conventional-weapons-group-of-governmental-experts-on-lethal-autonomous-weapons-systems-2023, viewed 19 July 2024.

[13]RAND Australia, Submission 1, p. 6.

[14]Defence, Submission 10, p. 30.

[15]Defence, Submission 10, p. 30.

[16]US State Department, ‘Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy’, https://www.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy/, viewed 19 July 2024.

[17]RAND Australia, Submission 1, p. 6.

[18]RAND Australia, Submission 1, p. 5.

[19]Law and the Future of War Research Group, Submission 6, p. [7].

[20]Law and the Future of War Research Group, Submission 6, p. [7].

[21]Law and the Future of War Research Group, Submission 6, p. [8].

[22]Defence, Submission 10, p. 30.

[23]ASPI, Submission 12, p. [5].

[24]ASPI, Submission 12, pages [5] and [7].

[25]Dr Malcolm Davis, Committee Hansard, Canberra, 1 March 2024, p. 32.

[26]ASPI, Submission 12, p. [5].

[27]Dr Malcolm Davis, Committee Hansard, Canberra, 1 March 2024, p. 32.

[28]RAND Australia, Submission 1, pages 6-7.

[29]RAND Australia, Submission 1, p. 6.

[30]RAND Australia, Submission 1, p. 6.

[31]RAND Australia, Submission 1, p. 6.

[32]AHRC, Submission 8, p. 6.

[33]AHRC, Submission 8, p. 6.

[34]ASPI, Submission 12, pages 6-7.

[35]AHRC, Submission 8, p. 5.

[36]AHRC, Submission 8, p. 6.

[37]RAND Australia, Submission 1, p. 9.

[38]RAND Australia, Submission 1, p. 8.

[39]RAND Australia, Submission 1, p. 8.

[40]Dr Lauren Sanders, Committee Hansard, Canberra, 1 March 2024, p. 29.

[41]AHRC, Submission 8, p. 3, drawing on the International Committee of the Red Cross, ‘Autonomous Weapons Systems: Technical, Military, Legal and Humanitarian Aspects’, Expert Meeting Report, 2014, p. 7.

[42]AHRC, Submission 8, p. 5.

[43]AHRC, Submission 8, p. 1, drawing on Qerim Qerimi, ‘Controlling Lethal Autonomous Weapons Systems: A Typology of the Positions of States’, Computer Law and Security Review, 50, (2023), p. 1.

[44]AHRC, Submission 8, p. 14.

[45]AHRC, Submission 8, p. 13, drawing on Qerim Qerimi, ‘Controlling Lethal Autonomous Weapons Systems: A Typology of the Positions of States’, Computer Law and Security Review, 50, (2023), pages 1, 8 and 11.

[46]RAND Australia, Submission 1, p. 7.

[47]RAND Australia, Submission 1, p. 7.

[48]RAND Australia, Submission 1, p. 7.

[49]International Committee of the Red Cross, ‘What is International Humanitarian Law?’, https://www.icrc.org/sites/default/files/document/file_list/what-is-ihl-factsheet.pdf, viewed 11 September 2024.

[50]AHRC, Submission 8, p. 9, drawing on Australia, ‘National Commentary Lethal Autonomous Weapons Systems’ (National Commentary, Convention on Certain Conventional Weapons, August 2020) p. 2; see generally Sonia Chakrabarty, et al., ‘A Compilation of Materials Apparently Reflective of States’ Views on International Legal Issues Pertaining to the use of Algorithmic and Data-reliant Socio-technical Systems in Armed Conflict’ (Paper, Harvard Law School, 2020), p. 6.

[51]AHRC, Submission 8, p. 11, drawing on Australian Defence Force, Concept for Robotics and Autonomous Systems, 11 November 2020, p. [4.17].

[52]Defence, Submission 10, p. 30.

[53]Defence, Submission 10, p. 30.

[54]Defence, Submission 10, p. 30.

[55]Defence, Submission 10, p. 31.

[56]Defence, Submission 10, p. 31.

[57]AHRC, Submission 8, p. 11.

[58]AHRC, Submission 8, p. 14.

[59]UK Parliament, House of Lords Library, ‘AI in Weapon Systems Committee report: Proceed with caution’, https://lordslibrary.parliament.uk/ai-in-weapon-systems-committee-report-proceed-with-caution/, viewed 19 July 2024.

[60]AHRC, Submission 8, p. 15.

[61]AHRC, Submission 8, p. 17.

[62]AHRC, Submission 8, p. 18.

[63]Professor Rain Liivoja, Committee Hansard, Canberra, 1 March 2024, p. 27.

[64]Professor Rain Liivoja, Committee Hansard, Canberra, 1 March 2024, p. 27.