Chapter 1 - Introduction and background

Introduction

1.1 On 26 March 2024, the Senate resolved that a select committee, to be known as the Select Committee on Adopting Artificial Intelligence (AI), be established to inquire into and report on the opportunities and impacts for Australia arising out of the uptake of AI technologies in Australia, including consideration of:

(a) recent trends and opportunities in the development and adoption of AI technologies in Australia and overseas, in particular regarding generative AI;

(b) risks and harms arising from the adoption of AI technologies, including bias, discrimination and error;

(c) emerging international approaches to mitigating AI risks;

(d) opportunities to adopt AI in ways that benefit citizens, the environment and/or economic growth, for example in health and climate management;

(e) opportunities to foster a responsible AI industry in Australia;

(f) potential threats to democracy and trust in institutions from generative AI; and

(g) environmental impacts of AI technologies and opportunities for limiting and mitigating impacts.[1]

1.2 The committee was required to report on or before 19 September 2024.[2]

Extension of the inquiry and interim report

1.3 However, on 17 September 2024, the Senate agreed to extend the committee’s reporting date to 26 November 2024 to allow the inquiry to consider any impacts of generative AI on the federal election in the United States (US). The US election was subsequently held on Tuesday, 5 November 2024.

1.4 On 10 October 2024, the committee tabled an interim report setting out the evidence received by the inquiry regarding:

the potential for AI technology, and particularly generative AI, to influence electoral processes and undermine public trust and confidence in Australian democracy more generally; and

policy options for mitigating the risks of AI technology in relation to electoral and democratic processes.

1.5 The committee’s majority interim report made five recommendations relating to the use of AI in electoral and political contexts.[3]

Impact of AI on US election

1.6 Following the US election, the committee notes that AI appears not to have had a significant impact on the course or outcome of the electoral contest, and there were relatively few reports of deepfakes or other AI-generated content being used to sow political disinformation or influence voters.

1.7 However, the committee notes that there were significant instances of disinformation employed in the US election, including content identified as emanating from Russia as part of that country’s continuing efforts to disrupt and influence foreign elections.

1.8 While some of the more notable incidents—including fake videos circulated on social media platforms purporting to show ballot fraud and hoax bomb threats called into polling places—did not involve AI-generated content, they nevertheless demonstrate that the use of disinformation in electoral and political contexts remains a significant concern.

1.9 Further, the committee considers it critical that Australia continue to monitor the use and impact of AI-generated deepfakes and other content on elections, to identify policy and legislative responses that can maintain and bolster trust in democratic processes and institutions while protecting free speech. In this regard, the committee re-endorses the recommendations of its interim report as practical steps for the government to undertake to address the risks posed by AI to democracy.

1.10 Moreover, given the likelihood that such efforts to promote widespread disinformation in the context of electoral contests will continue, the committee emphasises the importance of ensuring that social media platforms are held accountable for the content that they publish.

Conduct of the inquiry

1.11 Details of the inquiry were made available on the committee's webpage, and organisations, key stakeholders and individuals were invited to provide submissions.

1.12 The committee received 245 public submissions, which are listed in Appendix 1 of this report, and held the following public hearings:

20 May 2024, in Canberra;

21 May 2024, in Sydney;

16 July 2024, in Canberra;

17 July 2024, in Canberra;

16 August 2024, in Canberra; and

11 September 2024, in Canberra.

1.13 A list of the organisations and individuals who attended as witnesses at these public hearings is in Appendix 2. Public submissions, additional information received by the committee and Hansard transcripts are all available on the committee's website.[4]

Acknowledgements

1.14 The committee thanks all individuals and organisations who have contributed to the inquiry by making written submissions, providing additional information, and appearing at public hearings.

Notes on references

1.15 References to the Committee Hansard may be references to a proof transcript. Page numbers may differ between proof and official transcripts.

1.16 Citations have been omitted from material quoted throughout the report.

Report structure

1.17 This report is structured as follows:

Chapter 1 – Introduction and background;

Chapter 2 – Regulating the AI industry in Australia;

Chapter 3 – Developing the AI industry in Australia;

Chapter 4 – Impacts of AI on industry, business and workers;

Chapter 5 – Automated decision-making; and

Chapter 6 – Impacts of AI on the environment.

Definitions

1.18 This section describes some of the concepts and definitions used in this report.

1.19 The term ‘artificial intelligence’ or ‘AI’ is broad and has expanded to cover a diverse range of technologies. The submission of Xaana.Ai noted:

...the term "artificial intelligence" (AI) has become a buzzword, encompassing a vast and often ambiguous range of technologies. What was once described as "big data" or "predictive analytics" can now be readily rebranded as AI, b[l]urring the lines between distinct concepts. Additionally, confusion arises from the tendency to conflate AI with automation.[5]

1.20 This report uses the following definitions:[6]

AI Technologies

1.21 Artificial intelligence (AI): an engineered system that generates predictive outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives or parameters without explicit programming. AI systems are designed to operate with varying levels of automation, and include many relatively commonplace systems that may not previously have been widely recognised as employing AI (a brief illustrative sketch follows the list below), including:

computer vision (where computers are able to identify and understand objects and people in images or videos);

computer voice recognition using machine learning;

aircraft and vehicle autopilot;

weather forecasting;

Netflix recommendations;

Google and social media advertising algorithms;

game systems that play, for example, chess or Go (such as AlphaGo); and

surgery robots.[7]
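
To make concrete the distinction between explicit programming and a system whose predictive outputs are derived from data, the following minimal Python sketch (purely illustrative; the task and data are hypothetical and not drawn from evidence to the inquiry) trains a small classifier and then produces a predictive output for an input no programmer wrote a rule for:

```python
# Illustrative sketch only: a tiny "engineered system" whose output is
# learned from data rather than explicitly programmed.
# Requires the third-party scikit-learn library.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [hours of study, hours of sleep] -> pass (1) / fail (0)
X_train = [[2, 8], [1, 4], [6, 7], [8, 6], [3, 5], [7, 8]]
y_train = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)      # the decision rules are derived from the data

# A predictive output for a new, unseen input: no programmer wrote an
# explicit rule for this particular case.
print(model.predict([[5, 6]]))
```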

1.22 Amazon Web Services describes AI as ‘a computer science discipline that enables software to solve novel and difficult tasks with human-level performance’.[8]

1.23 Artificial General Intelligence (AGI): a field of AI research that attempts to create systems with human-like intelligence and the ability to self-teach. AGI systems are intended to perform tasks without being trained by humans, and would be recognised in popular culture as the types of AI portrayed in movies such as The Terminator or I, Robot. Amazon Web Services describes AGI as a system that:

…can solve problems in various domains, like a human being, without manual intervention. Instead of being limited to a specific scope, AGI can self-teach and solve problems it was never trained for. AGI is thus a theoretical representation of a complete artificial intelligence that solves complex tasks with generalized human cognitive abilities.[9]

1.24 General purpose AI systems (also known as foundation models): AI systems that have a wide range of possible uses, both intended and unintended by their developers. General purpose AI systems are increasingly commercially useful as they can be applied to tasks in various fields, often without substantial modification or fine-tuning. They are also referred to as foundation models because of their widespread use as pre-trained models for other, more specialised AI systems. For example, a single general purpose AI system for language processing can be used as the foundation for chatbots, ad generation, decision assistants or translation systems. Examples of general purpose AI systems include AlphaStar, Chinchilla, Codex, DALL-E 2, Gopher and ChatGPT-4.[10]

1.25 Frontier AI models: general purpose AI systems with capabilities that could severely threaten public safety and global security: for example, AI systems that could be used for designing chemical weapons, exploiting vulnerabilities in safety-critical software systems, synthesising persuasive disinformation at scale, or evading human control.[11]

1.26 AI tech stack: the infrastructure and technologies, or building blocks, that comprise an AI system, including the telecommunications industry and networks; computing and storage infrastructure; technology; and the applications and interfaces that deliver AI services or products to a consumer (for example, ChatGPT-4).[12]

Applications

1.27 Machine learning: the patterns derived from training data using machine learning algorithms, which AI systems can apply to new data for prediction or decision-making purposes.
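
A minimal sketch of that two-step loop, deriving a pattern from training data and then applying it to new data, might look as follows in Python (the numbers are hypothetical and the model deliberately trivial):

```python
# Minimal sketch of the machine-learning loop described above.
# (Illustrative only; real systems use far richer models.)
xs = [1.0, 2.0, 3.0, 4.0]   # training inputs
ys = [2.1, 3.9, 6.2, 7.8]   # training outputs (roughly y = 2x)

# "Learning": estimate the slope through the origin that best fits
# the training data (least squares).
slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# "Prediction": apply the learned pattern to data the system has not seen.
new_x = 5.0
print(f"learned slope: {slope:.2f}, prediction for {new_x}: {slope * new_x:.2f}")
```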

1.28 Generative AI: AI models that generate novel content such as text, images, audio and code in response to prompts. ChatGPT-4 is an example of generative AI.[13] Generative AI technologies are built on large language models (see following definition) trained on large amounts of data to provide outputs that can be human-like. Generative AI tools are increasingly user-friendly and can quickly automate a range of content-generation tasks.[14]
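
The following toy Python sketch illustrates the underlying generative idea at a vastly reduced scale: it learns which words tend to follow which in a tiny sample of text, then samples novel text from those learned patterns. It is a deliberately crude stand-in for an LLM, shown for illustration only:

```python
# Toy sketch of the core generative idea: learn which words tend to
# follow which, then sample new text from those learned patterns.
import random
from collections import defaultdict

corpus = "the committee notes that the committee considers the evidence".split()

# "Training": record, for each word, the words observed to follow it.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

# "Generation": starting from a prompt word, repeatedly sample a
# plausible next word, producing novel (if very crude) text.
word, output = "the", ["the"]
for _ in range(6):
    if word not in follows:
        break
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))
```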

1.29 Large language model (LLM): a type of AI program that, using machine learning techniques applied to large sets of data, specialises in the recognition and generation of human-like text.
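
As an illustration, a small, freely available pre-trained language model can be run with a few lines of Python. The sketch below assumes the third-party Hugging Face transformers library (with a backend such as PyTorch) is installed; it is offered only as an example of the technology, not as a reference to any system in evidence:

```python
# Sketch: generate text with a small public pre-trained language model.
# Assumes `pip install transformers torch` (or equivalent) has been run.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])
```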

1.30 Multimodal foundation model (MfM): a type of generative AI that can process and output multiple data types (for example, text, images and audio).

1.31 Automated decision making (ADM): the application of automated systems in any part of a decision-making process (a brief illustrative sketch follows this list). ADM includes using automated systems to:

make the final decision;

make an interim assessment or decision leading up to the final decision;

recommend a decision to a human decision-maker;

guide a human decision-maker through relevant facts, legislation or policy; and

automate aspects of the fact-finding process which may influence an interim decision or the final decision.
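
A minimal Python sketch of one of the patterns listed above, an automated system that recommends a decision to a human decision-maker, might look as follows; the eligibility rules and thresholds are entirely hypothetical:

```python
# Sketch of ADM that recommends a decision to a human decision-maker.
# The eligibility rules and thresholds below are hypothetical.
from dataclasses import dataclass

@dataclass
class Application:
    income: float
    dependants: int

def recommend(app: Application) -> dict:
    """Automate the interim assessment; leave the final decision to a human."""
    eligible = app.income < 50_000 or app.dependants >= 3
    return {
        "recommendation": "approve" if eligible else "refuse",
        "reasons": [f"income={app.income}", f"dependants={app.dependants}"],
        "final_decision_by": "human officer",  # the system only recommends
    }

print(recommend(Application(income=42_000, dependants=1)))
```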

1.32 While ADM systems may or may not employ AI, for the purposes of this report ADM is broadly understood as engaging the inquiry’s terms of reference. As noted in the Australian government’s June 2023 AI discussion paper, even where ADM ‘does not use AI technologies, [the] risks and challenges associated with ADM may also be mitigated by’ policies in relation to AI.[15]

AI models

1.33 Some examples of AI models currently available for public use are listed below.

1.34 ChatGPT-4: a generative AI model that can be described as an AI chatbot. ChatGPT-4’s developers describe it as employing a dialogue format to answer questions and interact in a ‘conversational way’.[16] Outputs from ChatGPT-4 can be used for tasks such as drafting emails, essays and computer code.

1.35 Gemini: a multimodal generative AI model developed by Google capable of text, audio, image and video outputs. Google has implemented forms of Gemini across its products, including Pixel phones, Google Search and Chrome.[17]

1.36 Claude: a generative AI model developed by Anthropic using ‘constitutional AI’, an approach intended to align the model with an explicit list of human values. Claude is marketed as an AI assistant.[18]

1.37 Meta AI: a generative AI model built using Meta Llama 3, an open-source large language model. Meta AI is presented as an assistant and is integrated across Meta products such as Facebook, Instagram and Messenger.

Key characteristics of AI

1.38 AI shares many similarities with other new technologies. However, the current state of AI technology has distinct characteristics that create both the opportunities for its widespread adoption and the inherent risks that are, together, the central focus of the committee’s inquiry into the adoption of AI.[19] The submission of the Department of Industry, Science and Resources identified the following key characteristics of AI:

Adaptability and learning: AI systems can improve their performance over time and adapt by learning from data. As AI has become capable of generating data and even programming code, it has also become a creator of information, technology and imagery;

Autonomy: AI systems can be designed to make decisions autonomously (without human intervention);

Speed and scale: AI has an unparalleled capacity to analyse massive amounts of data in a highly efficient and scalable manner. It also allows for real-time decision-making at a scale that can surpass the capabilities of traditional technologies;

Opacity: Decisions made by AI systems are not always traceable, and humans cannot always obtain insights into the inner workings of algorithms;

High realism: The advancement of AI, and particularly generative AI, has reached a point where AI can emulate human-like behaviours in some tasks and create outputs so realistic that end-users find it difficult to identify whether they are interacting with AI or a human, or whether outputs are AI- or human-generated;

Versatility: AI models are multipurpose technologies that can perform tasks beyond their intended uses, even when deployed for a general or specific purpose; and

Ubiquity: AI, particularly generative AI, has become a readily accessible and increasingly dominant part of our everyday lives and continues to be developed and adopted at an unprecedented rate.[20]

Use of AI in Australia

1.39 The committee’s inquiry occurs in a context of heightened public interest in AI technology, much of which followed the release of ChatGPT in November 2022. However, despite the relatively recent interest in more widely accessible generative AI models, AI has been employed in recent years across various aspects of Australian society and the economy to deliver significant benefits. This includes, for example:

using AI to consolidate large amounts of patient data to support diagnosis and early detection of health conditions;

AI tools to help evaluate and optimise engineering designs to improve building safety;

using AI to expedite travel at airports through the use of SmartGates;

using AI to support personalised learning and teaching in remote areas; and

AI-enabled improvements and cost savings in the provision of legal services.[21]

1.40 The submission from the Department of Home Affairs (DHA) noted that ‘products and services that utilise AI are already broadly in use across the Australian economy’, and summarised these as generally relating to:

automated decision making (ADM): machine-based systems that make predictions, recommendations or decisions based on a given set of human-defined objectives (see above definition);

content curation or recommendations: systems that prioritise content or make personalised content suggestions to users of online services; and

generative AI: sophisticated machine learning algorithms used to predict an output, such as images or words, based on a prompt (see above definition).[22]

1.41 The Digital Transformation Agency (DTA) observed that past uses of AI by government have typically been ‘in the form of narrow applications that perform specific tasks within defined domains’, with the technical expertise and costs of deploying and operating AI forming a ‘natural barrier to adoption for many agencies.’[23] More recently, however, there has been rapid development driven by generative and general purpose AI:

Generative AI has changed this and brought AI to the masses with large language models such as ChatGPT being widely accessible, easy to use and interact with, while also delivering outputs that often require no technical expertise.[24]

1.42 The DHA submission noted that the development of AI products and services in Australia is ‘rapidly accelerating’ and that ‘significant investment by industry and governments is driving unprecedented advancements in AI’.[25]

1.43 However, the Australian government’s June 2023 AI discussion paper observed that, relative to other countries, ‘adoption rates of AI across Australia remain relatively low’, due in part to low levels of public trust and confidence among Australians in AI technologies and systems.[26] It concluded:

Building public trust and confidence in the community will involve a consideration of whether further regulatory and governance responses are required to ensure appropriate safeguards are in place. A starting point for considering any response is an understanding of the extent to which our existing regulatory frameworks provide these safeguards. These existing regulations include our consumer, corporate, criminal, online safety, administrative, copyright, intellectual property and privacy laws.[27]

Footnotes

[1] Journals of the Senate, No. 107, 26 March 2024, pp. 3208–3209.

[2] Journals of the Senate, No. 107, 26 March 2024, p. 3209.

[3] The committee’s interim report is available at: Interim Report – Parliament of Australia.

[4] The website for the Select Committee on Adopting Artificial Intelligence (AI) is available at: https://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Adopting_Artificial_Intelligence_AI.

[5] Xaana.Ai, Submission 167, p. 5.

[6] The following definitions are largely based on those used in the Australian Government, Department of Industry, Science and Resources (DISR), Safe and responsible AI in Australia, Discussion Paper, June 2023, Figure 1.1.

[7] Yorick Wilks, Artificial Intelligence: Modern magic or dangerous future?, UniPress Ltd, London, 2023, pp. 11, 50 and 128; IBM, ‘Understanding the different types of artificial intelligence’, October 2023, https://www.ibm.com/think/topics/artificial-intelligence-types (accessed 13 August 2024); Google DeepMind, ‘GraphCast: AI model for faster and more accurate global weather forecasting’, November 2023, GraphCast: AI model for faster and more accurate global weather forecasting - Google DeepMind (accessed 13 August 2024).

[8] Amazon Web Services, What is AGI (Artificial General Intelligence)?, https://aws.amazon.com/what-is/artificial-general-intelligence/#aws-page-content-main (accessed 28 July 2024).

[9] Amazon Web Services, What is AGI (Artificial General Intelligence)?, https://aws.amazon.com/what-is/artificial-general-intelligence/#aws-page-content-main (accessed 28 July 2024).

[10] Future of Life Institute, General Purpose AI and the AI Act, May 2022, p. 3.

[11] Future of Life Institute, General Purpose AI and the AI Act, May 2022, p. 3.

[12] Professor Genevieve Bell, AO, Vice-Chancellor and President, Australian National University; Founder and Inaugural Director, School of Cybernetics, Australian National University, Committee Hansard, 20 May 2024, p. 34.

[13] Law Council of Australia, Submission 152, p. 7.

[14] Law Council of Australia, Submission 152, p. 7.

[15] DISR, Safe and responsible AI in Australia, Discussion Paper, June 2023, p. 6.

[16] OpenAI, ‘Introducing ChatGPT’, 30 November 2022, https://openai.com/index/chatgpt/?_sm_vck=sPsPsj6FH6Jq5WSFrN5wtn52BPHn76QTQ5tS632nMQrMj3T77Bn2 (accessed 30 August 2024).

[17] Google, ‘Introducing Gemini: our largest and most capable AI model’, 6 December 2023, https://blog.google/technology/ai/google-gemini-ai/#sundar-note (accessed 5 September 2024).

[18] Claude, ‘Meet Claude’, 2024, https://claude.ai/login?returnTo=%2F%3F (accessed 5 September 2024).

[19] DISR, Submission 160, pp. 3–4.

[20] DISR, Submission 160, pp. 3–4.

[21] Australian Government, DISR, Safe and responsible AI in Australia, Discussion Paper, June 2023, pp. 3 and 7.

[22] Department of Home Affairs, Submission 55, p. 2.

[23] Digital Transformation Agency (DTA), Submission 53, p. 2.

[24] DTA, Submission 53, p. 3.

[25] DHA, Submission 55, p. 3.

[26] Australian Government, DISR, Safe and responsible AI in Australia, Discussion Paper, June 2023, p. 3.

[27] Australian Government, DISR, Safe and responsible AI in Australia, Discussion Paper, June 2023, p. 3.