Introduction
Background to the inquiry
1.1 On 12 September 2024, the Joint Committee of Public Accounts and Audit adopted an inquiry into the use and governance of artificial intelligence (AI) systems by public sector entities. The purpose of the inquiry was to examine the adoption and use of AI systems and processes by public sector entities to conduct certain functions, including but not limited to the delivery of services, to help achieve their objectives. This includes consideration of:
- the purposes for which AI is currently being used by the public sector entity and whether there are planned or likely future uses
- the existing legislative, regulatory and policy frameworks that are relevant to the use of AI and whether they are fit for purpose
- whether the internal governance structures that currently exist for AI will ensure its ethical and responsible use by public sector entities
- the internal framework/policies or additional controls used for assessing the risks associated with the use and possible misuse of AI, including the areas of security, privacy, ethics, bias, discrimination, transparency and accountability
- whether there is an adequate line of sight to the output of AI, and the decisions made through its use
- whether the public sector has the internal capability to effectively adopt and utilise AI into the future
- whether there are sovereign capability issues to consider given that most AI tools currently used in Australia are sourced from overseas
- any other related matters.
1.2 The Australian National Audit Office (ANAO) noted in its summary of the results of the audit of the Consolidated Financial Statements of the Australian Government for 2022–23 that 36 Commonwealth entities had reported adopting some form of emerging technology, such as AI, but that in most cases no supporting policies or governance frameworks had been created.
1.3 The Committee sought the assistance of both the ANAO and the Digital Transformation Agency (DTA) to contact entities currently using AI for certain processes or functions. The Committee posed specific questions regarding the source of the AI technology being used, the specific purposes for which it had been adopted, and the regulatory and governance frameworks that would be used to ensure its effectiveness and to mitigate any potential risks to the Commonwealth. Responses were received from 40 entities, and findings from the responses are cited in this report.
1.4 The ANAO emphasised in its report that although emerging technologies have the potential to provide innovative approaches and improvements, there are notable risks involved with their use, including ‘lack of transparency, bias and discrimination, security and privacy concerns, legal and regulatory challenges, misinformation, manipulation and unintended consequences’.
1.5 The ANAO further cited former guidance from DTA in its report, stating:
Appropriate governance structures are critical to achieving the ethical and responsible use of emerging technologies. Entities’ governance structures should consider usage of the technologies, an understanding of the operation of the technologies and consider both business and technology perspectives.
1.6 An Organisation for Economic Co‑operation and Development (OECD) paper on the use of AI by governments and their readiness for the emerging technology, published in June 2024, states that the strategic and responsible use of AI could potentially ‘transform how governments function, design policies, and provide services’ but commented that:
Governments have multiple roles in relation to AI, as enablers, funders, regulators, but also as users and in some cases as developers. While the global debate on AI has tended to focus on governments’ role as regulators in shaping and responding to the application of AI, less attention has been paid to their responsibilities as users of AI. As governments seize the opportunities of AI for better governance and deploy solutions in a broad range of policy areas, they recognise the need to govern AI in the public sector to prevent misuse and mitigate risks.
Existing parliamentary work
1.7 This inquiry is one of several that have examined the use of AI in Australia. Four other relevant inquiries have been conducted by the Parliament of Australia.
- The House Standing Committee on Employment, Education and Training conducted an inquiry into the use of generative artificial intelligence in the Australian education system, the final report of which was published in August 2024.
- The Senate Select Committee on Adopting Artificial Intelligence conducted an inquiry into adopting artificial intelligence, the final report of which was published in November 2024.
- The Parliamentary Joint Committee on Intelligence and Security published its annual review of the administration and expenditure of six Australian intelligence agencies for the 2022–23 financial year in February 2025. The entities’ use of AI, machine learning and bio-intelligence was included as a focus area.
- The House Standing Committee on Employment, Education and Training conducted an inquiry into the digital transformation of workplaces, focusing on the rapid development and uptake of automated decision making and machine learning techniques in the workplace. The final report was published in February 2025.
1.8 State parliaments in two jurisdictions have conducted inquiries into AI.
- The South Australian Parliament’s Select Committee on Artificial Intelligence tabled its report into artificial intelligence in November 2023.
- The New South Wales Parliament’s Portfolio Committee No. 1 – Premier and Finance tabled its report into artificial intelligence in New South Wales in July 2024.
1.9 The current inquiry differs from these inquiries through its focus on AI usage in the Australian public sector, with a particular focus on Australian Public Service (APS) entities.
Conduct of the inquiry
1.10 The inquiry received 46 submissions (including one confidential submission) and 11 supplementary submissions. Two submissions and 11 supplementary submissions contained responses to questions provided in writing by the Committee or taken on notice at public hearings. The Committee held two public hearings, in Canberra and via videoconference, on Friday, 15 November 2024 and Wednesday, 4 December 2024.
1.11 The list of submissions and supplementary submissions is at Appendix A. The public hearings are listed at Appendix B.
Definitions
1.12 There is currently no whole-of-government glossary of terms covering the field of AI, with different definitions presented in various frameworks. DTA’s policy for the responsible use of AI in government recommends that entities use the definition of AI systems adopted by the OECD:
An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.
1.13 Other jurisdictions have identified and defined sub-categories of AI systems. For example, the New South Wales Government splits AI into four subsets:
- Generative AI, which ‘creates new content such as text, images, voice, video, and code by learning from data patterns’. Examples of generative AI include ChatGPT and DALL-E.
- Machine learning, which ‘allows computers to autonomously learn and improve without being explicitly programmed. [Machine learning] algorithms are trained on data to make predictions or decisions’.
- Natural language processing, in which ‘algorithms are used to analyse text, comprehend, converse with users and perform tasks like language translation, sentiment analysis, and question answering’.
- Computer vision, in which ‘algorithms analyse images and videos for tasks like object detection, face recognition, and self-driving cars’.
1.14 Similar AI categories were referred to by DTA in its submission to this inquiry. DTA used information it gathered with the AI in Government Taskforce in late 2023 to present four categories of AI used by Australian Government entities: predictive analytics, natural language processing, computer vision and audio recognition, and generative AI.
1.15 Since late 2022, the rise of generative AI tools such as ChatGPT and Microsoft Copilot has made AI systems ubiquitous, and there has been rapid development in the field. However, AI systems have been in use in the public service for some time, as acknowledged by DTA in its submission to the inquiry. DTA commented that:
While the use of AI in government is not new, it has traditionally been limited to 'narrow' applications performing specific tasks within defined domains. This includes AI-enabled predictive analytics to identify patterns and relationships in large data sets.
International frameworks
European Union
1.16 The European Parliament approved the Artificial Intelligence Act on 13 March 2024. The Act regulates the adoption of AI and places different requirements on providers and users depending on the level of risk posed by AI systems. For example, some forms of AI systems such as social scoring, which involves ‘classifying people based on behaviour, socio-economic status or personal characteristics’, are considered to pose an unacceptable risk and will be banned. The Act will be fully applicable in 2026, with shorter timeframes in place for some provisions.
United Nations
1.17 In late 2023, the United Nations established a High-Level Advisory Body on Artificial Intelligence. The body comprised up to 32 experts from around the world and was designed to gather information and provide different perspectives.
1.18 The body’s final report, ‘Governing AI for Humanity’, was released in September 2024. The report recommends further global cooperation to fill gaps in governance coverage and ensure a common understanding.
United Nations Educational, Scientific and Cultural Organisation (UNESCO)
1.19 In November 2021, UNESCO member states including Australia adopted the Recommendation on the Ethics of Artificial Intelligence. The document articulates a series of values and principles and identifies 11 areas of policy action covering themes such as data policy, gender, culture, and ethical governance and stewardship.
1.20 The main recommended action is for member states to establish effective measures to embed ethics in their use of AI, including policy frameworks, and ensure compliance.
Organisation for Economic Co-operation and Development (OECD)
1.21 The OECD Principles on Artificial Intelligence were adopted in May 2019. Australia is an adherent. Parties adhering to the document are recommended to promote and implement five principles: inclusive growth, sustainable development and well-being; respect for the rule of law, human rights and democratic values, including fairness and privacy; transparency and explainability; robustness, security and safety; and accountability.
Report outline
1.22 This report is structured as follows:
- Chapter 1 – Introduction
- Chapter 2 – Artificial intelligence in the public sector
- Chapter 3 – Policy and regulatory environment
- Chapter 4 – A way forward: the Committee’s view
1.23 Chapter 2 provides context on how AI systems are currently being used in the Australian public sector, with a particular focus on automated decision-making. The chapter also examines the risks of AI systems and evidence presented to the inquiry on specific aspects of public sector capability.
1.24 Chapter 3 describes the current policy and regulatory environment in Australia, including governance arrangements, relating to the public sector’s use of AI systems. The chapter also examines proposed approaches under development by the Australian Government and approaches recommended by inquiry participants.
1.25 Chapter 4 summarises the Committee’s views on the information gathered during the inquiry, with recommendations intended to ensure that the public sector can make the best use of AI systems supported by appropriate regulation and oversight.