Chapter 2 - Artificial intelligence in the public sector

2.1 This chapter examines the evidence received by the inquiry on how artificial intelligence (AI) is currently being used in the Australian public service. The following areas are examined:
  • examples of how AI is being used in the public service
  • risks of AI
  • areas of concern raised by inquiry participants, including automated decision-making, record keeping and procurement
  • public service capability.

Public service AI use

2.2 As indicated in Chapter 1, the Committee sought additional information on public service entities’ use of emerging technologies as part of the Inquiry into Commonwealth Financial Statements 2022–23. Fourteen questions were sent to entities, and responses were received from 40 entities. A summary of trends from the responses relating to areas covered in this chapter is included below.

  • 24 of the 40 entities advised that they are using generative AI, with three of these 24 entities using ChatGPT in addition to or instead of Microsoft Copilot.[1]
  • Seven of the 40 entities advised that they had specifically blocked or restricted access to public generative AI tools on their networks.[2]
  • Eight of the 40 entities advised that some or all of their AI capabilities were developed in-house.[3] Responses indicated that the entities retained the rights to the code for in-house AI capabilities.
2.3 This section contains more detail on specific ways in which public service entities are using AI systems.

Copilot trial

2.4 On 16 November 2023, the Prime Minister announced that the Australian Government would conduct a six-month trial of Microsoft 365 Copilot (Copilot) by Australian Public Service (APS) entities.[4] The trial ran from 1 January 2024 to 30 June 2024. The Digital Transformation Agency (DTA) coordinated the trial and was supported by the AI in Government Taskforce. The trial was focused on the use of generative AI in the public service, as described by DTA:

The purpose of the trial was to explore the safe, responsible, and innovative use of generative AI by the APS. The trial aimed to uplift capability across the APS and determine whether Copilot, as an example of generative AI:

  • could be implemented in a safe and responsible way across the government
  • posed benefits and challenges/consequences in the short and longer term
  • faced barriers to broader adoption that may require changes to how the APS delivers on its work.[5]
2.5 In response to a question on notice during this inquiry, DTA provided more detail on the results of the trial, advising that:

The overarching finding was a perceived improvement to efficiency and quality for summarising, drafting and searching for information. Agency adoption requires a concerted effort to address different barriers and improve usage, as well as the monitoring of longer-term costs and risks.[6]

2.6 DTA has published the evaluation report of the Australian Government trial of Copilot.[7] The report summarises findings under four categories; key findings are presented below for reference:

  • Employee-related outcomes
    ◦ One in three participants used Copilot daily.
    ◦ 75 per cent of participants who received three or more forms of training were confident in using Copilot, 28 percentage points higher than participants who received a single form of training.
    ◦ Microsoft Teams and Word were the most frequently used products.
    ◦ Identifying specific use cases for Copilot may lead to increased use.
  • Productivity
    ◦ 69 per cent of survey respondents agreed that Copilot improved the speed at which they completed tasks.
    ◦ 61 per cent agreed that Copilot improved the quality of their work.
    ◦ 40 per cent of survey respondents reported they could reallocate time to higher-level strategic and staff engagement tasks.
    ◦ ‘Quality gains were more subdued relative to efficiency gains’.
  • Whole-of-government adoption of generative AI
    ◦ Existing deficiencies in data security and information management may be magnified.
    ◦ There are capability barriers involved, such as prompt engineering and identifying use cases for Copilot.
    ◦ There was a lack of clarity regarding legal areas such as disclosing the use of Copilot and the applicability of Freedom of Information requirements to verbatim meeting transcriptions generated by AI.
  • Unintended outcomes
    ◦ Participants expressed concerns regarding how generative AI may affect APS jobs and skills needs; the potential misuse of cultural data; and vendor lock-in.[8]
2.7 The trial’s evaluation report made eight recommendations, grouped under three themes: detailed and adaptive implementation, encouraging greater adoption, and proactive risk management:
  • Agencies should consider which generative AI solutions are most appropriate for their overall operating environment and specific use cases, particularly for AI Assistant Tools.
  • Agencies must configure their information systems, permissions, and processes to safely accommodate generative AI products.
  • Agencies should offer specialised training reflecting agency-specific use cases and develop general generative AI capabilities, including prompt training.
  • Effective change management should support the integration of generative AI by identifying ‘Generative AI Champions’ to highlight the benefits and encourage adoption.
  • The APS must provide clear guidance on using generative AI, including when consent and disclaimers are needed, such as in meeting recordings, and a clear articulation of accountabilities.
  • Agencies should conduct detailed analyses of workflows across various job families and classifications to identify further use cases that could improve generative AI adoption.
  • Agencies should share use cases in appropriate whole-of-government forums to facilitate the adoption of generative AI across the APS.
  • The APS should proactively monitor the impacts of generative AI, including its effects on the workforce, to manage current and emerging risks effectively.[9]
2.8 DTA’s submission to this inquiry outlines the actions it is taking to implement the trial’s recommendations. These include the development or piloting of new policies and frameworks such as the Policy for the responsible use of AI in government and the Australian Government AI assurance framework.[10]

2.9 Regarding the future adoption of Copilot across the public service, DTA advised in its submission that:

In line with DTA’s neutrality and role in managing whole-of-government digital procurement arrangements, the report does not endorse future adoption of Copilot specifically. Consistent with the trial participation conditions, agencies will need to undertake their own assessments as part of any future procurements to ensure the product meets their needs and provides value for money for their agency.[11]

Other initiatives

2.10 Some submitters to the inquiry noted that information is not currently available on the extent of AI use in the public service. For example, Dr Farida Akhtar identified an absence of data in this area and expressed the view that transparency with appropriate privacy and need-to-know safeguards could increase public trust, respect and inclusivity in AI adoption.[12]

2.11 The Community and Public Sector Union (CPSU) also identified that there is not ‘a clear APS-wide picture of the extent that AI has been deployed in the Public Service’.[13] From August to October 2024, CPSU ran a survey to gather insights from APS workers; 12 per cent of respondents advised that they use AI to assist with their work.[14]

2.12 A number of entities participating in the inquiry provided examples of the types of AI systems they use, as well as other work underway.

Australian Taxation Office (ATO)

2.13 The ATO advised that it:

has significant experience using AI safely and responsibly to assist in its administration of the tax and superannuation systems. This includes using AI to review large quantities of unstructured data, generate risk models to identify potential non-compliance, provide real time prompts to taxpayers and draft and edit communications.[15]

Department of Finance

2.14 The Department of Finance advised that it uses AI systems to support meeting administration, including minute-taking, and to reword public-facing documents into an easy-read format.[16]

Department of Home Affairs

2.15 The Department of Home Affairs (Home Affairs) provided two categories of AI system use: advanced analytics and productivity. Advanced analytics examples included:

  • predicting risk in the Visa program
  • disrupting the flow of illicit goods in the international mail and cargo domain
  • identifying fraudulent documents and illicit goods using computer vision techniques
  • generating intelligence and investigative leads with predictive, descriptive and network graph analytics
  • extracting entity information from unstructured text using Natural Language Processing
  • disrupting fraudulent travel to Australia offshore.[17]
2.16 Productivity examples provided by the department included:
  • robotic process automation solutions for processing (e.g. Freedom of Information requests)
  • the Copilot (generative AI) trial to support office productivity, such as summarising meetings
  • some productivity systems that include low-risk AI functionality that is carefully governed. For example, the SmartGates used at the border to streamline and simplify entry to Australia for travellers use AI-enabled matching of faces to official documentation in combination with rules-based decision-making, where anyone not automatically allowed through is referred to human review (a simplified sketch of this pattern follows the list).[18]
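
The SmartGate example illustrates a common pattern for low-risk AI functionality: an AI model produces a confidence score, deterministic rules decide whether a result can be accepted automatically, and every other case is referred to a human. The following is a minimal illustrative sketch of that pattern only; the threshold, names and data structures are assumptions for illustration, not details of the actual SmartGate system.

```python
from dataclasses import dataclass

@dataclass
class MatchResult:
    traveller_id: str
    face_match_score: float  # confidence from an AI face-matching model (0.0-1.0)
    document_valid: bool     # outcome of deterministic document checks

# Hypothetical threshold; a real system would calibrate this against
# measured false-accept and false-reject rates.
AUTO_ACCEPT_THRESHOLD = 0.98

def gate_decision(result: MatchResult) -> str:
    """Combine an AI match score with rules-based checks.

    The AI output alone never authorises entry: it must clear a fixed
    threshold AND pass the deterministic document rules. Every other
    case is referred to a human officer rather than auto-rejected.
    """
    if result.document_valid and result.face_match_score >= AUTO_ACCEPT_THRESHOLD:
        return "AUTO_ACCEPT"
    return "REFER_TO_HUMAN_REVIEW"

print(gate_decision(MatchResult("T1042", 0.994, True)))  # AUTO_ACCEPT
print(gate_decision(MatchResult("T1043", 0.910, True)))  # REFER_TO_HUMAN_REVIEW
```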

Australian Sports Commission

2.17 The Australian Sports Commission advised that it developed a competition analysis system in partnership with Swimming Australia prior to the Tokyo 2020 Olympic and Paralympic Games.[19] The Commission is also partnering with the Commonwealth Scientific and Industrial Research Organisation (CSIRO) to develop the Responsible AI in Sport Position Paper.[20]

Australian Bureau of Statistics (ABS)

2.18 The ABS advised that it has used AI in several areas:

  • it developed a system using machine learning technology to automate the coding of occupations (a simplified sketch of this kind of classifier follows the list)
  • it conducted a trial using ChatGPT to suggest occupation tasks as part of its review of the Australian and New Zealand Standard Classification of Occupations
  • it is trialling the use of AI to convert a specific type of code from one format to another and is trialling AI coding assistants for application developers.[21]
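
Automated occupation coding of the kind the ABS describes is typically framed as supervised text classification: free-text job titles in, a standard occupation code out, with low-confidence cases routed to a human coder. The sketch below illustrates that framing with scikit-learn; the job titles and codes are invented for illustration and do not represent ABS data or the ABS system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: free-text job titles mapped to
# illustrative occupation codes (not real classification assignments).
titles = ["registered nurse", "intensive care nurse", "software developer",
          "backend programmer", "primary school teacher", "maths teacher"]
codes = ["2544", "2544", "2613", "2613", "2412", "2412"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(titles, codes)

# Predict a code with a confidence; in practice, low-confidence cases
# would be referred to a human coder rather than auto-coded.
probs = model.predict_proba(["high school mathematics teacher"])[0]
best = probs.argmax()
print(model.classes_[best], round(float(probs[best]), 2))
```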

Department of Agriculture, Fisheries and Forestry (DAFF)

2.19 DAFF advised that it uses AI in the following areas:

  • Natural Language Processing (NLP) to detect specific biosecurity risks from a limited set of imported cargo goods descriptions; and to analyse free text comments as part of a business quality assurance process. Outputs are not directly executed by software but are provided to inform regulatory officers of any potential biosecurity risk
  • machine learning to produce land use maps, undertake pattern recognition, and perform predictive modelling to characterise farm performance and identify biosecurity risks. Predictive modelling techniques are expected to support commodity forecasting to manage biosecurity risk and to understand changes that inform export market access.[22]

Services Australia

2.20 Services Australia advised that it is using the following types of AI systems:

  • optical character recognition to extract information written on forms and letters, which staff then compare to the source document (a generic sketch follows the list)
  • interactive voice response, which is used in call centres to route calls, or to transcribe customer calls
  • chatbots, which take the form of Digital Assistants to respond to customer queries.[23]
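
Optical character recognition of the kind described in the first item can be sketched with off-the-shelf tooling. The snippet below uses the open-source Tesseract engine via the pytesseract wrapper to pull text from a scanned form; it is a generic illustration, not the engine or workflow Services Australia uses, and the file name is invented.

```python
from PIL import Image   # Pillow, for loading the scanned image
import pytesseract      # Python wrapper around the Tesseract OCR engine

# Extract whatever text the engine can read from a scanned form.
# The file name is illustrative only.
image = Image.open("scanned_form.png")
extracted_text = pytesseract.image_to_string(image)

# In the workflow described above, this output is not trusted as-is:
# staff compare the extracted text against the source document.
print(extracted_text)
```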

Commonwealth Scientific and Industrial Research Organisation (CSIRO)

2.21 CSIRO, a corporate Commonwealth entity, provided information in its submission on its AI research and on applications it has created. An example is Spark, described as ‘a wildfire simulation toolkit for researchers and experts in the disaster resilience field which uses AI in combination with other technologies to predict the path of bushfires’.[24]

2.22 CSIRO spoke further, at the 15 November 2024 public hearing, on the possibilities the organisation sees in AI, stating that:

The excitement is around the possibility that they allow CSIRO to solve problems which were hitherto beyond our reach. We're seeing some evidence of that with our work on solar panels, for example—flexible solar panels. We hit record levels of efficiency of converting sun energy into electricity because that team used machine learning and robotics to achieve that.[25]

Risks

2.23 In its submission to the inquiry, the Australian National Audit Office (ANAO) drew attention to some of the risks that may be posed by the adoption of emerging technologies such as AI in the public service:

Risks that could be faced by entities include a lack of transparency in decision making or processing, bias and discrimination, security and privacy concerns, legal and regulatory challenges, misinformation, manipulation and unintended consequences. Rapid developments and the evolving nature of emerging technologies, including AI, could further heighten these risks.[26]

2.24 Some of these risks are examined below.

Bias

2.25 The Australian Human Rights Commission (AHRC) has published a technical paper on algorithmic bias and AI, in which it defines algorithmic bias as:

… predictions or outputs from an AI system, where those predictions or outputs exhibit erroneous or unjustified differential treatment between two groups. The differential treatment may be erroneous because of mistakes or problems in the AI system, or it may be unjustified because it generally raises questions of unfairness, disadvantage or inequality that cannot be justified in the circumstances.[27]
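
One common way to make ‘unjustified differential treatment’ measurable is to compare an AI system’s outcomes across groups, for example the gap between favourable-outcome rates. The sketch below computes that gap on invented figures; the metric, tolerance and numbers are illustrative assumptions, not a test endorsed by AHRC.

```python
# Minimal sketch: selection-rate disparity between two groups.
# All figures are invented for illustration.
outcomes = {
    # group: (favourable decisions, total decisions)
    "group_a": (420, 1000),
    "group_b": (310, 1000),
}

rates = {group: fav / total for group, (fav, total) in outcomes.items()}
disparity = max(rates.values()) - min(rates.values())
print(rates, f"disparity: {disparity:.2f}")

# Monitoring is not 'set and forget': recomputing this on each batch of
# live decisions and flagging breaches supports the continuous auditing
# discussed later in this chapter.
TOLERANCE = 0.05  # illustrative threshold only
if disparity > TOLERANCE:
    print("FLAG: outcome disparity exceeds tolerance; trigger a human audit")
```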

2.26 DTA’s Interim guidance on government use of public generative AI tools provides further information on the capacity for bias to affect the outputs of generative AI tools, stating that:

Generative AI tools are typically trained on broad sets of data that may contain bias. Bias can arise in data where it is incomplete, unrepresentative or reflects societal prejudices.

Generative AI tools may reproduce biases present in the training data, which could lead to misleading or unfair outputs. Bias in outputs may disproportionally impact some groups, such as First Nations people, people with disability, LGBTIQ+ communities and multicultural communities.[28]

2.27 The Australian Academy of Science (AAS) raised additional types of bias in its submission, stating that the use of AI systems ‘needs to balance the benefits to Australia against risks such as international biases, security, privacy, ethics, and accountability’.[29]

2.28 A theme that emerged during the inquiry was the role that public service entities have in monitoring AI models and algorithms for bias. The Attorney-General’s Department (AGD) noted that, due to the self-learning nature of some models, bias could be gradually introduced, which increases the importance of monitoring outputs to verify and validate them.[30]

2.29 The importance of continuous monitoring was also raised by AHRC, which advised that when managing bias:

… it's not a set and forget. You can't deal with the problem at one point and then assume it won't potentially arise later on. There needs to be an approach that continuously assesses potential bias and reflects on how products are performing and audits that performance to ensure bias doesn't creep in.[31]

2.30 Infosys presented a different view on bias in its submission, emphasising the importance of training AI models with positive biases to reach ethical outcomes, stating:

Every aspect of human life entrenches biases of various forms, many of which are intrinsic to good order and the basic functioning of a society. Indeed, a bias towards fairness over expediency would, in most societies, be considered a positive ethical outcome. While these propensities are not often ‘felt’ as they are so intrinsic to our respective cultures, they are, nonetheless, biases. AI models do not benefit from this moral and ethical context and cannot make such determinations without the human input of such positive biases through training or other technical measures.[32]

2.31 Discussion also went to the risk of automation bias developing in the public service, leading to assumptions that AI systems should replace human judgement. AHRC commented further on this, stating that:

…it really does highlight, again, the importance of embedding human rights throughout the entire life cycle of these processes and making sure that we don't allow automation bias to take over, where we assume that, because machines are more efficient and they're assumed to be more objective, they should replace human judgement, because there are times—particularly when decisions are going to have such a significant effect on individual lives—when human judgement is actually what's called for.[33]

Data security and sovereignty

2.32 APS entities hold a range of datasets and interact widely with individuals and businesses, both of which were presented by DTA as elements contributing to the ‘unique position’ of the Australian Government in the community.[34]

2.33 Inquiry participants identified data security as a key area of risk that may affect the public service’s use of AI systems.[35] The Australian Signals Directorate (ASD), in response to questions on notice, noted a specific risk that ‘AI tools may challenge the protection of sensitive data, for example, generative AI tools that produce or summarise text may not guarantee data privacy if it is fed sensitive or proprietary data’.[36]

2.34 Data security was also discussed in the context of many current AI systems being produced and operated outside of Australia, particularly large language models. CSIRO provided a broad overview of data sovereignty and the high-risk nature of AI systems dealing with sensitive data, advising that:

Sovereign AI capability systems also encompass data security. AI systems vacuum-up vast quantities of data. They need this data to work, but if the data is sensitive, private or confidential, this can cause concerns for citizens or governments about whether the data is secure.

2.35 A common recommendation from inquiry participants was for sensitive data to be stored and managed in Australia. For example, CSIRO commented that:

If the models are built (trained/tuned) and operated within Australia, then data sovereignty is improved. It is important to address the need for data sovereignty and ensure Australian data is stored and processed within national borders when necessary (data locality).[37]
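
As a narrow, concrete illustration of ‘data locality’, cloud resources can be pinned to an Australian region at creation time and verified afterwards. The sketch below does this for an S3 bucket using boto3; the bucket name is invented, and region pinning is one control among many, not a complete data-sovereignty regime.

```python
import boto3

REGION = "ap-southeast-2"  # AWS Sydney region, i.e. data stored in Australia
s3 = boto3.client("s3", region_name=REGION)

# Create the bucket pinned to the Australian region (name is illustrative).
s3.create_bucket(
    Bucket="example-sensitive-data-au",
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Verify after the fact that the data actually lives where policy requires.
location = s3.get_bucket_location(Bucket="example-sensitive-data-au")
assert location["LocationConstraint"] == REGION, "bucket is not stored in Australia"
```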

2.36 The storage and management of sensitive data in Australia was also supported by AAS, which stated in its submission that:

Investing in domestic data storage centres to support Australia’s AI development is essential to ensure sovereignty over the control of sensitive data and to enhance the responsible use of AI within the public sector.

The ability to store and manage sensitive data in Australia is an important national security requirement, now and in the future, which needs to be tightly defined and minimised where possible to strike an appropriate balance between national security and open research collaboration.[38]

Cybersecurity threats

2.37 ASD has identified four categories of cybersecurity threats that relate to AI:

  • Threats from AI: AI can be used to enhance existing cyber threats and to enable new threat vectors. For example, a malicious actor could use generative AI to enhance spear phishing material.
  • Threats to AI: As digital systems, AI models themselves can be the target of cyberattacks. For example, AI used for malware classification could be disrupted to enable access by a malicious actor.
  • Accidental threats: AI can threaten cybersecurity inadvertently. For example, a bug in an AI model could reveal data entered by one user to another.
  • Threats via AI: AI models and associated datasets and files can be used as a vector for cyberattacks. For example, malicious code can be hidden in an open-source AI model that users then download (a defensive sketch follows the list).[39]
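
The ‘threats via AI’ category reflects the fact that some model file formats can carry executable code: Python pickle-based model files, for example, can run arbitrary code when loaded. A minimal defensive sketch follows: verify a downloaded file against a hash published by a trusted source before loading it, and prefer a weights-only format such as safetensors. The file name and expected hash are placeholders.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a downloaded model file so it can be checked against a
    hash published by a trusted source before anything loads it."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

MODEL_PATH = "downloaded_model.safetensors"          # placeholder path
EXPECTED = "<hash published by the model provider>"  # placeholder value

if sha256_of(MODEL_PATH) != EXPECTED:
    raise RuntimeError("Model file does not match the published hash; do not load it.")

# Prefer weights-only formats: unlike pickle-based files, a safetensors
# file stores tensors only and cannot execute code when opened.
from safetensors import safe_open

with safe_open(MODEL_PATH, framework="pt") as f:  # "pt" assumes PyTorch tensors
    tensor_names = list(f.keys())
```
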
2.38 Regarding the mitigation of AI-related cybersecurity threats, DTA advised that:

… the draft guidance for the draft Australian Government AI Assurance Framework directs agencies to implement the Protective Security Policy Framework (PSPF) and consider the recommended mitigations contained within the ASD's Engaging with Artificial Intelligence guidance.[40]

2.39 Home Affairs administers the PSPF and advised in response to a question on notice that ‘PSPF Release 2025 will specifically address the use of artificial intelligence and the risks it poses to the Australian Government’s security environment’.[41]

Areas of stakeholder concern

Automated decision-making

2.40 In the Automated Decision-making Better Practice Guide, the Commonwealth Ombudsman defines an automated system as:

… a computer system that automates part or all of an administrative decision-making process. The key feature of such systems is the use of preset logical parameters to perform actions, or make decisions, without the direct involvement by a human being at the time of decision.[42]

2.41 The guide, currently under review, outlines circumstances in which automated systems can be used in administrative decision-making, including making a decision or making a recommendation to the decision-maker.[43]

2.42 The Australian Research Council Centre of Excellence for Automated Decision-Making and Society (ADM+S) observed that while there is not a clear distinction between automation and AI, ‘whether it involves AI or not, public sector automation can significantly affect citizens’ rights and good public sector administration — and in similar ways’.[44] For example, ADM+S noted that, due to the absence or classification of relevant information regarding automated decision-making systems and the decisions made by them, individuals may be unable to exercise their right to request information from government entities.[45]

2.43 During the inquiry, automated decision-making was often discussed with reference to Robodebt. While AI systems were not used in this case, inquiry participants observed that Robodebt demonstrates the risks of automated decision-making and its potential negative impact on the Australian public.[46]

2.44 An area of concern among submitters to the inquiry was the increased risk posed when automated decision-making and AI are combined. The Commonwealth Ombudsman’s submission to the inquiry stated that:

AI has the potential to significantly increase the pace of administrative decision making. If there are errors in these decisions, for example because the AI has been trained with an incorrect understanding of the law, there could be a very high volume of incorrect decisions made - with consequential detrimental, unfair and/or unlawful impacts upon people. Those decisions would need to be reversed, and those impacts would need to be remediated.[47]

2.45 Submitters to the inquiry also noted that the use of AI may make it more difficult for entities to produce reasons for decisions. The NSW Council for Civil Liberties commented on this risk, advising that the ‘black box’ nature of AI systems that use machine learning and neural networks makes decision-making more complex and opaque.[48]

2.46 ATO raised a similar point in its submission, stating that ‘as AI becomes more powerful, the ability of humans to explain AI determinations has generally diminished, and ensuring there is human oversight and/or decision-making has subsequently become more important’.[49]

2.47 Inquiry participants presented different ways to mitigate the risks presented by automated decision-making. For example, CPSU recommended it be mandated that staff employed under the Public Service Act 1999 oversee the establishment of parameters for automated decision-making, adding that ‘while AI processes can make recommendations, the final decision should always rest with an APS employee’.[50]

2.48 AHRC spoke on various matters relating to the use of AI in automated decision-making and emphasised the importance of having an informed human in the loop:

Individuals also need to be made aware when a decision which affects them materially uses AI. This will improve transparency and allow the public to understand and build trust in the broader adoption of AI technologies and products. Where AI is used to inform a decision which affects a person, the decision-maker must be able to generate reasons for that decision. If the AI tool used to arrive at that decision is unable to do so, we simply should not be using that product.

This also requires that an informed human in the loop be present. It's important to stress here the need for not just a human in the loop but an informed human in the loop—somebody who's able to understand and actively scrutinise AI informed outcomes and also understand the limitations of the technology that's being used.[51]
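
The pattern AHRC describes (AI informs, an informed human decides, and reasons can always be produced) can be made concrete in code. Below is a minimal sketch in which a deliberately transparent model returns a recommendation with per-feature contributions a reviewer can scrutinise, and the final decision is only ever recorded against a named human. All names, weights and data are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    score: float
    contributions: dict[str, float]  # per-feature reasons a reviewer can scrutinise

def recommend(features: dict[str, float], weights: dict[str, float]) -> Recommendation:
    """A deliberately transparent linear scorer: every input's contribution
    to the score is visible, so reasons can always be generated."""
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in features.items()}
    return Recommendation(score=sum(contributions.values()),
                          contributions=contributions)

@dataclass
class Decision:
    recommendation: Recommendation
    decided_by: str  # the accountable human, never the model
    outcome: str
    reasons: str

# Invented example: the model recommends; an officer decides and records reasons.
rec = recommend({"income_declared": 1.0, "documents_verified": 1.0},
                {"income_declared": 0.4, "documents_verified": 0.5})
decision = Decision(
    recommendation=rec,
    decided_by="officer_jsmith",
    outcome="APPROVED",
    reasons=f"Reviewed model contributions {rec.contributions}; "
            "documents independently verified.",
)
print(decision.outcome, "decided by", decision.decided_by)
```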

2.49 Discussion also went to preparations that would need to be made for when the use of AI causes errors. In its submission, the Commonwealth Ombudsman stated that ‘remediation can be extremely resource and time-consuming for agencies, but agencies that use AI will need to be willing and able to embrace the need to remediate’.[52]

2.50 A framework for the use of automated decision-making by government entities is being prepared by AGD, which advised in response to a question on notice from the Committee that:

To date, the department has undertaken consultation within government agencies; researched and analysed the legal, policy and technical issues; and commenced a public consultation process on 13 November 2024, which will remain open until mid-January 2025.[53]

2.51 AGD further advised, at the 15 November 2024 public hearing, that the consultation paper on the use of automated decision-making by government is examining areas ‘such as review pathways for those affected by decisions, transparency and accountability mechanisms, and safeguards to ensure the safe and responsible use of ADM’.[54] Public consultation closed on 15 January 2025.[55]

Record keeping

2.52 The National Archives of Australia (National Archives) drew attention to its role in the public service’s adoption of AI systems, a role that it observed is often overlooked.[56] In its consideration of the public service’s use of AI, National Archives includes ‘records about the selection, evaluation, use, assurance, and outputs of AI’ and ‘records of the users of AI, and the recipients of services, advice and actions impacted by AI’.[57]

2.53 National Archives discussed the importance of effective records management further in its submission, stating that:

Records of government decisions and actions are an essential source of information for effective and responsive service delivery. The duty to document does not cease as government embraces new ways of improving service delivery and interacting with the community, including accessing, implementing and using Artificial Intelligence (AI). For transparent and accountable government, records of decisions, including how and through what technology those decisions were made, need to be made and kept. This will include keeping data on the use of AI and on the data an AI system or application accessed in providing input to a government decision and action.[58]
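
National Archives’ expectation that records capture ‘how and through what technology’ a decision was made can be pictured as a structured register entry written alongside each AI-assisted action. The sketch below is a generic illustration of such a record; the field names and values are assumptions, not a National Archives schema.

```python
import json
from datetime import datetime, timezone

# Illustrative record of one AI-assisted action: what system ran, what data
# it accessed, what it produced, and which human acted on the output.
record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "action": "correspondence_summarised",
    "ai_system": {"name": "example-genai-tool", "version": "1.3.0"},  # invented
    "data_accessed": ["case_file_2024_0042"],        # invented identifier
    "ai_output_ref": "document_store/outputs/7f3a",  # invented reference
    "acting_officer": "officer_jsmith",
    "final_decision_recorded": True,
}

# Append to a durable store; a JSON-lines file stands in for it here.
with open("ai_use_register.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```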

2.54 National Archives emphasised that effective governance, practices, policies and standards are vital to appropriately manage AI and ensure the authenticity of its outputs.[59] National Archives also described its expectations for public service entities, including that they ‘create and retain records that capture all significant advice and decisions that are part of the scoping, planning, execution, finalisation of AI-related programs and systems and any post-project review relating to use of AI’.[60]

2.55 National Archives recommended that public service entities strengthen their information management capabilities:

National Archives every year does a survey of Commonwealth government agencies around their information governance maturity, and one of the challenges that we get back on a regular basis from government agencies is that they are resource poor in the information management area. I think agencies need to look at bolstering information management just as they would be bolstering their technology area and their security area to deal with this.[61]

2.56 National Archives also recommended that the Archives Act 1983 be revised ‘so there is no ambiguity that the definition of “records” includes those created with or by AI or other technologies, accompanied by a clearer requirement on agencies to make and keep records in the course of carrying out their activities’.[62] Proposed revisions include changes to how records are defined to ‘align with other national and international archives definitions and reflect our understanding of “record” in the digital age in which we now work’.[63]

Procurement

2.57 The procurement of AI systems was an area of focus throughout the inquiry. National Archives described how the technology market is changing, stating that ‘the technology market is applying AI to new and upgraded systems and software’ and ‘the uptake of AI use will increase as vendors embed systems currently used across government with AI enhancements and efficiencies’.[64]

2.58 Inquiry participants indicated that this shift in the technology environment may have data security and transparency implications, with resulting impacts on governance and control frameworks. Interactions between providers across the AI supply chain were of particular focus, as will be examined below.

2.59 DTA used Copilot as an example of a case where a company provides a product that makes use of an AI model owned by a third party. In this case, Microsoft provides a product that makes use of OpenAI’s GPT model.[65] DTA commented that ‘they have that under a contractual arrangement that is visible to the Commonwealth, but the control functions around it are not’.[66]

2.60 DTA further commented that ‘we have to start looking at which providers are getting services from other providers in this chain. We believe that’s an area where there could be some risk, because in some of these cases we may not be informed about those downstream uses of AI in solutions’.[67]

2.61 DTA advised that it is currently piloting an AI assurance framework that takes a lifecycle approach.[68] In addition, the national framework for the assurance of artificial intelligence in government was agreed to by Commonwealth and state and territory data and digital ministers and released in June 2024. The framework includes a section on procurement that provides advice and directs that ‘careful consideration must be applied to procurement documentation and contractual agreements when procuring AI systems or products’.[69]

2.62 ANAO commented on the need for public service entities to proactively manage AI-related procurements and contracts, stating that ‘we have to hold to account those who we are purchasing from. We can’t just set and forget, hand things over and hope for the best’.[70]

2.63 Home Affairs indicated that there is a growing consensus that there are ‘certain types of use cases for AI for which we should not use third-party systems, where we cannot assure the entire supply chain’.[71] Home Affairs further commented that ‘we are very focused on making sure that the entire data, code, usage, output and even impacts are being understood, monitored and assured, end to end, on anything which is high risk’.[72]

Use of AI in public service recruitment

2.64 CPSU raised concerns regarding the use of AI in public service recruitment and promotion processes. It referred to results from its survey of public service workers, in which 85 per cent of respondents were concerned about the use of AI in recruitment; 20 per cent stated they had participated in a public service recruitment process in which AI was used, and over half of that group experienced problems with the process.[73]

2.65 CPSU advised at the 4 December 2024 public hearing that ‘anecdotally, we are aware of AI programs favouring candidates who use tools such as ChatGPT to improve their applications, and candidates using generative AI to respond to questions in video interviews and other online processes’.[74] CPSU also emphasised that ‘only humans are capable of making sure that the really important merit principle, which is so foundational to APS employment, is upheld’, and called for additional protections to prevent AI from being used to filter out candidates in APS recruitment processes.[75]

2.66 The Australian Public Service Commission (APSC) commented on the use of AI in APS recruitment in its submission, stating that:

When using AI as a part of a recruitment process, agencies must be able to demonstrate how the process was consistent with merit, including the effectiveness of assessing candidates for the work-related qualities genuinely required to perform the relevant duties.[76]

2.67 APSC further commented that:

It is important that agencies have assurance measures in place to pick up for any biases in AI models and uphold the integrity of processes for vulnerable and culturally and linguistically diverse cohorts of applicants.[77]

2.68 APSC advised that the Merit Protection Commissioner has published Guidance material for using AI-assisted recruitment tools, which ‘includes advice for agencies on considerations when deciding whether to use AI tools for recruitment’.[78]

Public service capability

Strategic guidance

2.69 APSC supports the Australian Public Service Commissioner and operates under the Public Governance, Performance and Accountability Act 2013. It also manages the Australian Public Service Academy. APSC publishes information products that provide transparency on aspects of the APS and provide strategic direction to the sector.

2.70 In its submission to the inquiry, APSC advised that it has supported APS AI adoption in the following ways:

  • hosting the AI in government fundamentals training module on its APSLearn platform
  • holding Mastercraft sessions on aspects of APS work that are affected by AI
  • developing the Data and Digital Professions to facilitate the development of technical AI skills in the APS
  • raising matters such as AI governance and its impact at forums, including the Secretaries Board and the APS Heads of Professions.[79]

Resourcing

2.71 CSIRO identified that certain skills will be required in Australia to effectively adopt AI, including data management skills, skills to maintain hardware and train AI models, and skills to write software that uses and applies AI models.[80]

2.72 APSC’s State of the Service Report 2023–24 states that 88 per cent of APS agencies reported skills shortages in 2024.[81] The top two areas of skills shortage were ‘digital and information communications technology (ICT)’ and ‘data’.[82] APSC specifically measured AI-related skills shortages within these two areas for the first time in 2024, with the following results:

  • 37 per cent of entities that identified ‘digital and ICT’ as an area of critical skills shortage identified ‘AI/Machine learning software’ as a specific subset of skills shortage. ‘General digital literacy’ was identified by 49 per cent.[83]
  • 42 per cent of entities that identified ‘data’ as an area of critical skills shortage identified ‘AI/machine learning design and validation’ as a specific subset of skills shortage. ‘General data literacy’ was identified by 58 per cent.[84]
2.73 The growing demand for specialist AI skills, and the challenges that public service entities may face in meeting it, were raised by some entities that participated in the inquiry. For example, Services Australia advised in its submission to the inquiry that:

For sovereign activities to be undertaken by our workforce, the ability to draw upon an Australian AI skilled workforce will be fundamental. Building human capacity in AI will be critical, as demand is already outstripping supply, and the need for AI specialist skills is expected to grow significantly.[85]

2.74 Similarly, DTA advised that the public service will need to increase its supply of AI skills, stating in its submission that:

Jobs requiring specialist AI skills are growing significantly. While AI digital assistant tools will be able to help APS staff by filling other skills gaps, such as writing code or providing specialist content, the skills required to safely build, implement and support these specialised AI tools are in short supply. While there are existing data science skillsets in specialised areas of the APS, the current number of APS employees with both data science and AI skills, or even specialty AI skills alone, appears to be low. To realise the opportunities of AI and meet demand, the APS will need to grow its AI skills supply.[86]

2.75 An APS Data, Digital and Cyber Workforce Plan is currently under development, aiming to ‘provide a call to action to attract, develop and retain data, digital, and cyber talent in a unified and strategic way across the APS’.[87]

Training

2.76 Many submissions to the inquiry called for increased AI-related training to ensure that staff are equipped with the knowledge they need to effectively and safely use emerging technologies.[88]

2.77 In its submission to the inquiry, DTA advised that its Policy for the responsible use of AI in government recommends that all entities implement AI fundamentals training.[89] To assist entities in implementing this recommendation, DTA has developed a training module and published guidance for staff training on AI.[90] In response to questions on notice, DTA advised that it will measure the uptake of training by ‘monitoring completion rates through APSLearn as well as through engagement with agency Accountable Officials’ and that it ‘does not intend to develop further AI specific or role-based training at this time’.[91]

2.78 Home Affairs noted that training on AI can be complemented by other forms of training, stating that ‘data literacy and AI literacy needs to go hand in hand so we don’t get people using AI without a good understanding of the data protection securities, and responsibilities and accountabilities as well’.[92]

2.79 As raised earlier in this chapter, the importance of training was highlighted in DTA’s evaluation of the Copilot trial. The evaluation found that ‘75% of participants who received 3 or more forms of training were confident in their ability to use Copilot, 28 percentage points higher than those who received one form of training’.[93]

2.80 Despite this, 92 per cent of respondents to CPSU’s survey of APS employees stated that they had not received any training on using AI at work. In addition, only 16 per cent answered that they feel equipped to use AI tools at work.[94]

Footnotes

[1]See submissions 1.1, 5.3, 8.1, 11.1, 15.2, 17.1, 18.1, 20.2, 23.1, 24.3, 27, 28, 30, 31, 35, 36, 37, Inquiry into Commonwealth Financial Statements 2022–23.

[2]See submissions 11.1, 18.1, 24.3, 28, 29, Inquiry into Commonwealth Financial Statements 2022–23.

[3]See submissions 20.2, 24.3, 28, 31, 32, 35, 37, Inquiry into Commonwealth Financial Statements 2022–23.

[4]The Hon Anthony Albanese MP, Prime Minister, ‘Australian Government collaboration with Microsoft on artificial intelligence’, Media Release, 16 November 2023, https://www.pm.gov.au/media/australian-government-collaboration-microsoft-artificial-intelligence.

[5]Digital Transformation Agency (DTA), Evaluation of the whole-of-government trial of Microsoft 365 Copilot – Appendix A: Background, https://www.digital.gov.au/initiatives/copilot-trial/microsoft-365-copilot-evaluation-report-full/appendix-a, viewed 17 December 2024.

[6]DTA, Submission 9.2, response to questions on notice, question 6, p. [4].

[7]DTA, Evaluation of the whole-of-government trial of Microsoft 365 Copilot, https://www.digital.gov.au/initiatives/copilot-trial/microsoft-365-copilot-evaluation-report-full, viewed 17 December 2024.

[8]DTA, Evaluation of the whole-of-government trial of Microsoft 365 Copilot – Executive Summary, https://www.digital.gov.au/initiatives/copilot-trial/microsoft-365-copilot-evaluation-report-full/executive-summary-glossary, viewed 31 January 2025.

[9]DTA, Evaluation of the whole-of-government trial of Microsoft 365 Copilot – Executive summary and glossary, https://www.digital.gov.au/initiatives/copilot-trial/microsoft-365-copilot-evaluation-report-full/executive-summary-glossary, viewed 17 December 2024.

[10]DTA, Submission 9, p. 15.

[11]DTA, Submission 9, p. 16.

[12]Dr Farida Akhtar, Submission 23, p. 4.

[13]Ms Rebecca Fawcett, Deputy Secretary, Community and Public Sector Union (CPSU), Committee Hansard, Canberra, 4 December 2024, p. 15.

[14]CPSU, Submission 25: Attachment 1, p. 2.

[15]Australian Taxation Office (ATO), Submission 18, p. 3.

[16]Mr Patrick Roberts, Assistant Secretary, Department of Finance, Committee Hansard, Canberra, 15 November 2024, pp. 10–11.

[17]Department of Home Affairs (Home Affairs), Submission 32, p. 3.

[18]Home Affairs, Submission 32, p. 3.

[19]Australian National Audit Office (ANAO), Submission 24.3, Inquiry into Commonwealth Financial Statements 2022–23, p. 19.

[20]ANAO, Submission 24.3, Inquiry into Commonwealth Financial Statements 2022–23, p. 19.

[21]Australian Bureau of Statistics (ABS), Submission 2, p. 4.

[22]Department of Agriculture, Fisheries and Forestry (DAFF), Submission 5, p. 5.

[23]Services Australia, Submission 17.1, Inquiry into Commonwealth Financial Statements 2022–23, p. 1.

[24]Commonwealth Scientific and Industrial Research Organisation (CSIRO), Submission 15, p. 5.

[25]Dr Stefan Hajkowicz, Chief Research Consultant, CSIRO, Committee Hansard, Canberra, 15 November 2024, p. 33.

[26]ANAO, Submission 35, p. 1.

[27]Australian Human Rights Commission (AHRC), Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias, https://humanrights.gov.au/our-work/technology-and-human-rights/publications/technical-paper-addressing-algorithmic-bias, viewed 13 February 2025.

[28]DTA, Interim guidance on government use of public generative AI tools, https://architecture.digital.gov.au/guidance-generative-ai, viewed 13 February 2025.

[29]Australian Academy of Science (AAS), Submission 19, p. 1.

[30]Mr Michael Harrison, Chief Information Officer, Attorney-General’s Department (AGD), Committee Hansard, Canberra, 15 November 2024, p. 11.

[31]Ms Lorraine Finlay, Human Rights Commissioner, AHRC, Committee Hansard, Canberra, 4 December 2024, p. 8.

[32]Infosys, Submission 36, p. 2.

[33]Ms Finlay, AHRC, Committee Hansard, Canberra, 4 December 2024, pp. 9–10.

[34]DTA, Submission 9, p. 8.

[35]For example, see Accenture, Submission 28, p. 4.

[36]Australian Signals Directorate (ASD), Submission 45, response to questions on notice, question 6, p. [8].

[37]CSIRO, Submission 15, p. 8.

[38]AAS, Submission 19, p. 2.

[39]ASD, Convoluted Layers: An Artificial Intelligence Primer, https://www.cyber.gov.au/resources-business-and-government/governance-and-user-education/artificial-intelligence/an-introduction-to-artificial-intelligence, viewed 24 January 2025.

Also refer to Mr Alan Marjan, First Assistant Director General, ASD, Committee Hansard, Canberra, 15 November 2024, p. 10.

[40]DTA, Submission 9.1, response to questions on notice, question 1, p. 1.

[41]Home Affairs, Submission 32.1, response to questions on notice, question 7, p. 9.

[42]Commonwealth Ombudsman, Automated Decision-making Better Practice Guide, March 2020, p. 5, https://www.ombudsman.gov.au/__data/assets/pdf_file/0029/288236/OMB1188-Automated-Decision-Making-Report_Final-A1898885.pdf, viewed 14 January 2025.

[43]Commonwealth Ombudsman, Automated Decision-making Better Practice Guide, p. 5.

[44]Australian Research Council Centre of Excellence for Automated Decision-Making and Society (ADM+S), Submission 22, p. 6.

[45]ADM+S, Submission 22, p. 20.

[46]For example, see CPSU, Submission 25, p. 3; NSW Council for Civil Liberties, Submission 20, p. 7; ANU Law Reform and Social Justice Research Hub (LRSJ), Submission 31, p. [8].

[47]Commonwealth Ombudsman, Submission 24, p. 3.

[48]NSW Council for Civil Liberties, Submission 20, p. 20.

[49]ATO, Submission 18, p. 4.

[50]CPSU, Submission 25, p. 4.

[51]Ms Finlay, AHRC, Committee Hansard, Canberra, 4 December 2024, p. 6.

[52]Commonwealth Ombudsman, Submission 24, p. 3.

[53]AGD, Submission 44, p. [5].

[54]Ms Dianne Orr, Acting First Assistant Secretary, AGD, Committee Hansard, Canberra, 15 November 2024, pp. 5–6.

[55]AGD, Automated Decision-Making Reform, https://consultations.ag.gov.au/integrity/adm/, viewed 14 January 2025.

[56]Mr Simon Froude, Director-General, National Archives of Australia (National Archives), Committee Hansard, Canberra, 4 December 2024, p. 3.

[57]National Archives, Submission 33, p. 4.

[58]National Archives, Submission 33, p. 1.

[59]Mr Froude, National Archives, Committee Hansard, Canberra, 4 December 2024, p. 3.

[60]National Archives, Submission 33, p. 1.

[61]Mr Froude, National Archives, Committee Hansard, Canberra, 4 December 2024, p. 4.

[62]National Archives, Submission 33, p. 2.

[63]National Archives, Submission 33, p. 2.

[64]National Archives, Submission 33, p. 2.

[65]Mr Chris Fechner, Chief Executive Officer, DTA, Committee Hansard, Canberra, 15 November 2024, p. 17.

[66]Mr Fechner, DTA, Committee Hansard, Canberra, 15 November 2024, p. 17.

[67]Mr Fechner, DTA, Committee Hansard, Canberra, 15 November 2024, p. 17.

[68]Mr Fechner, DTA, Committee Hansard, Canberra, 15 November 2024, p. 18.

[69]Australian Government, National framework for the assurance of artificial intelligence in government, p. 10, https://www.finance.gov.au/sites/default/files/2024-06/National-framework-for-the-assurance-of-AI-in-government.pdf, viewed 22 January 2025.

[70]Ms Rona Mellor PSM, Deputy Auditor-General, ANAO, Committee Hansard, Canberra, 15 November 2024, p. 14.

[71]Ms Andrews, Home Affairs, Committee Hansard, Canberra, 15 November 2024, p. 4.

[72]Ms Andrews, Home Affairs, Committee Hansard, Canberra, 15 November 2024, p. 4.

[73]CPSU, Submission 25, p. 3.

[74]Ms Fawcett, CPSU, Committee Hansard, Canberra, 4 December 2024, p. 12.

[75]Ms Fawcett, CPSU, Committee Hansard, Canberra, 4 December 2024, p. 14.

[76]APSC, Submission 43, p. 2.

[77]APSC, Submission 43, p. 2.

[78]APSC, Submission 43, p. 2. See also: Merit Protection Commissioner, Guidance material for using AI-assisted recruitment tools, https://www.mpc.gov.au/resources/guidance/myth-busting-ai-assisted-and-automated-recruitment-tools, viewed 15 January 2025.

[79]Australian Public Service Commission (APSC), Submission 43, pp. 1–2.

[80]Professor Elanor Huntington, Executive Director, CSIRO, Committee Hansard, Canberra, 15 November 2024, p. 34.

[81]APSC, State of the Service Report 2023–24, p. 77.

[82]APSC, State of the Service Report 2023–24, p. 335.

[83]APSC, State of the Service Report 2023–24, p. 337.

[84]APSC, State of the Service Report 2023–24, p. 336.

[85]Services Australia, Submission 1, p. 2.

[86]DTA, Submission 9, p. 6.

[87]Australian Government, APS Data, Digital and Cyber Workforce Plan, https://www.dataanddigital.gov.au/plan/mission-initiatives/data-and-digital-foundations/aps-data-digital-and-cyber-workforce-plan, viewed 17 January 2025.

[88]For example, refer to Infosys, Submission 36, p. [4]; CPSU, Submission 25, p. 25; Australasian Institute of Digital Health, Submission 4, p. 5; Australian Federal Police, Submission 14, p. 1.

[89]DTA, Submission 9, p. 6.

[90]DTA, Submission 9, p. 6. The guidance is available at: DTA, Guidance for staff training on AI, https://www.digital.gov.au/policy/ai/staff-training, version 1.1, viewed 12 December 2024.

[91]DTA, Submission 9.2, response to questions on notice, question 7a and 7b, p. 4.

[92]Ms Andrews, Home Affairs, Committee Hansard, Canberra, 15 November 2024, p. 12.

[93]DTA, Evaluation of the whole-of-government trial of Microsoft 365 Copilot – Evaluation findings, methodology and approach, https://www.digital.gov.au/initiatives/copilot-trial/summary-evaluation-findings/cts-evaluation-findings, viewed 17 December 2024.

[94]CPSU, Submission 25: Attachment 1, p. 2.