Policy and regulatory environment
3.1 This chapter examines the evidence received by the inquiry regarding the policy environment surrounding the use of Artificial Intelligence (AI) by the Australian public service. The following areas are examined:
- roles and responsibilities
- policies and regulatory frameworks
- approaches proposed by inquiry participants.
3.2 Policy is defined in the Australian Public Service Commission’s (APSC) Delivering Great Policy guide as ‘the basic agreed principles by which government is guided’. The former Australian Government Guide to Regulatory Impact Analysis defines regulation as ‘any rule endorsed by government where there is an expectation of compliance’.
Roles and responsibilities
3.3 The Committee received evidence through submissions and public hearings indicating that Australian Government AI policy is currently being led by two entities:
- the Digital Transformation Agency (DTA)
- the Department of Industry, Science and Resources (DISR).
3.4 Items led by DTA that will be discussed in this chapter include:
- the AI in Government Taskforce, which involved secondees from 11 APS entities and contributed to DTA’s AI work
- the Interim guidance on government use of public generative AI tools, which establishes two ‘golden rules’ for APS entities to follow when using generative AI
- the Policy for the responsible use of AI in government, which is mandatory for non-corporate Commonwealth entities and establishes two requirements relating to accountability and transparency
- the draft AI assurance framework, which is intended to provide a mechanism for entities to assess AI risks and document how they will use AI responsibly.
3.5 Items led by DISR that will be discussed in this chapter include:
- the AI Expert Group, which advises DISR on areas such as transparency, testing and accountability
- Australia’s AI Ethics Principles, which are voluntary and contain eight elements
- mandatory guardrails for AI in high-risk settings, which are currently under consultation and are intended to ensure that AI used in high-risk settings is tested, transparent, and supported by accountability mechanisms.
Department of Industry, Science and Resources
3.6 The purpose of DISR, as stated in its Corporate Plan 2024–28, is ‘building a better future for all Australians through enabling a productive, resilient and sustainable economy, enriched by science and technology’.
3.7 DISR indicated at the public hearing on 15 November 2024 that ‘the Minister for Industry and Science is responsible for the whole-of-economy approach to the safe and responsible use of artificial intelligence, taking very much a risk based approach’.
3.8 The development of mandatory guardrails for AI in high-risk settings is the key whole‑of‑economy AI project DISR is leading, and will be discussed later in this chapter. DISR advised in its submission that it is working with stakeholders in industry, academia and civil society. Examples of its industry-focused work include the National Reconstruction Fund, the AI Adopt Program, the Next Generation Graduates Program and the AI Sprint.
Digital Transformation Agency
3.9 DTA is an Executive Agency that operates within the Finance Portfolio. DTA’s mission, as articulated in its Corporate Plan 2024–25, is to ‘provide strategic and policy leadership, expert investment advice and oversight to drive digital transformation that delivers benefits to all Australians’.
3.10 DTA advised that the agency’s role is ‘specifically about digital application within government, and the way digital application within government applies to citizens and businesses’.
3.11 DTA provided the following representation of the policies and frameworks that fall within its remit (Figure 3.1).
Figure 3.1 DTA artificial intelligence policies and frameworks

Source: DTA, Submission 9.2, p. 2.
Working groups
3.12 Australian Government entities have established several working groups to provide further advice and conduct additional research into AI.
AI in Government Taskforce
3.13 The AI in Government Taskforce was established in September 2023 and comprised 18 secondees from 11 APS entities. Initially announced to operate for up to six months, the taskforce concluded on 30 June 2024.
3.14 DTA provided the following advice on the outcomes of the taskforce:
The AI in Government Taskforce contributed to many ongoing initiatives, including:
- updating the Guidance for agency use of public generative AI tools
- supporting the DTA on the design phase and initial implementation of the Microsoft 365 Copilot trial
- supporting the development of the Policy for the responsible use of AI in government
- a syllabus for the AI fundamentals training module
- a whole-of-government survey on the use of automated decision-making (ADM)
- drafting the pilot Australian Government AI assurance framework.
AI Expert Group
3.15 The AI Expert Group comprises 12 appointees from industry, academia and the legal field, and advises DISR. DISR provided more information on the group in response to a question on notice, stating that:
The temporary AI Expert Group was established in January 2024. The Group has 12 members with expertise across the fields of law, ethics, technology, industry and academia. The Group was established to provide advice on immediate work on transparency, testing and accountability including options for mandatory guardrails for AI in high-risk settings.
Current policy framework
3.16 This section describes policies that have been established since 2019 that influence how AI is used in the Australian public service. Activity in this area has been accelerating, and other frameworks are currently under development. These are examined later in this chapter.
3.17 Australia does not currently have an AI Act or similar sector-wide legislative frameworks in place. Legislation is one of the options under consideration in the Australian Government’s Introducing mandatory guardrails for AI in high-risk settings: proposals paper.
Australia’s AI Ethics Principles
3.18 Australia’s AI Ethics Principles, first published in November 2019, are administered by DISR. These voluntary principles contain eight elements:
- human, social and environmental wellbeing
- human-centred values
- fairness
- privacy protection and security
- reliability and safety
- transparency and explainability
- contestability
- accountability.
3.19 The Australian Research Council Centre of Excellence for Automated Decision-Making and Society (ADM+S) raised concerns that the principles were developed prior to the widespread availability of generative AI and had not been reviewed as at September 2024.
Interim guidance on government use of public generative AI tools
3.20 DTA’s Interim guidance on government use of public generative AI tools was last updated in November 2023 and establishes two ‘golden rules’:
1. You should be able to explain, justify and take ownership of your advice and decisions.
2. Assume any information you input into public generative AI tools could become public. Don't input anything that could reveal classified, personal or otherwise sensitive information.
3.21 The guidance refers to existing frameworks, including the APS Values and Australia’s AI Ethics Principles, and includes sample use cases.
Policy for the responsible use of AI in government
3.22 DTA’s Policy for the responsible use of AI in government took effect on 1 September 2024. It is mandatory for non-corporate Commonwealth entities. The policy requires that entities:
- designate accountability for implementing it to accountable officials within 90 days of the policy taking effect
- make publicly available a statement outlining their approach to AI adoption and use (a transparency statement) within six months of the policy taking effect, and review and update the statement at least annually.
3.23 Many APS entities that contributed to the inquiry stated that they had commenced work on implementing the policy, including by designating accountable officials and preparing transparency statements for publication by the due date.
3.24 While the policy is directed at non-corporate Commonwealth entities, the CSIRO (a corporate Commonwealth entity) advised that it considers adherence to the policy good practice and will consider whether any additional safeguards are required.
3.25 Inquiry participants commented on aspects of the policy, including suggested areas of improvement relating to transparency and alignment with existing frameworks.
3.26 In its submission, the Australian National University Law Reform and Social Justice Research Hub (LRSJ) commented that the policy ‘is a good start to providing more transparency around the purposes for which AI is used by public service entities. However it does not require departments and agencies to disclose specifically which decisions are made by AI beyond a classification of usage patterns and domains’.
3.27 ADM+S stated in its submission that the policy ‘appears to be a step forward’ but further commented that ‘the Policy introduces a new three-part language framework that is not aligned with any of the Australia’s AI Ethics Principles, the National Framework or the proposed Mandatory Guardrails’ and ‘the policy is extraordinarily limited in what it requires’.
Entity-specific approaches
3.28 As indicated in chapter 1, the Committee sought additional information on public service entities’ use of emerging technologies as part of the Inquiry into Commonwealth Financial Statements 2022–23. The Committee sent 14 questions to entities and received responses from 40 entities. A summary of trends from the responses relating to entity-specific governance arrangements is included below.
- The entities have put in place different governance arrangements for AI. The majority have briefed their Audit and Risk Committees on AI use and development. Additional mechanisms that have been implemented include AI steering groups and registers of AI use.
- While some entities appear to be considering the risks posed by AI when developing internal audit work programs, none of the 40 entities had commissioned an internal audit on controls for AI or the adoption of AI.
3.29 The Department of the Prime Minister and Cabinet (PM&C) provided a copy of its internal Artificial Intelligence Policy, dated October 2024, to the inquiry for reference. The policy establishes a process for assessing and approving AI products for use in the department, and advises staff on matters to consider. The policy also sets up governance and oversight arrangements.
3.30 ANAO raised the importance of public service entities establishing effective controls while developing and implementing their own internal policies or principles for AI systems, stating at the public hearing on 15 November 2024 that:
… we've seen a lot of control weaknesses in the sector. My lesson would be—as you deploy these tools that have so much benefit for productivity and opportunity for service delivery—don't deploy them unless you've got those control frameworks right. Even simple things, like ASD saying it has its principles. We would ask: does the entity know where all of these different tools are in use in the organisation while the thousand flowers bloom and great ideas emerge? How are you going to control what's actually going on in the entity and understand whether what you're doing is well controlled? That would be our watch point.
Current regulatory framework
3.31 The use of AI by public service entities is not currently subject to regulatory arrangements specific to the technology. Instead, existing legislation that is format- and technology-neutral applies. Examples of this legislation include the Archives Act 1983 and the Privacy Act 1988. DTA advised that the Policy for the responsible use of AI in government ‘is designed to strengthen, not duplicate, these existing policies and frameworks’.
3.32 A number of inquiry participants raised concerns that the current arrangements do not allow for effective investigation, enforcement and direction. For example, ADM+S commented that:
… a possible explanation for the current weakness of Commonwealth policy on public sector AI use is the absence of any entity within the Commonwealth Government with the power to impose stronger requirements around the use of ADM/AI, and/or the absence of willingness to use that power. The Digital Transformation Agency is an advisory body: it doesn’t have the power to direct people how to use AI. Oversight mechanisms such as the Commonwealth Ombudsman have power to investigate and review, but not the power to direct agencies in their approach to the use of AI.
3.33 LRSJ made similar comments, observing that:
Governance measures encourage considerations of ethics and responsible use but lack regulatory measures aimed at mitigating and minimising adverse outcomes, particularly in regards to vulnerable members of the public.
3.34 While policies such as the Policy for the responsible use of AI in government place requirements on entities, these are expected to be self-regulated and enforcement mechanisms have not been established. In its submission, ANAO identified areas of concern regarding a self-regulated approach:
A key consideration which will arise for the public sector will be in relation to how assurance is obtained over the operating effectiveness and adherence to the policy within entities, which will be largely self regulated. The ANAO has previously drawn attention to weaknesses in the process for self-regulation, particularly in respect of compliance with the Australian Government’s Protective Security Policy Framework (PSPF) relating to cyber security, including that there may be an optimism bias in an entity’s assessment of compliance with the framework. Policy owners should be cognisant of these risks when determining how assurance is obtained about whole-of-sector compliance or adherence to policy.
3.35 ANAO further commented at the public hearing on 15 November 2024 that:
There's a real opportunity here as we start to embed these frameworks: how does the Parliament get the assurance it needs, through the whole system, rather than individual senators or members having to seek that one on one on one? A reporting framework that's got a touch of assurance in it, that makes the self-reporting honest, verifiable and able to be accessed by another domain of our democracy, which is the parliament.
Work underway
Pilot AI assurance framework
3.36 DTA has developed a draft AI assurance framework, which was piloted by Australian Government entities from September 2024 to November 2024. The framework involves entities completing an initial threshold assessment for AI use cases. It is intended to provide a mechanism for entities to assess AI risks and document how they will use AI responsibly.
Mandatory guardrails for AI in high-risk settings
3.37 In September 2024, the Australian Government published the Introducing mandatory guardrails for AI in high-risk settings: proposals paper. Consultation on the proposed guardrails closed on 4 October 2024.
3.38 The proposed guardrails would establish the following requirements for entities developing or deploying high-risk AI systems:
- establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance
- establish and implement a risk management process to identify and mitigate risks
- protect AI systems, and implement data governance measures to manage data quality and provenance
- test AI models and systems to evaluate model performance and monitor the system once deployed
- enable human control or intervention in an AI system to achieve meaningful human oversight
- inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content
- establish processes for people impacted by AI systems to challenge use or outcomes
- be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks
- keep and maintain records to allow third parties to assess compliance with guardrails
- undertake conformity assessments to demonstrate and certify compliance with the guardrails.
3.39 DISR provided further context in response to a question on notice from the Committee on why the proposed guardrails are needed:
Characteristics of AI like speed, scale and autonomy limit the ability of existing laws and regulatory frameworks to effectively prevent or mitigate risks. The guardrails outlined in the proposals paper focus on ensuring AI systems being developed and used by organisations in high-risk settings are being tested, are transparent and that there is clear accountability when things go wrong.
3.40 DISR linked the development of guardrails to the ongoing discussion of trust in AI, advising that ‘we need to make sure that there is a baseline and expectation of safety so that we can continue, because at the moment trust is still low’.
3.41 The proposals paper presents three potential regulatory models that could be used to enforce the mandatory guardrails:
- Option 1: a domain-specific approach – adapting existing regulatory frameworks to include the guardrails
- Option 2: a framework approach – introducing framework legislation that will require other existing laws to be amended for the framework legislation to have effect
- Option 3: a whole-of-economy approach – introducing a new cross‑economy AI Act.
3.42 Responses to the proposals paper have been published on DISR’s website. DISR received ‘around 300 submissions’ and provided early observations on the findings at the public hearing on 15 November 2024, stating that:
84 per cent of respondents within industry/civil society preferred either option 2 or option 3 on mandatory guardrails. That included industry. It was a recognition that more needed to be done to set that benchmark to ensure that we've got the right safety mechanisms in place.
Proposals by inquiry participants
Establishment of an overarching body
3.43 One regulatory option called for by inquiry participants was the establishment of an overarching body with oversight of the public service’s use of AI. The proposed nature of this body or position differed among inquiry participants.
3.44 The Australian Human Rights Commission (AHRC) called for the establishment of an independent statutory AI commissioner in its submission and in the 2021 Human Rights and Technology report. AHRC provided evidence to this inquiry in support of the establishment of an AI commissioner, stating:
What an AI commissioner would potentially be able to do is to help balance out the need to ensure that human rights principles are front and centre while also recognising the innovation, productivity gains and efficiency that can be gained through AI. It's taking a very practical but principled approach to ensuring that we maintain those human rights that need to be protected without allowing that to mean that we just don't adopt the technology at all.
3.45 AHRC further commented that, in addition to providing expert advice on how to comply with laws and ethical standards that apply to the development and use of AI, the AI commissioner could play a key role in building the capacity of existing agencies and departments to adapt and respond to the rise of AI. In addition, they could ‘build capacity and act as a bridge between the private sector and government in ensuring that Australia is able to make the best use of AI and is able to achieve the benefits of innovation in this space’.
3.46 A similar recommendation was made by the NSW Council for Civil Liberties, which called for the establishment of an AI Safety Commissioner to ‘oversee AI regulation, conduct audits, and enforce compliance’.
3.47 Another option put forward by inquiry participants was a separate regulatory body. LRSJ proposed that such a body could monitor AI development and use, develop laws on AI regulation, and investigate unethical practice.
3.48 Automated decision-making, an area of concern raised by inquiry participants, was also the subject of governance-related recommendations. LRSJ recommended, in line with findings of the Royal Commission into the Robodebt Scheme, that an independent oversight entity be created to review automated decision-making in the public service.
3.49 The Commonwealth Ombudsman proposed the Administrative Review Council as a body that could take on some of the review functions for automated decision-making described above. The Ombudsman advised that, subject to the views of members, the Administrative Review Council could:
- inquire into the availability, accessibility and effectiveness of review of administrative decisions made by AI
- support education and training of government officials in relation to the making of administrative decisions using AI.
Establishment of new policies or legislation
3.50 Another common recommendation by inquiry participants was for new policies or legislation regarding AI to be established. This recommendation took several forms.
3.51 The NSW Council for Civil Liberties observed that ‘existing laws are fragmented and do not adequately address the risks and challenges posed by public sector use of AI’ and called for ‘bespoke AI legislation that adopts a risk-based approach, with clear and proportionate obligations on entities that develop, deploy, or use AI’.
3.52 LRSJ recommended that laws be established to regulate AI, as part of a broader recommendation for a regulatory body to be created to monitor how APS entities develop and use AI systems. LRSJ further commented that ‘without clear consistent guidelines with mandatory and binding legal effects on agencies, there is risk of adverse outcomes’.
3.53 The Tech Council of Australia called for the development of a National AI Strategy in its submission and advised that such a strategy would ‘support whole-of-economy uplift of AI adoption, investment and capability, with an emphasis on the public sector as an exemplar’. The Tech Council of Australia also called for mechanisms to be established for high-risk use cases, commenting that:
As part of the National AI Strategy, an overarching legislated framework would address this in a more effective and usable manner than the multiplicity of mechanisms that currently apply to the public sector.
3.54 Similarly, ADM+S commented that:
The number of often slightly different guidelines, recommendations, frameworks, and statements is overwhelming. A common baseline — and one markedly stronger than the current Commonwealth policy — is needed.