Australia’s AI Month commences on 15 November amidst ongoing debate about a national AI strategy, a regulatory framework and an AI Commission, mirroring similar initiatives in the US and EU. Australia has committed to implementing responsible AI in industry, the military and the public sector, and signed the 2023 Bletchley Declaration on AI Safety. In September 2023 the Digital Transformation Agency (DTA) and the Department of Industry, Science and Resources (DISR) established the Artificial Intelligence in Government Taskforce (AIGT), which aims to develop an integrated regulatory framework for the safe, ethical and responsible use of AI across the Australian Public Service (APS). This Flagpost discusses the principles informing ethical AI and related current Australian Government initiatives.
What is AI?
DISR has defined AI as:
an engineered system that generates predictive outputs such as content, forecasts, recommendations, or decisions for a given set of human-defined objectives or parameters without explicit programming (p 5).
The Chief Scientist’s Rapid Response Information Report: Generative AI, released in March 2023, distinguished between ‘conventional AI’ and ‘generative AI’ (p 2). Generative AI can categorise or identify features of an input and generate novel content ‘in response to a user prompt’, as either text (Large Language Models) or other media (Multimodal Foundation Models). By contrast, Automated Decision Making (ADM) uses conventional AI to assist humans in deciding certain outcomes.
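The distinction can be sketched in code. The following minimal Python example, offered as an illustration only, contrasts a conventional rule-based ADM check with a generative model producing content from a prompt; the function names, thresholds and placeholder `generate` call are hypothetical and do not represent any real government system or model API.

```python
# Conventional AI / ADM: deterministic rules that assist a human decision-maker.
def assess_eligibility(income: float, dependants: int) -> str:
    """Hypothetical rule-based triage; the thresholds are illustrative only."""
    if income < 45_000 or dependants >= 2:
        return "refer to officer with a recommendation to approve"
    return "refer to officer for standard review"

# Generative AI: novel content produced in response to a user prompt.
def generate(prompt: str) -> str:
    """Placeholder for a large language model call (no real API is implied)."""
    return f"[model-generated reply to: {prompt!r}]"

print(assess_eligibility(income=38_500, dependants=1))
print(generate("Summarise my payment options in plain English."))
```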
What is ethical AI?
AI’s high level of automation and reliance on large datasets may create risks for users, due to potentially flawed information processing and/or a lack of oversight of data processing (‘AI opaqueness’). Such risks include disinformation, data poisoning, AI bias, and privacy breaches when AI is trained on customer data.
To mitigate these risks, CSIRO’s Artificial Intelligence: Australia’s Ethics Framework (2019) introduced the concepts of ‘ethical’ and ‘responsible’ AI, along with 8 core principles. AI must:
- generate net benefits,
- do no harm,
- comply with regulations and legislation,
- protect privacy,
- be fair,
- be transparent and explainable,
- be contestable, and
- be accountable.
These mirror the OECD’s AI Principles from 2019 and the Statement on artificial intelligence, robotics and autonomous systems released by the European Group on Ethics in Science and New Technologies (EGE) in 2018.
In 2023, the IBM Center for The Business of Government published 6 recommendations for government-implemented AI in Pathways to Trusted Progress with Artificial Intelligence. The paper (pp 6-7) suggested governments should:
- Promote AI-human collaboration when appropriate,
- Focus on justifiability,
- Insist on explainability,
- Build in contestability,
- Build in safety, and
- Ensure stability.
Ethical AI in the Australian public sector
The Australian Government’s 2023 Data and Digital Strategy notes that public agencies employ AI technologies ‘to predict service needs, gain efficiencies in agency operations, support evidence-based decisions and improve user experience’ (p 9). The APS reportedly uses several AI and ADM systems, with the Australian Taxation Office setting the pace in AI adoption. Examples include:
- chatbots and virtual assistants in online shopfronts,
- document and image recognition in border and fraud control activities,
- free-text recognition and translation software across agencies, and
- entitlement calculation within social services.
The AIGT seeks to develop a whole-of-government governance, risk mitigation and capability enhancement program to ensure the APS implements AI safely and responsibly. Building on the AI Ethics Framework, the AIGT outlined 4 key principles:
- Responsible deployment: AI should be deployed responsibly and only in low-risk situations.
- Transparency and explainability: disclose when AI is used and why its use was warranted.
- Privacy protection and security: use only publicly available information.
- Accountability and human-centred decision-making: the final word should rest with a human (see the sketch after this list).
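As an illustration only, the transparency and accountability principles might translate into a workflow like the following Python sketch, in which every AI recommendation is logged (so its use can be disclosed) and a human officer retains the final decision. All names and fields are hypothetical and not drawn from any APS system.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)

@dataclass
class AIRecommendation:
    claim_id: str
    suggestion: str   # what the model recommends
    rationale: str    # why AI use was warranted (transparency)

def decide(rec: AIRecommendation, officer_decision: str) -> str:
    """The human officer's decision is final; the AI input is advisory only."""
    # Transparency and explainability: record that AI was used, and why.
    logging.info("AI assisted claim %s: suggested %r (%s)",
                 rec.claim_id, rec.suggestion, rec.rationale)
    # Accountability: the recorded outcome is the human's decision.
    return officer_decision

rec = AIRecommendation("C-1042", "approve", "low-risk triage of a routine claim")
print(decide(rec, officer_decision="approve"))
```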
This approach aligns with guidelines developed by government entities and research centres for the safe and responsible use of AI in the APS, emphasising risk mitigation and trustworthiness:
- In 2019, the Commonwealth Ombudsman updated the Automated Decision-making Better Practice Guide, advising service delivery agencies on safe and transparent ADM implementation.
- In August 2020, the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+s) commenced work to ‘support the development of responsible, ethical and inclusive automated decision-making’.
- In 2021, the IBM Center for The Business of Government and Queensland University of Technology published Artificial Intelligence in the Public Sector: A Maturity Model, a blueprint for AI adoption in public agencies.
- In March 2021, the Australian Human Rights Commission released its Human Rights and Technology Final Report, which proposed ‘modernising a number of laws, government policies, education and resourcing’ to capture AI’s capabilities and mitigate its risks (p 3).
- In May 2023, DISR released a discussion paper, Safe and responsible AI in Australia, which addressed risk-based regulatory approaches to AI.
- In July 2023, the DTA released Interim guidance on generative AI for Government agencies, Interim guidance for Australian Public Service (APS) staff and a policy draft on the Australian Government Architecture website on the safe, ethical and transparent use of generative AI.
- In September 2023, the Albanese government announced proposed amendments to the Privacy Act to safeguard ADM-impacted personal data.
- In October 2023, the Department of the Prime Minister and Cabinet released an insights briefing, How might artificial intelligence affect the trustworthiness of public service delivery?
Why ethical AI matters
As recent research shows, most Australians still distrust AI, perceiving it as unsafe and error-prone. The 2023 UK AI Safety Summit at Bletchley Park discussed the ‘erosion of social trust’ as one of the long-term ‘existential-level threats’ of unregulated AI. While AI advocates emphasise the benefits of increased productivity, faster service delivery and reduced costs, adopting safe, transparent and accountable AI also makes good ‘public business’ sense.
First, it fosters public trust in AI and embeds public values, which together drive the successful delivery and uptake of AI-powered public services. Second, ethically guided AI governance mitigates ‘legal and reputational risk’ to public agencies. Third, and more broadly, ethical AI is ‘both the necessary, and perhaps defining, feature of AI in Australia’, through which the country can position itself in the digital market as an ‘ideal test bed for new developments and an ethical AI strategy’.