Ethical AI and the Australian public sector

Tags: AI, Public Sector, Government, Regulation, Ethics
Dr Antonino Nielfi

Australia’s AI Month commences on 15 November amidst ongoing debate about a national AI strategy, regulatory framework, and an AI Commission, mirroring similar US and EU initiatives. Australia has committed to implementing Responsible AI in industry, the military and the public sector, and signed the 2023 Bletchley Declaration on AI Safety. In September 2023 the Digital Transformation Agency (DTA) and the Department of Industry, Science and Resources (DISR) established the Artificial Intelligence in Government Taskforce (AIGT). It aims to develop an integrated regulatory framework for safe, ethical, and responsible AI use across the Australian Public Service (APS). This Flagpost discusses the principles informing ethical AI and current related Australian government initiatives.

What is AI?

DISR has defined AI as:

an engineered system that generates predictive outputs such as content, forecasts, recommendations, or decisions for a given set of human-defined objectives or parameters without explicit programming (p 5).

The Chief Scientist’s Rapid Response Information Report: Generative AI, released in March 2023, distinguished between ‘conventional AI’ and ‘generative AI’ (p 2). Generative AI can categorise or identify features of input and generate novel content ‘in response to a user prompt’, whether as text (Large Language Models) or other media (Multimodal Foundation Models). By contrast, Automated Decision Making (ADM) uses conventional AI to assist humans in deciding certain outcomes.

What is ethical AI?

AI’s high level of automation and reliance on large datasets may create risks for users, due to potentially flawed information processing and/or a lack of oversight of data processing (‘AI opaqueness’). Such risks include disinformation, data poisoning, AI bias, and privacy breaches when AI is trained on customer data.

To mitigate these risks, CSIRO’s Artificial Intelligence: Australia’s Ethics Framework (2019) introduced the concepts of ‘ethical’ and ‘responsible’ AI and 8 core principles. AI must:

  1. generate net benefits,
  2. do no harm,
  3. comply with regulations and legislation,
  4. protect privacy,
  5. be fair,
  6. be transparent and explainable,
  7. be contestable, and
  8. be accountable.

These mirror the OECD’s AI Principles from 2019 and the Statement on artificial intelligence, robotics and autonomous systems released by the European Group on Ethics in Science and New Technologies (EGE) in 2018.

In 2023, the IBM Center for The Business of Government published 6 recommendations for government-implemented AI in Pathways to Trusted Progress with Artificial Intelligence. The paper (pp 6-7) suggested governments should:

  1. Promote AI-human collaboration when appropriate,
  2. Focus on justifiability,
  3. Insist on explainability,
  4. Build in contestability,
  5. Build in safety, and
  6. Ensure stability.

Ethical AI in the Australian public sector

The Australian Government’s 2023 Data and Digital Strategy notes that public agencies employ AI technologies ‘to predict service needs, gain efficiencies in agency operations, support evidence-based decisions and improve user experience’ (p 9). The APS reportedly uses several AI systems and ADMs, with the Australian Taxation Office setting the pace in AI adoption. Examples include:

  • chatbots and virtual assistants in online shopfronts,
  • document and image recognition in border and fraud control activities,
  • free-text recognition and translation software across agencies, and
  • entitlement calculation within social services.

The AIGT seeks to develop a whole-of-government governance, risk mitigation and capability enhancement program to ensure the APS implements AI safely and responsibly. Building on the AI Ethics Framework, the AIGT outlined 4 key principles:

  1. Responsible deployment: deploy AI only in low-risk situations.
  2. Transparency and explainability: disclose when AI is used and why its use was warranted.
  3. Privacy protection and security: use only public information.
  4. Accountability and human-centred decision-making: the final decision should rest with a human.

This approach aligns with guidelines developed by government entities and research centres for the safe and responsible use of AI in the APS, emphasising risk mitigation and trustworthiness.

Why ethical AI matters

As recent research shows, most Australians still distrust AI, which they perceive as unsafe and error-prone. The 2023 UK AI Safety Summit at Bletchley Park discussed ‘erosion of social trust’ as one of the long-term ‘existential-level threats’ of unregulated AI. While AI advocates emphasise the benefits of increased productivity, faster service delivery and reduced costs, adopting safe, transparent, and accountable AI also makes good ‘public business’ sense.

First, it fosters public trust in AI and embeds public values in it, which drive the successful delivery and uptake of AI-powered public services. Second, ethically guided AI governance mitigates ‘legal and reputational risk’ to public agencies. Third, and more broadly, ethical AI is ‘both the necessary, and perhaps defining, feature of AI in Australia’ through which the country can position itself in the digital market as an ‘ideal test bed for new developments and an ethical AI strategy’.