Chapter 5
Automated decision-making
1.1This chapter considers automated decision-making (ADM).
1.2As outlined in Chapter 1, ADM describes the use of computer systems to automate all or part of an administrative decision-making process. This can include using ADM to:
make a decision;
make an interim assessment or decision leading up to the final decision;
recommend a decision to a human decision-maker;
guide a human decision-maker through relevant facts, legislation or policy; and
automate aspects of the fact-finding process which may influence an interim decision or the final decision.
1.3The benefits of ADM include ‘the potential to increase the efficiency, accuracy and consistency of decisions’, freeing up time which would otherwise be spent on administrative tasks. Submissions to the inquiry provided a number of examples of how ADM is currently used: by governments for the allocation of government services, by medical practitioners to assist in diagnostics, and in recruitment to assist with the evaluation and selection of candidates.
1.4This chapter considers:
the risks around the use of Artificial Intelligence (AI) in connection with ADM in relation to bias and discrimination; transparency; and accountability; and
the views of inquiry participants on approaches to regulating ADM, and the Australian Government’s current policy initiatives relevant to AI in the context of ADM.
Background
1.5ADM has existed since before the advent of artificial intelligence (AI). The Commonwealth Ombudsman’s Automated Decision-Making Better Practice Guide notes that ADM can range from ‘traditional rules-based systems…[such as] a system which calculates a rate of payment in accordance with a formula set out in legislation…[to] more specialised systems which use automated tools to predict and deliberate, including through the use of machine learning’.
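The spectrum described by the Ombudsman can be illustrated with a brief, purely hypothetical sketch: the first function applies a fixed formula of the kind that might be set out in legislation, while the second ‘learns’ a decision rule from historical data. The formula, data and model below are invented for illustration only and are not drawn from any system described in evidence.

```python
# Purely illustrative: contrasts a traditional rules-based ADM step with a
# machine-learning-based one. The payment formula and model are hypothetical,
# not drawn from any actual legislation or government system.
from sklearn.linear_model import LogisticRegression


def rate_of_payment(base_rate: float, dependants: int, fortnightly_income: float) -> float:
    """Rules-based ADM: a fixed formula of the kind that could be set out in legislation."""
    supplement = 50.0 * dependants                                   # hypothetical per-dependant supplement
    income_reduction = max(0.0, fortnightly_income - 150.0) * 0.5    # hypothetical income taper
    return max(0.0, base_rate + supplement - income_reduction)


# Machine-learning-based ADM: the decision logic is inferred from historical data
# rather than written down as an explicit rule.
historical_features = [[1, 0.0], [2, 300.0], [0, 800.0], [3, 100.0]]  # e.g. [dependants, income]
historical_outcomes = [1, 1, 0, 1]                                    # past eligibility decisions

model = LogisticRegression().fit(historical_features, historical_outcomes)

print(rate_of_payment(base_rate=500.0, dependants=2, fortnightly_income=300.0))  # explicit and traceable
print(model.predict([[2, 450.0]]))                                               # learned, not rule-based
```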
1.6While ADM systems may therefore not necessarily involve the use of AI, ADM and AI share a number of policy considerations and concerns including risks in relation to bias, discrimination and error; transparency; and accountability.
1.7With the potential for AI-driven ADM systems increasing as AI becomes more capable and prevalent, one of the challenges for governments in Australia and worldwide is to ensure that regulation of AI addresses the risks of AI generally as well as specifically in connection with its use for ADM.
1.8In findings that are likely to be broadly applicable to all Australian governments, a 2024 report on the use of ADM by the NSW government found that ADM is used across ‘every NSW state government portfolio’; and that, while there is currently limited use of generative AI, there is ‘considerable interest’ in its potential for use by government, including by ‘incorporating [AI’s] predictive analytics into existing structured decision-making processes’.
Bias and discrimination
1.9Many inquiry participants commented on the potential for bias in ADM systems to produce discriminatory and unfair outcomes. Mrs Lorraine Finlay, the Human Rights Commissioner, Australian Human Rights Commission, noted that, ‘despite the perception that AI-based decision-making is inherently objective and free from human prejudice and error’, AI technology has the potential to replicate human error, often at large scale. The Deakin Law School submission also commented on the capacity of AI systems, including ADM, to replicate discriminatory outcomes at larger scales:
AI is pervasive in nature as it allows algorithms to be applied to large groups of similarly-situated applicants…[meaning that] human rights violations that may ordinarily be limited to a small cohort of affected individuals tend to be amplified and become systemic in nature.
1.10The ARC Centre of Excellence on Automated Decision-Making and Society (ARC Centre) explained that the risk of AI bias is ‘well known’ and arises from biases in the data used to train AI models:
AI draws inferences from patterns in existing data. When biases are embedded in the data used to train models, models tend to perpetuate those biases...
1.11The submission of Associate Professor Alysia Blackham advised that biases in the data used to train AI can arise due to issues with the data, including ‘poor quality or inappropriate data’, ‘out-of-date data’, and data that ‘over- or under-represents certain groups’. Bias can also arise from the design of the algorithm used by an AI system, or in the way that the algorithm is applied to a task.
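As a purely illustrative sketch of how under-representative training data can produce skewed outputs (using invented data and a generic statistical model, not any system referred to in evidence):

```python
# Purely illustrative sketch with invented data: a model trained on data in which one
# group is under-represented and historically disfavoured tends to reproduce that
# pattern in its own predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training set: group A = 900 records, group B = 100 records.
# Historical favourable outcomes are skewed (60% for group A, 20% for group B).
group = np.concatenate([np.zeros(900), np.ones(100)])        # 0 = group A, 1 = group B
skill = rng.normal(size=1000)                                 # ability distributed identically across groups
favourable = (rng.random(1000) < np.where(group == 0, 0.6, 0.2)).astype(int)

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, favourable)

# For otherwise identical candidates, the model predicts very different outcome probabilities.
print(model.predict_proba([[0.0, 0.0]])[0, 1])   # group A candidate, average skill
print(model.predict_proba([[1.0, 0.0]])[0, 1])   # group B candidate, average skill
```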
1.12Many inquiry participants identified concerns about the potential for bias in AI systems to lead to discriminatory outcomes. In the context of ADM, the automation of administrative decision-making can impact large numbers of people from vulnerable and disadvantaged groups. The Queensland Nurses and Midwives’ Union, for example, noted that, in the context of medical decision-making, bias in ADM systems arising from unrepresentative datasets has ‘the potential to unintentionally reinforce and amplify existing societal disadvantage’, including racial and gender biases:
…[AI] models trained on aggregated data may result in homogenised outputs that fail to consider biological, cultural, religious, and other differences and exacerbate inequities already experienced by marginalised...For example, AI healthcare models that lack sufficient data on Aboriginal and Torres Strait Islander populations may misdiagnose patients or suggest inappropriate care options...
1.13A number of submissions provided examples of AI bias in connection with ADM in the employment and recruitment sphere. Associate Professor Alysia Blackham, for example, cited the example of an Amazon recruitment tool designed to review job applicants’ resumes to determine which applicants were most likely to be successful recruits. Because of the over-representation of men in the resumes on which it was trained, the tool was found to systematically discriminate ‘against women applicants for software development and technical jobs’, having ‘learnt’ that male applicants were to be preferred.
1.14The submission of the Shop, Distributive and Allied Employees’ Association (SDA) outlined concerns about the use of ADM in relation to rostering. While such systems may be intended to operate objectively and ‘avoid issues of preferential treatment by managers’, the SDA noted that they impacted more harshly on workers with caring responsibilities and on casual and part-time workers. Accordingly, the SDA recommended that such systems be required to comply with the relevant industrial instruments ‘governing rostering, such as Awards and Enterprise Agreements, and not result in discriminatory outcomes’.
1.15The Victorian Trades Hall Council submission recommended that the government ‘ban the use of [ADM]…in industrial relations and employment law matters,’ including ‘a total ban on using artificial intelligence to hire, fire, discipline or promote workers’.
1.16Submitters and witnesses also raised concerns about the use of ADM in medical and healthcare settings. For example, Suicide Prevention Australia expressed concern that ‘generative AI could inadvertently perpetuate social biases and stereotypes…[through] datasets which are not diverse or representative of diverse population groups’, which could ‘have significant repercussions and potentially reduce access to care or increase stigma’:
Research indicates that stigma and discrimination can increase the risk of suicide. It is critical that all AI-generated data and tools which have the potential to guide decision-making promote equality and support diversity and inclusion. The Government should work with technology companies to ensure that AI-generated content does not entrench disadvantage and perpetuate bias and stereotypes which could potentially increase suicide risk among minority groups.
1.17While the majority of inquiry participants highlighted the potential harms of bias in AI and ADM systems, Professor Anton van den Hengel argued that AI bias is more transparent and therefore more amenable to correction than human bias. He noted:
Humans are inherently biased. The difference between machines and humans is that you can ask a machine how biased it is and it will give you the true answer…The advantage of AI is that its decision-making process is transparent. It is based on the data, and [while] we do have to be careful...[there] are really good technological ways to address these problems that don’t exist with humans.
1.18Similarly, Monash University pointed to AI’s potential as a tool to ‘tackle bias’, arguing that patterns of bias in AI systems are ‘more easily identifiable and thus…able to be remedied quicker’.
1.19However, the ARC Centre suggested that the correction of bias in AI systems remains a challenge:
Bias is a real and important risk from AI systems, but risk mitigation has so far tended to focus on technical solutions and de-biasing toolkits. Merely trying to control bias at the level of output has not yet proven effective, and even the largest technology companies struggle to deal with it credibly…
Transparency
1.20The evidence received by the inquiry revealed a broad consensus that transparency is a key requirement for ADM systems to mitigate the risk of bias and discrimination, and to ensure that a person affected by a decision made or supported by ADM has the capacity to know that an ADM system was used to make the decision, to understand the decision and to challenge and seek review of that decision.
1.21In this regard, some submitters and witnesses felt that the current policy framework does not provide for sufficient transparency of the data, algorithms and other inputs used to arrive at decisions using ADM systems.
1.22The problem of a lack of transparency in AI or ADM systems is often referred to as the ‘black box’ issue. For example, referring to the specific context of healthcare, the Department of Health and Aged Care submission noted that ‘black-box decision-making’ by AI systems, in which the process or reasoning by which a decision is reached is unclear, leads to a lack of trust in and understanding of decisions:
Many AI algorithms are ‘black box technologies’, which have internal mechanisms that are non-transparent and difficult to interpret or explain. This results in both low trust and understanding of AI-made recommendations.
1.23Professor Steve Robson, President of the Australian Medical Association, commented that the making of decisions assisted by ‘black box’ models of AI can make decision-making uncertain and therefore more difficult, and emphasised the importance of ensuring that people using AI as a ‘co-pilot’ in decision-making understand the ‘strengths and limitations’ of the ADM system.
1.24The Australian Securities and Investments Commission, discussing the use of AI and ADM in relation to credit assessments, noted that a lack of transparency can also impact the ability of people adversely affected by ADM to challenge or object to decisions:
AI-powered credit scoring can use both conventional and unconventional data sources (e.g. social media activity and mobile phone use) to evaluate credit worthiness. These practices can unfairly discriminate and risk financial exclusion for the most vulnerable, particularly with opaque AI systems making it difficult to challenge outcomes.
1.25The Law Council of Australia (LCA) called for ADM systems to be made transparent to ‘ameliorate the risk of black-box decisions…impacting vulnerable groups’, and outlined the importance of transparency as follows:
Transparency is critical for the responsible use of ADM by Australian organisations, both in the public sector and private sector. Individuals should know when and how ADM is being used in any way which significantly affects their human rights, their legitimate expectations to be informed of how and why they are being singled out for differentiated treatment, and their legitimate expectation that an automated decision is reasonable having regard to the circumstances in which it is made and the impact that this automated decision might reasonably be expected to have on affected humans and the environment.
1.26The LCA argued further that, given the complexity of AI and ADM systems, mere transparency of such systems is not in itself sufficient, and that mitigating the risks of data- or algorithm-based bias and discrimination requires the provision of ‘meaningful and intelligible explanation’ of how an AI system was developed and designed to operate:
[The risks of discriminatory outcomes]…are not mitigated simply by the subject of the decision being informed that AI was used. Rather, effective mitigation also requires meaningful and intelligible explanation about how the AI was deployed. This should include disclosure of the data sets on which it was trained, how the inputs are made into outputs, the rules on which the system operates, how biases have been mitigated, and other details relevant to the circumstances.
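As a simple, hypothetical illustration of one form such an explanation could take, a linear scoring model allows each input’s contribution to an individual decision to be reported directly; the features and weights below are invented for illustration:

```python
# Hypothetical illustration of a per-decision explanation for a simple linear scoring
# model: every feature's contribution to the individual decision can be listed.
# Feature names and weights are invented for illustration.
features = {"declared_income": 420.0, "dependants": 2, "weeks_employed": 10}
weights = {"declared_income": -0.004, "dependants": 0.5, "weeks_employed": 0.02}
intercept = 1.0

contributions = {name: weights[name] * value for name, value in features.items()}
score = intercept + sum(contributions.values())

print(f"decision score: {score:.2f}")
for name, contribution in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {name}: {contribution:+.2f}")   # contribution of each factor to this decision

# Complex 'black box' models do not decompose this cleanly, which is why additional
# explanation tooling or disclosure of training data and rules is typically sought.
```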
1.27Professor Edward Santow observed that ‘interpretability’ of AI and ADM decisions, as a feature of transparency, should encompass both the ability to understand the reasoning by which an AI or ADM system arrives at a decision, and the ability to determine whether that decision complies with any [relevant] technical or legal requirements. The NSW Council for Civil Liberties observed that existing laws governing administrative decision-making may need to be updated to ensure that ADM decisions can be interrogated for consistency with natural justice principles and legal requirements.
1.28Digital Rights Watch (DRW) submitted that AI developers should be required to ‘declare the data sources for their foundational models as well as any subsequent data or instruction sets [used for] training and fine-tuning the model’. It considered that this is particularly important given the widespread practice of AI developers training AI on personal information originally collected for a different purpose and often sourced from the ‘data broker industry’.
1.29Further, some submitters noted that, as AI systems can continue to evolve or change after development, additional monitoring and evaluation is needed even after an AI or ADM system is deployed. The LCA, for example, recommended:
ADM processes should undergo random audits conducted by a human to ensure any errors or potential for biases are identified, particularly for those demographics which may be unable to easily seek legal assistance.
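One simple metric that such an audit might examine is the ratio of favourable outcome rates between demographic groups (sometimes assessed against the United States ‘four-fifths’ rule of thumb). The sketch below uses invented decision logs and is illustrative only; it is not a method proposed by the LCA:

```python
# Illustrative only: a simple 'selection rate ratio' check of the kind an audit might
# run over logged ADM outcomes (sometimes compared against the US 'four-fifths' rule
# of thumb). The decision logs are invented; a real audit would examine far more.

def selection_rate(decisions: list) -> float:
    """Share of favourable (1) decisions in a log of outcomes."""
    return sum(decisions) / len(decisions)

group_a_decisions = [1, 1, 0, 1, 1, 0, 1, 1]   # 1 = favourable automated decision
group_b_decisions = [1, 0, 0, 0, 1, 0, 0, 0]

ratio = selection_rate(group_b_decisions) / selection_rate(group_a_decisions)
print(f"selection rate ratio: {ratio:.2f}")    # ratios well below 0.8 may warrant human review
```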
1.30However, calls for transparency around the development of foundational or large language models (LLMs) were resisted by some submitters on the basis of commercial concerns. Mr Simon Bush, Chief Executive Officer, Australian Information Industry Association, for example, stated:
In terms of open source [transparency] for the larger LLMs, if someone has invested tens of billions of dollars in a capability and it is embedded into their software, I don't think it is fair to suggest that that should be available for everyone to get that algorithm.
1.31The NSW Council for Civil Liberties, while acknowledging that private entities could have a proprietary interest in AI systems, argued that government should be held to higher standards of transparency:
While private entities have an interest in retaining proprietary ownership over the codes, models, or data in their use of AI, Government agencies should be held to higher standards of transparency and accountability for the purposes of upholding civic society and individual protection.
1.32Mrs Rachael Greaves, Chief Executive Officer, Castlepoint Systems, in discussing risk-based approaches to regulation of AI, observed that transparency considerations, or whether to operate an AI system as a ‘black box’ or ‘clear box’ system, may differ depending on the risks attendant on the purpose for which the system is used:
We use explainable AI, which is clear box…but we don't operate in the cultural, media and arts space. If we did, we might use a…[black box] system because the impacts and the risks of not being able to explain it aren't high...
1.33A number of inquiry participants questioned whether generative AI should be used at all in connection with ADM, on the basis that generative AI systems are predictive and will always have the potential to produce outputs or decisions that cannot be explained.
1.34Professor Jie Lu AO noted that, unlike machine learning models of AI, which generate consistent outputs from consistent inputs, generative AI can produce ‘new’ or different content from the same input. The Existential Risk Observatory submitted that, as generative AI will therefore ‘make decisions that its creators are unable to explain’, generative AI technology is unable to be entirely transparent and should be prohibited from use ‘in critical infrastructure and government services’.
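The distinction drawn by Professor Lu can be illustrated with a toy sketch: a deterministic scoring function always returns the same output for the same input, whereas a generative-style system samples from a probability distribution and may return different outputs across runs. Both functions below are invented for illustration and do not represent any actual model:

```python
# Toy illustration only: a deterministic scorer always maps the same input to the same
# output; a generative-style sampler draws from a distribution and may return different
# outputs on repeated runs. Both functions are hypothetical.
import random


def deterministic_score(fortnightly_income: float) -> str:
    return "eligible" if fortnightly_income < 500.0 else "ineligible"   # repeatable


def generative_style_response(prompt: str, temperature: float = 1.0) -> str:
    continuations = ["eligible", "likely eligible", "requires manual review", "ineligible"]
    weights = [0.4, 0.3, 0.2, 0.1]
    if temperature == 0.0:
        return continuations[0]                                   # greedy decoding is repeatable
    return random.choices(continuations, weights=weights, k=1)[0]  # sampling is not


print(deterministic_score(400.0), deterministic_score(400.0))    # identical outputs
print(generative_style_response("assess claim"), generative_style_response("assess claim"))  # may differ
```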
1.35Castlepoint Systems also considered LLMs to be too high-risk for use in ADM in regulatory or other contexts where a wrong decision could have serious consequences for individuals:
There are certainly uses of LLMs…where the decisions that arise could not reasonably cause harm…However, we don't believe there are applications for LLMs in those more regulatory and sensitive contexts, where a wrong decision could cause harm to an individual. Our position is that if the algorithm can't be explained, traced or understood end to end, then it's less likely to be able to be contested. Therefore, if harm does arise to individuals, it's going to be harder to unpick [the decision]…and to solve the problem.
Review of ADM decisions
1.36As noted above, the requirement for transparency of AI systems is related not only to navigating the risks of bias and discrimination but also to the ability of a person affected by an ADM decision to challenge and seek review of that decision.
1.37Broadly speaking, inquiry participants called for government to ensure that ADM decisions are subject to the principles of natural justice and administrative law which underpin the ability of affected persons to appeal administrative decisions. The Australian Council of Social Service (ACOSS) submission, for example, called for ADM to be ‘reviewable and procedurally fair’, and specifically for a person affected by an ADM decision to be provided with the following:
reasons for the decision, including a reasonably comprehensible and technically accurate explanation of how artificial intelligence has been used in the decision,
a reasonable opportunity to challenge the decision through a procedurally fair process, in which the person is informed about and supported to understand how to challenge the decision, and
information about supports available to the person to assist them in challenging the decision, or about how to access other relevant options or support services where the decision is adverse to the person.
1.38The LCA noted that, from an administrative law perspective, ADM conducted by ‘black box’ AI systems is problematic, as it can prevent people affected by discriminatory outcomes from understanding or questioning decisions. The LCA considered that effective mitigation of this risk requires ‘meaningful and intelligible explanation about how the AI was deployed’ including:
…disclosure of the data sets on which it was trained, how the inputs are made into outputs, the rules on which the system operates, how biases have been mitigated, and other details relevant to the circumstances.
1.39The LCA cited the 2021 report of the NSW Ombudsman on the use of AI and ADM as an example of work already undertaken to ensure the use of ADM by public sector agencies within the framework of ‘core administrative law principles such as procedural fairness’. The LCA called for:
…comprehensive regulatory reform to ensure that the use of automated decision making (ADM), including by the Australian Government, is transparent, capable of review, and consistent with administrative law principles…
Accountability
1.40Many inquiry participants raised concerns about accountability in relation to AI systems and ADM. Accountability in this context involves the question of human involvement with ADM systems; and liability or responsibility for the outcomes and consequences of ADM decisions, especially when those decisions lead to harm.
Human involvement with ADM
1.41AI systems, including ADM, can be automated to perform actions or produce outputs without any human involvement or, alternatively, to produce outputs that are augmented by varying degrees of human oversight or involvement, sometimes referred to as the ‘human in the loop’.
1.42In the context of ADM, the evidence of inquiry participants broadly supported human involvement in, and accountability for, ADM decisions, particularly in settings where decisions can have significant impacts on the rights or wellbeing of individuals.
1.43A number of submissions noted the importance of ensuring that ultimate responsibility for ADM in medical and healthcare settings resides with humans. The Australian Medical Association, for example, submitted:
AI must never compromise medical practitioners’ clinical independence and professional autonomy. The ultimate decision on patient care should always be made by a clinician to protect against algorithmic error and safeguard patient interests.
1.44The Royal Australian and New Zealand College of Radiologists emphasised that, while ADM can assist, decisions around healthcare must be primarily made by doctors in consultation with their patients:
Whilst AI can enhance decision making capability, final decisions about care are made after a discussion between the doctor and patient, considering the patient’s presentation, history, options and preferences.
1.45Dr Sandra Johnson observed that clear lines of responsibility and accountability are needed for instances where ‘harm occurs’, noting that a ‘strong legal framework’ around the use of ADM in healthcare is required to ‘ensure safety and reliability for patients and the community’ and provide clear lines of accountability:
…the medical profession needs a legal backup framework and support so that the doctor is not held accountable for machines that have been allowed into the country, allowed into the hospital or allowed into the clinical practice when the doctor didn’t fully understand issues related to the algorithms, the data gathering and so on.
1.46In relation to legal practice, the LCA also argued that lawyers should retain ultimate involvement with and responsibility for decision-making if using AI-driven ADM:
…AI is [usually] part of a decision-making process or decision chain where it may or may not be reliable for the reliance that a human places upon it…The requirement to practise law is a requirement imposed on humans, and those humans should be exercising discretion appropriately to give reliable and accurate legal advice, whether it's influenced or assisted by AI or not.
1.47In relation to government use of ADM, the NSW Council for Civil Liberties recommended that all public sector uses involve a human in the loop and provide for persons affected to ‘speak to a natural person’ in relation to decisions made. The LCA, however, submitted that some government decisions ‘should only be made by humans’, and noted that ‘greater clarity is needed over which government decisions are, and are not, currently subject to AI/ADM, across a range of portfolios’.
1.48The LCA observed in the same vein that administrative law also needs to develop a principled basis for determining what types of decisions are appropriate to involve ADM, and for administrative agencies to clearly identify where ADM is being employed:
There is a gap in existing administrative law principles that needs to be filled, to ensure that administrative decision-makers think carefully about when it is appropriate to incorporate algorithmic decision-making into decision-making processes, and there is a need for a significant uplift in capabilities of administrative agencies to evaluate the extent to which they are using automated systems.
1.49As an example of standards that could be applied to the use and oversight of AI-driven ADM, Professor Peter Leonard, a member of the Media and Communications Committee of the LCA, advised:
[Such standards could]…require, for example, that automated systems are, demonstrably, at least as reliable as humans in making decisions for which they will be relied upon, and to ensure that the humans in the loop have the right skills to evaluate the reliability of the algorithms or AI on which they depend, so they're not just any humans in the loop but humans that understand the limitations of…outputs of automation, presented to them to guide their decisions.
Liability for decisions made using ADM
1.50The evidence received by the inquiry demonstrated a range of views regarding the question of liability for harms arising from the use of AI and ADM systems.
1.51Mrs Lorraine Finlay, Human Rights Commissioner, Australian Human Rights Commission, noted that there is currently no legal framework and therefore a lack of legal clarity in relation to liability for harms arising from the use of ADM. Mrs Finlay submitted:
It is absolutely critical that there is a clear answer given by the Australian government in any [AI] legislation put forward from simply a rule-of-law perspective of making sure that people understand where legal liability lies and can therefore take approaches that mitigate the risk [of using AI systems]...
1.52The Tech Council of Australia observed that the attribution of liability for AI systems is difficult due to the distribution of responsibilities across the ‘tech stack’ that develops and deploys an AI system:
Within the tech stack, we can speak about developers, about folks who are creating the software, but within that you also have different actors who are involved—not just developers of applications but you might have developers of API services, so they may be completely different entities. So [there is a]…distributed chain of responsibility…[Due to] the distributed nature of the entities…[and] people that work in the model, it's really hard to answer that question [of ultimate liability].
1.53Ms Anna Jaffe, Director of Regulatory Affairs and Ethics, Atlassian, commenting on the distribution of responsibilities for developing and deploying AI, advised that ‘liability [for harm arising from the use of an AI product] should…attach along the chain to the person or entity that was responsible for doing or not doing the thing that they should have done to avoid the harm’.
Developer liability
1.54However, a number of inquiry participants suggested that liability for AI and ADM systems should rest with the developers and/or vendors of such systems. The LCA, for example, argued that consumer and product laws should apply to ADM systems, requiring AI products to be fit, safe and reliable for the purposes for which they are intended. Similarly, Castlepoint Systems argued that vendors should be liable in cases where users and workers make wrong decisions based on the automated decisions and inputs of AI systems:
…as we roll out AI more broadly and users and workers rely on decisions and inputs from AI processes is that, if they make wrong decisions informed by those processes, then they should be in some way protected from culpability.
1.55Dr Sandra Johnson submitted that, in the healthcare sector, developers of AI and ADM systems should be subject to processes for the approval of products in relation to safety and reliability, and suggested that the remit of existing product safety bodies such as the Therapeutic Goods Administration could be expanded to include AI systems for use in healthcare settings.
1.56The LCA noted, however, that it may be difficult to establish legal liability in relation to the foundational models or LLMs that form the basis of many AI applications. Professor Peter Leonard, a member of the LCA Media and Communications Committee, observed that, because of the range of potential uses for a foundational model, it may be difficult to establish that its developer should have anticipated particular uses and their attendant risks. Professor Keith McNeil, who appeared at a hearing of the committee in a private capacity, outlined this difficulty with reference to medical settings:
If you have a company that develops an algorithm specifically for use in medicine, that is one thing that you could pretty well regulate in terms of its outcomes. But, if you use something 'off label', as we say—ChatGPT, for instance, which was never designed to be a medical tool; it just happens to be useful in some areas—[then that is more difficult to regulate] because these algorithms won’t necessarily be developed specifically for medical use...
1.57The ARC Centre observed that consumer law may not be sufficient in its current scope, noting, for example, that consumer guarantees would likely apply to ‘downstream [AI-driven] app providers’ but not to ‘upstream’ developers of foundation AI systems such as LLMs. The ARC Centre submission suggested that key consumer law concepts such as ‘product liability’ may need to be expanded, for example, to capture manufacturers of AI products, including in relation to ongoing software updates to foundational AI systems.
User liability
1.58In contrast, other submitters suggested that liability for errors or harms arising from the use of AI systems should reside with the user or business employing the AI or ADM system. Mr Joseph Longo, Chair, Australian Securities and Investments Commission, while noting that businesses will need to build their expertise in relation to AI, observed that the attribution of legal responsibility to businesses would be practical from a regulatory perspective, as businesses are best placed to assess the risks of using AI in the context of their own market and operations.
1.59A number of submitters and witnesses from healthcare backgrounds, for example, preferred that clinicians remain responsible for decisions made with the assistance of ADM, reflecting their ultimate responsibility for the care of patients. Similarly, the Australian Council of Trade Unions (ACTU) considered that employers should be liable for any harmful or discriminatory impacts of ‘decisions that are made with the assistance of AI’.
Regulation of AI in the context of ADM
1.60Many inquiry participants considered that regulation of AI requires specific consideration of ADM in relation to the issues discussed above, namely bias and discrimination; transparency; and accountability. However, there was a range of views expressed about the most effective approach to be taken by government.
1.61At a broad level, the LCA called for ‘comprehensive regulatory reform to ensure that the use of…[ADM], including by the Australian Government, is transparent, capable of review, and consistent with administrative law principles’. In terms of the approach to regulating ADM, the LCA recommended:
…consideration of the regulatory models adopted by other jurisdictions and to determine an optimal and bespoke approach for Australia that reflects the nuances of Australia’s pre-existing constitutional and regulatory framework, and different local market environment.
1.62The ACOSS submission emphasised the importance of consultation with affected groups over proposed uses of AI by government:
Any government use of automation or AI technology that impacts people’s basic needs or rights should be developed through a genuine co-design process with: people affected by the technology, advocacy and community sector organisations representing people affected, and multidisciplinary experts…
Co-design should continue throughout all different stages of the development of the AI technology to be used in the government service, including research, design, data input, training and piloting of the model…
Rights-based regulation of ADM
1.63A number of submitters and witnesses drew on rights-based concepts and principles in their suggestions for ensuring the responsible use of ADM. The LCA noted that, despite Australia not having a ‘comprehensive, human rights-based framework at the Commonwealth level’, the general concept of a principles- or rights-based approach to assessing the potential impacts of AI on individuals could nevertheless inform Australia’s approach:
In the absence of a comprehensive, human rights-based framework at the Commonwealth level, there should be a principled approach to mitigate risks, such as bias in the input data, automatic bias and algorithmic bias, particularly the impacts on the human rights of vulnerable populations, as well as intrusions on the right to privacy. Framing these considerations through the lens of harm minimisation (that is, considering the potential harms to humans and regulating accordingly) may be one way to [mitigate the risks of AI]…
1.64The Deakin Law School (DLS) submission noted that an advantage of using rights-based approaches to assessing the impacts of ADM is that it allows for broader or systemic consideration of its impacts through public interest litigation, which can ultimately provide ‘greater recourse to individuals to challenge automated decisions on broader rights-protective grounds’. DLS cited in particular a recommendation of the Australian Human Rights Commission (AHRC) report on Human Rights and Technology calling on the government to:
…introduce legislation requiring a human rights impact assessment be undertaken before the Government adopts AI to make administrative decisions, including whether it complies with international human rights law obligations, is subject to appropriate review by human decision makers, and is authorised and governed by legislation.
1.65Commenting on the AHRC’s proposed approach, ACOSS noted that human rights assessment of ADM could ‘build on the strengths of similar existing processes in parliamentary human rights compatibility assessments’, and that the standards for such assessments could be based on existing standards:
To be useful, the standards for [rights-based] assessment of ADM…could follow but need to be more detailed than existing high-level guiding principles, such as Australia’s AI Ethics Principles, or the OECD’s AI Principles…The Commonwealth Ombudsman’s guidelines for automated decision-making could be an example and a starting point for the kind of more detailed features needed in standards for impact assessment. For example, these guidelines provide guidance on managing risks of automated decision-making in cases of discretionary decisions…
1.66Similarly, Associate Professor Alysia Blackham, who appeared before the committee in a private capacity, pointed to Australia’s anti-discrimination schemes as a potential model for regulating AI and ADM.
1.67Noting the potential for ‘biased, discriminatory or other harmful outcomes’, the DRW submission called for the Australian government to introduce a strong human rights framework through the ‘creation and enactment of a federal Human Rights Act’. DRW considered that a legislated federal human rights framework would, inter alia, ‘ensure that human rights are proactively considered in any new legislation related to AI’ and provide a ‘powerful tool’ for individuals to challenge and seek remedies for violations of their rights ‘facilitated by AI and ADM technologies’.
1.68In addition to a legislated human rights framework, DRW called for the creation of a ‘separate but complementary Charter of Digital Rights and Principles, which could specifically focus on the application of human rights to existing and emerging technologies’. DRW pointed to the European Union’s Declaration on Digital Rights and Principles as an example of such an approach.
1.69Dr Caitlin Curtis considered that the articulation of AI-specific rights could ‘complement’ and ‘guide’ regulation of AI and ADM as part of a ‘dual approach’ to create ‘a cohesive, human-centred framework to establish and articulate public expectations and rights with respect to AI systems’. Dr Curtis cited the US AI Bill of Rights and Australia’s AI Ethics Framework as models on which Australia could draw to articulate AI-specific rights. In the context of ADM these could include, for example:
a right to fair employment, retraining and education to ensure that AI-driven automation and decision-making do not result in unfair or discriminatory outcomes in workplaces; and
a right to transparency and non-discrimination in ADM to guarantee workers access to explanations of ADM decisions affecting employment; and prevent bias and discriminatory outcomes in relation to workplace matters such as recruitment, pay and job assignments.
AI regulatory body
1.70A number of inquiry participants urged government to consider the establishment of a specific regulatory body to provide oversight of AI, including the use of AI for ADM.
1.71The AHRC, for example, suggested:
When integrating AI into both government and private sector business models and service, the risk of both automation and algorithmic bias should be mitigated. The establishment of a national AI Commissioner, an independent statutory body tasked with assisting the broad adoption of AI in Australia, would support organisations in their efforts to mitigate the risks associated with these biases.
1.72ACOSS considered that a body dedicated to the oversight of AI is necessary due to the diffusion of responsibility for AI across government and the need for continuous evaluation of ADM systems:
Currently, information and policy development about the different uses of AI technology by government services are spread across multiple agencies and reports, and [there is] no well-communicated or dedicated government function for monitoring, evaluating and improving the use of AI technology across government.
1.73Further, ACOSS noted that such a body would be in keeping with the recommendation of the Robodebt Royal Commission for the establishment of a body ‘with the power to monitor and audit…[ADM] processes’.
1.74DLS also considered that there is a ‘need to establish a specialist AI oversight body in Australia, such as an AI Safety Commissioner’. DLS cited the example of the EU AI Office, established as a specialist body for oversight of AI in preference to reliance on more general legal frameworks such as privacy and consumer protection laws.
1.75DRW suggested that a ‘bespoke’ AI regulator should have a range of powers to ‘supervise the use of AI systems’. In addition to information gathering powers, DRW submitted that such a regulator should:
…have the capacity to impose fines and remedies, and prohibitions on the use of AI systems where they do not meet safety standards…We also suggest [consideration of granting the regulator powers]…to order the retraining of algorithmic technology where there have been identified problems with data provenance, and more general, new powers to restrain the use of AI which has given rise to documented and directly unfair outcomes.
1.76In contrast to calls for establishment of a dedicated AI regulator, other inquiry participants suggested that existing regulatory bodies could be tasked with regulation of AI in specific regulatory contexts.
1.77The Actuaries Institute, for example, called for the federal government to review existing regulatory bodies’ functions in relation to AI to provide clarity on AI and ADM regulation and to instruct ‘all relevant regulators…to issue guidance [on AI and ADM] as needed’. The institute considered that, separate to consideration of other policy responses, this approach could be taken quickly to ensure that guidance on regulation of AI is in place:
While there are also the options of changing regulation or waiting for case law to emerge, we specifically call for guidance, as this can be created relatively quickly and can be targeted towards both hypothetical and real situations. Guidance may be used to clarify any apparent conflicts in regulation across jurisdictions, or to align on language, terminology and interpretation, to reduce any potential confusion for practitioners that seek to interpret and abide by the regulation.
1.78A similar view was expressed by Professor Edward Santow, the Director of Policy and Governance at the UTS Human Technology Institute. Professor Santow called for ‘uplift’ to Australia’s ‘regulatory ecosystem’ to equip regulators with the tools to apply existing laws and regulations to AI.
1.79As an example of a regulator-specific approach, Professor Enrico Coiera, Director of the Centre for Health Informatics at Macquarie University, noted that the Therapeutic Goods Administration could be given the role of undertaking pre-market assessment of AI systems and tools for use in clinical settings, with bodies such as the Australian Commission on Safety and Quality in Health Care undertaking ongoing post-market surveillance and assessment of such systems.
International approaches to regulating ADM
1.80A number of inquiry participants considered international approaches to the regulation of AI and ADM as being instructive for Australia in developing its own regulatory response.
1.81The Community and Public Sector Union (CPSU) submission, for example, urged the government to consider the approaches of other jurisdictions to the governance and risk management of AI ‘to ensure a consistent approach to AI regulation’ between Australia and significant international schemes.
1.82Examples of work being done internationally to address and mitigate the risks of AI and ADM included the following:
General Data Protection Regulation (EU)
1.83The EU General Data Protection Regulation (May 2018) provides an individual right not to be subject to a decision based ‘solely on automated processing’, where that decision is legally binding or significantly affects them.
1.84The data protection regulation requires an affected individual to be provided with a range of information including in relation to the logic involved in the decision-making process, the right to obtain human intervention, and the right to contest the decision.
1.85Where ADM involves the use of certain categories of personal data, the data protection regulation requires the explicit consent of the affected individual, or the decision to be necessary for reasons of substantial public interest.
Directive on Automated Decision-Making (Canada)
1.86Canada’s Directive on Automated Decision-Making (April 2019) is intended to ensure that the use of AI is consistent with core administrative law principles such as transparency, accountability, legality and procedural fairness.
1.87The directive requires that certain high-risk use of ADM by government involve human review, with the level of risk being determined by reference to factors including rights, health and the economic interests of individuals or communities.
1.88The ARC Centre noted that the directive is process-based regulation rather than product-based regulation and commented:
Generally, these processes make it more likely that AI systems will be fairer, more transparent, and that there is more accountability around automated decisions. The Directive is technology neutral, being mainly concerned with the automation aspect of decision-making regardless of the technology used (AI or other forms of automation).
1.89The ARC Centre considered that the directive provides an appealing blueprint for AI regulation in Australia, although it raised concerns about the absence of prohibitions on particularly high-risk uses or harmful outcomes.
Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (US)
1.90The United States Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 2023) instructs various federal agencies to audit and report on their use of AI.
1.91With regard to ADM specifically, the executive order specifies minimum risk management practices for US government uses of AI that impact on individual rights or safety. Relevantly for ADM, these include:
…conducting public consultation; assessing data quality; assessing and mitigating disparate impacts and algorithmic discrimination; providing notice of the use of AI; continuously monitoring and evaluating deployed AI; and granting human consideration and remedies for adverse decisions made using AI…
Artificial Intelligence Act (EU)
1.92The European Union Artificial Intelligence Act (EU AI Act) (March 2024) establishes a risk-based approach to regulation of AI that includes the explicit prohibition of particularly high risk or harmful applications of AI, such as for social scoring, biometric categorisation and individual profiling.
1.93In relation to ADM, the ARC Centre submission observed that the EU AI Act relies on standards produced by standards bodies for the implementation of regulatory requirements, and commented:
One problem with this approach (especially for Australia, that has not relied significantly on standards in governance historically) is that the questions that arise in AI regulation bring up very difficult questions of human rights and the public interest.
1.94The ARC Centre observed, accordingly, that the use of standards may be less effective in, for example, establishing requirements to prevent harmful outcomes from the use of ADM systems.
1.95The submission of DRW indicated support for the EU AI Act approach of prohibiting ‘very high-risk AI applications’. However, it noted that the EU scheme also relies on industry self-regulation for identifying high-risk applications of generative AI systems, raising concerns about ‘problematic incentives’.
Existing government policy
1.96As set out in Chapter 2, the Australian government has implemented a range of policy proposals and initiatives seeking to introduce frameworks and guidance for industry, business and government on the responsible and ethical development and implementation of AI. As set out below, a number of these policy proposals and initiatives are relevant to the regulation of AI and ADM.
Consultation on safe and responsible AI in Australia
1.97In June 2023, the government commenced the consultation on safe and responsible AI in Australia, designed as the vehicle to inform a comprehensive policy response to the regulation of AI in Australia. Following an interim government response in January 2024, the government released its Safe and responsible AI in Australia: Proposals paper for introducing mandatory guardrails for AI in high-risk settings in September 2024 (the high-risk AI proposals paper). The paper confirmed the government’s commitment to a risk-based approach focused on regulating AI in high-risk settings, sought views on proposed principles for assessing whether AI systems should be classified as high risk, and proposed three options for implementing mandatory guardrails for AI for further public consultation.
1.98In relation to ADM specifically, the high-risk AI proposals paper indicated that the government is developing a framework for the use of ADM which will include but is not limited to ADM systems involving AI. The high-risk proposals paper indicated broadly that the framework would include requirements in relation to compliance with administrative law principles and ensuring transparency and accountability around ADM decisions.
1.99In its submission to the inquiry, the Attorney-General’s Department (AGD) advised that the development of the framework remains ongoing:
The department is developing a whole of government legal framework to support automated decision-making systems for delivery of government services, which may include systems run by AI…The framework will consider safeguards that can be put in place to mitigate potential risks associated with the use of automated systems in administrative action, including bias, discrimination and error.
1.100The AGD submission noted that the development of the framework also relates to the government’s response to a recommendation of the Robodebt Royal Commission for ‘legislative reform to introduce a consistent legal framework in which automation in government services can operate’, which has been accepted by the Australian Government. The recommendation called for the framework to identify a number of specific elements, including:
a clear path for individuals affected by decisions to seek review;
where ADM is being used, clear information to be provided explaining in plain language how the process works; and
ADM business rules and algorithms to be made available to independent expert scrutiny.
1.101The government response to the report of the Robodebt Royal Commission confirmed that the government would consider ‘legislative reform to introduce a consistent legal framework’ for automation in government services with review pathways and transparency about ADM.
1.102However, in its appearance before the committee, AGD advised that it had not yet undertaken a comprehensive audit of the extent of the use of ADM at the Commonwealth government level.
Government response to the Privacy Act review
1.103In September 2023, the government published its response to the Privacy Act Review report (February 2023). The government acknowledged the report’s ‘concerns about the transparency and integrity of decisions made using ADM systems’; and agreed to the following proposals intended to address these concerns:
the development of privacy policies setting out the types of personal information to be used in substantially automated decisions which have a legal or similarly significant effect on an individual’s rights;
the inclusion of high-level indicators of the types of decisions with a legal or similarly significant effect on an individual’s rights in the Privacy Act, supplemented by Office of the Australian Information Commissioner guidance; and
the introduction of a right for individuals to request meaningful information about how ADM decisions with legal or similarly significant effect are made; and a requirement for information to be included in privacy policies about the use of personal information for ADM decisions with legal or similarly significant effect.
Interim guidance on the use of generative AI tools
1.104In July 2023, the Digital Transformation Agency published the Interim guidance on government use of public generative AI tools, which, inter alia, provides the following guidance to Australian Public Service (APS) staff in relation to ADM:
generative AI tools must not be the final decision-maker on government advice or services;
outputs from generative AI tools must be critically examined to ensure advice and decisions reflect consideration of all relevant information and do not incorporate irrelevant or inaccurate information; and
noting that generative AI tools may have biases that may disproportionately impact some groups, such as First Nations people, people with disability, lesbian, gay, bisexual, transgender, queer and intersex (LGBTIQI) communities and multicultural communities, consideration should be given to whether there are processes in place to ensure that outcomes are fair and meet community expectations.
AI Ethics Framework
1.105In November 2019, the Department of Industry, Science and Resources released the AI Ethics Framework, which is intended to assist businesses and government to responsibly and ethically design, develop, and implement AI.
1.106The framework includes eight voluntary AI Ethics Principles, which broadly relate to the considerations around the use of ADM outlined above. The ethics principles include, for example, ensuring that AI systems do not result in unfair discrimination; are transparent and explainable; allow people to challenge the use or outcomes of an AI system; and are subject to human oversight and accountability for the use and outcomes of AI systems.
Committee view
1.107The evidence received by the inquiry suggests that, while ADM is already used widely by governments in the context of administrative decision-making, the advances in AI technology, and particularly the advent of generative AI, will see AI increasingly integrated within ADM processes.
1.108While ADM offers productivity gains through increased efficiency and consistency of administrative decision-making, it is widely understood as raising policy considerations around ensuring fairness, transparency, accountability and contestability for individuals impacted by ADM decisions. In Australia, this is ensured in relation to human decision-making by the administrative law system, and so in a general sense the policy challenge for ADM is to ensure that it conforms with the principles that underpin human decision-making.
1.109However, given the key risks of AI, the application of AI to ADM systems raises significant issues. The committee notes that, while concerns about bias and discrimination; transparency; and accountability arise generally in connection with any use of AI, they are compounded in the context of administrative decision-making where decisions can have major impacts on the rights and interests of individuals.
Bias and discrimination
1.110In this regard, the committee heard that the potential for bias in AI systems is well recognised and understood as being a consequence of biases in the data on which AI systems are developed and trained, or as flowing from the design or application of the algorithms used by AI systems. Biases in AI systems can result in the outputs of the system favouring or under-representing certain groups, leading to discriminatory outcomes.
1.111Where such outcomes impact already vulnerable or disadvantaged groups, AI can unintentionally reinforce existing social inequality, and the committee notes the concerns of inquiry participants that the use of AI-driven ADM by governments could replicate discriminatory outcomes at a large scale if not implemented with sufficient safeguards.
Transparency
1.112The committee heard that transparency is a key requirement for AI systems to address potential issues of bias and discrimination, as it allows for the identification of biases in training data or in the operation or ‘logic’ of the algorithm used by AI systems.
1.113However, as there are currently no transparency requirements in relation to AI, ADM systems generally operate as ‘black box’ systems in which the process or reasoning by which a decision is produced is opaque. Inquiry participants noted that the inability to understand or interrogate the internal processes of ADM systems undermines the ability of decision-makers, as well as those affected by ADM-assisted decisions, to rely on, or place trust in, ADM outcomes.
1.114The committee notes evidence suggesting that there are significant technical considerations in relation to making AI and ADM systems transparent. Given their highly technical nature, meaningful transparency of AI systems requires intelligible explanations about the development of the AI system, the data on which it is trained, the algorithm and rules by which the system operates, and any other factors relevant to how the system arrives at a decision. The committee notes that, given its predictive nature, providing for meaningful transparency of systems based on generative AI may raise particular challenges in the context of ADM.
1.115Further, noting that government decision-making is subject to the requirements of administrative law and may be subject to a range of other legal requirements depending on the context, it must also be possible to determine that ADM decisions are arrived at in compliance with all relevant legal and technical requirements.
1.116In addition to the technical challenges of providing meaningful transparency around AI-driven ADM, the committee notes the views of some stakeholders that enforced transparency of AI systems should take account of the proprietary interests of AI developers, particularly in the case of private companies that make significant investments to produce foundational AI models.
Review of ADM decisions
1.117A number of inquiry participants noted that ‘black box’ ADM systems are inimical to the administrative law and natural justice principles which provide that individuals affected by administrative decisions should be able to challenge and seek review of those decisions. This is because, without the ability to understand the basis and reasoning by which algorithm-based decisions are reached, affected persons are likely to be frustrated in seeking to challenge decisions.
1.118In this regard, there were strong calls for ADM to be reviewable and consistent with administrative law principles that support individuals’ ability to challenge and seek review of decisions. In particular, inquiry participants called for comprehensive regulatory reform to ensure that ADM is subject to requirements for the giving of reasons, including meaningful and accessible technical information about any use of AI, as well as procedurally fair processes to inform and support individuals in relation to challenging and seeking review of ADM decisions.
Accountability
1.119In terms of accountability for ADM more generally, inquiry participants broadly supported retaining human involvement in and responsibility for ADM decisions, particularly where decisions significantly impact on the rights or safety of individuals. Retaining the ‘human in the loop’ was seen as critical to guarding against AI bias and discriminatory outcomes, ensuring that ADM is used to augment rather than supplant the professional skill and judgement of human decision-makers, and ensuring clear lines of accountability where ADM decisions lead to harm.
1.120However, the committee heard that there is a need for greater legal and regulatory clarity around the use of ADM by government in terms of the requirement for human involvement as well as what decisions ADM should and should not be used for. Similarly, in professional settings such as healthcare and legal practice, inquiry participants called for the implementation of stronger legal frameworks and standards to ensure the safe, reliable and accountable use of ADM.
1.121The committee notes evidence that, in developing legal and regulatory schemes governing the use of ADM, the attribution of liability for harms arising from the use of ADM can be complicated by the distribution of responsibilities across the ‘tech stack’ that develops and deploys AI systems for myriad uses in industrial, professional and private settings.
Regulation of AI in the context of ADM
1.122While inquiry participants broadly agreed that the regulation of AI should explicitly address the issues identified in relation to the use of ADM, there were different views expressed about the most effective approach to be taken by government.
1.123A number of groups supported the implementation of rights-based approaches to regulating AI and ADM, in which, for example, human rights principles or legal standards provide a framework for assessing the impacts of ADM as well as providing the basis for challenging, and seeking remedies for any harms arising from, ADM decisions.
1.124Other inquiry participants supported specific regulatory approaches to ADM. While some groups, for example, called for the establishment of an AI-specific regulatory body to provide oversight of the use of AI and ADM in the public and private sectors, others suggested that, instead, existing regulatory schemes and laws should be reviewed and reformed as necessary to provide for sector-specific regulation of ADM.
1.125The committee also received a range of evidence concerning international approaches to the regulation of ADM, including in the United States, the European Union and Canada. The committee notes that approaches in these overseas jurisdictions reflect concern for the core policy considerations around the use of ADM revealed in the evidence to this inquiry—namely bias and discrimination; transparency; and accountability—and seek to introduce measures that could be broadly described as informed by administrative law and natural justice principles.
1.126The committee acknowledges there is already extensive work underway to address the risks presented by the increasing use of AI in ADM processes. The review of the Privacy Act made three recommendations regarding ADM regulation, which have all been agreed to by the Attorney-General. The committee supports the implementation of these recommendations, particularly Proposal 19.3, which calls for the introduction of a right for individuals to request meaningful information about how substantially automated decisions with legal effect are made. The committee agrees that it is essential that significant ADM decisions, including those involving AI, are transparent and explainable, and that those impacted by such decisions have a right to obtain explanations.
1.127That the Australian Government implement the recommendations pertaining to automated decision-making in the review of the Privacy Act, including Proposal 19.3 to introduce a right for individuals to request meaningful information about how substantially automated decisions with legal or similarly significant effect are made.
1.128The committee notes the government’s commitment to the development of a policy and legal framework for the use of ADM by government that will include requirements for compliance with administrative law principles and consideration of safeguards for the use of ADM, including in relation to bias and discrimination; transparency; and accountability. The consultation process, which started shortly before the finalisation of this report, follows on from recommendations 17.1 and 17.2 of the Robodebt Royal Commission, which called for reform of the legal framework in which ADM operates in government services, and the establishment of a body to monitor such decisions. These recommendations were accepted by the Australian Government and are supported by the committee.
1.129The development of the ADM framework takes place in the wider context of the government’s commitment to risk-based regulation of AI, and ongoing consultation on the principles for assessing high-risk AI applications and preferred model for implementing mandatory guardrails around such uses. The guardrails outlined in the proposals paper cover many of the issues raised with the committee and discussed in this chapter, including guardrail 3 concerning bias and discrimination in datasets, guardrail 5 concerning human oversight of AI processes, guardrail 6 concerning AI-enabled decisions, guardrail 7 concerning procedures to challenge or review the outcomes of AI processes, and guardrail 8 concerning transparency. The committee supports the inclusion of these matters within the guardrails applying to high-risk uses of AI.
1.130That the Australian Government implement recommendations 17.1 and 17.2 of the Robodebt Royal Commission pertaining to the establishment of a consistent legal framework covering ADM in government services and a body to monitor such decisions. This process should be informed by the consultation process currently being led by the Attorney-General’s Department and be harmonious with the guardrails for high-risk uses of AI being developed by the Department of Industry, Science and Resources.