Chapter 2
Regulating the AI industry in Australia
2.1This chapter considers the evidence received by the inquiry in relation to regulating the Artificial Intelligence (AI) industry in Australia, including:
the risks of AI technologies;
AI policy development in Australia and overseas; and
potential approaches to regulating and mitigating the risks of AI in Australia.
Risks of AI
2.2AI technology brings with it a number of risks that can arise from both the characteristics of the technology and its potential uses or applications.
2.3On the issue of AI risks generally, the Law Council of Australia submission stated:
There are foreseeable risks and harms arising from the adoption of AI, as well as risks that may only come to light as the technologies mature, and as new technologies enter the market. Many risks have already been extensively documented and can be categorised in terms of the technical risks, the human rights/societal risks, and what has been described as the ‘existential risks’ arising out of concerns of what it means to be human and how we understand human machine interactions.
Bias, discrimination and error
2.4A major and widely recognised risk of AI is the capacity of AI systems to generate results, or decisions in the case of Automated Decision Making (ADM), that are biased. The problem of bias, also referred to as ‘algorithmic bias’, can arise from AI design or bias within the data used to train an AI system. The submission of Dr Darcy Allen, Professor Chris Berg and Dr Aaron Lane explained:
The biases in generative AI models are, in part, a reflection of the biases inherent in humans. These models are trained on vast datasets…Unsurprisingly biases from the datasets become embedded in the models. This is [an AI system] capturing the prevailing tendencies, preferences, and prejudices of the data it has been trained on.
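The mechanism described above can be illustrated with a deliberately simplified sketch. The example below is not drawn from any submission: it uses entirely hypothetical 'hiring' data in which past decisions required a higher score from one group, and shows that a naive model fitted to that history learns and then reproduces the same disparity. All group labels, scores and thresholds are invented for illustration only.

```python
# A deliberately simplified, hypothetical illustration: past 'hiring' decisions
# required a higher score from group B, and a naive model fitted to that
# history learns and repeats the same disparity. All data here is invented.
import random

random.seed(0)

def historical_decision(group, score):
    """Hypothetical past process: group B applicants needed a higher score."""
    threshold = 60 if group == "A" else 75   # the human bias embedded in the data
    return score >= threshold

# Build 'training data' that reflects the biased historical process.
training = []
for group in ("A", "B"):
    for _ in range(5000):
        score = random.randint(40, 100)
        training.append((group, score, historical_decision(group, score)))

def learn_threshold(group):
    """'Train' a trivial model: the lowest score ever hired in this group."""
    return min(score for g, score, hired in training if g == group and hired)

model = {group: learn_threshold(group) for group in ("A", "B")}
print("Learned thresholds:", model)            # roughly {'A': 60, 'B': 75}

# Two equally qualified applicants are now treated differently by the model.
for group in ("A", "B"):
    print(f"Group {group}, score 70, predicted hire:", 70 >= model[group])
```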
2.5AI bias can arise not only from biases embedded in datasets but also ‘where data quality is low or poorly aligned to the context of its use’. The joint submission of the Allens Hub for Technology, Law and Innovation and the Disability Innovation Institute provided the following discussion of AI bias in the context of ADM in the disability field:
It is now well established that the use of AI systems often inadvertently exacerbates issues of bias against population groups and communities that are already marginalised by virtue of sex, gender, class, race or other attribute, including disability…
In the case of disability, AI-based classification has been proposed to determine eligibility for disability support funding from the NDIS. However, many people with disability and their representative organisations are concerned that AI systems designed around statistical norms have difficulty with statistically anomalous populations and with the diverse, complex and nuanced realities of living with disability.
2.6Where AI bias occurs in connection with ADM or AI-assisted decision making, such bias can lead to or entrench unfairness or discrimination in decision making. The Department of Home Affairs (DHA) submission commented on the potential for under-representation of minority groups and small communities in datasets to create AI bias and lead to unfair or discriminatory outcomes:
AI also presents the risk of minority groups and small communities being misrepresented in AI models. Under representation in underlying training datasets could result in disparities and unconscious systematic bias between the quality of services, or excessive scrutiny from authorities between majority and minority groups.
2.7The Australian government’s 2023 Safe and responsible AI in Australia discussion paper (2023 AI discussion paper) provided the following examples of AI bias leading to discrimination against individuals based on race, sex or other categories that are protected by Australian anti-discrimination laws:
racial discrimination where AI has been used to predict recidivism which disproportionately targets minority groups;
educational grading algorithms favouring students in higher performing schools; [and]
recruitment algorithms prioritising male over female candidates.
2.8In addition to issues of bias and discrimination, an acknowledged risk of AI is the potential for generative AI systems to produce errors in generated results—also referred to as ‘hallucinations’. The Law Council of Australia submission explained:
Generative AI technologies…can be user-friendly and can automate various tasks quickly to provide users with the data they need. However, their outputs have been criticised as being inaccurate, untruthful, and misleading at times, commonly referred to as the technology producing ‘hallucinations’.
2.9The submission of Dr Darcy Allen, Professor Chris Berg and Dr Aaron Lane explained that the capacity for generative AI to produce errors or hallucinations can be understood as arising from the intrinsic predictive character of the technology:
Unlike traditional search engines designed for delivering accurate, factual information, generative AI operates as a prediction engine. This key distinction underscores its primary purpose: fostering creativity rather than ensuring accuracy. As non-deterministic systems, generative AI models excel in creativity. This creative ability propels their applicability across many new domains as a general purpose technology. But while the non-determinism of generative AI models is the source of their benefits, it also contributes to what are often termed as ‘hallucinations’ in their outputs. These are instances where the AI generates content that — while potentially unique, creative and even plausible — may not be factual.
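The predictive, non-deterministic behaviour described above can be illustrated with a minimal sketch. The toy vocabulary and scores below are invented purely for illustration; real generative models operate over very large vocabularies and long contexts, but the principle is the same: the model samples from a probability distribution over plausible next tokens, so a fluent answer is not necessarily a factual one.

```python
# Minimal sketch: a generative model predicts a probability distribution over
# the next token and samples from it. The scores below are invented purely to
# illustrate why fluent output is not necessarily factual output.
import math
import random

random.seed(1)

# Hypothetical model scores (logits) for the word following
# "The capital of Australia is".
logits = {"Canberra": 2.0, "Sydney": 1.6, "Melbourne": 1.2, "Auckland": 0.3}

def sample_next(logits, temperature=1.0):
    """Softmax the scores, then draw one token at random in proportion."""
    weights = {word: math.exp(score / temperature) for word, score in logits.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    running = 0.0
    for word, weight in weights.items():
        running += weight
        if r <= running:
            return word
    return word  # fallback for floating-point edge cases

# The correct answer is the most probable, but sampling still produces
# plausible-sounding wrong answers some of the time -- one simple way to
# think about 'hallucinations'.
print([sample_next(logits) for _ in range(10)])
```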
2.10The capacity for errors in generated content naturally gives rise to questions about the reliability of AI outputs, and to potentially significant consequences in certain contexts. The Australian Publishers Association, for example, noted:
…generative AI can generate incorrect or misleading information with a high level of confidence. This is problematic in critical sectors such as healthcare, legal services, and scientific research.
Transparency
2.11Another significant risk in relation to AI relates to the transparency of AI systems. The Tech Council of Australia noted that transparency of AI systems is ‘a key principle at the highest levels of international governance and for industry when it comes to responsible AI adoption’. As with the issue of bias and discrimination, the issue of transparency is particularly significant in the context of ADM or AI-assisted decision making.
2.12The concept of transparency of AI systems can be understood as the ability to see into an AI system to ‘understand the nature of the data, connections, algorithms and computations that generate a system’s behaviour including its techniques and logic’.
2.13The Australian government’s interim response to its consultation on the 2023 Safe and responsible AI in Australia discussion paper noted that a lack of transparency in AI systems:
…can make it difficult to identify harms, predict sources of error, establish accountability, explain model outcomes and assure quality. For example, if job applications are assessed by ‘black box’ AI systems (where internal workings are automated and invisible), people affected by discriminatory outcomes may have limited ability to understand or question decisions.
2.14Similarly, the Law Council of Australia expressed its concern that the use of ‘black box’ AI systems for ADM means that the ‘logic behind decisions made cannot be traced or explained’. It noted that reliance on AI-based or AI-assisted ADM for government decision-making therefore raises significant issues from an administrative law perspective.
2.15The transparency of AI models is a global concern. One leading attempt to quantify transparency is the Foundation Model Transparency Index (the Index), produced by the Stanford University Centre for Research on Foundation Models. The Index assesses each model against 100 transparency indicators, split across three categories: upstream (the resources involved in developing a model); the model itself and its properties; and the downstream use of the model. In the most recent edition of the Index, published in May 2024, some of the most prominent models, including OpenAI’s GPT-4, Google’s Gemini and Amazon’s Titan, received among the lowest scores, at 49, 47 and 41 out of 100 respectively. Across all foundation models, the key area of opacity is data, specifically the presence of copyrighted, licensed or personal information in training datasets.
Privacy and data security
2.16AI technologies often involve the use of significant amounts of personal data. This can be due to the large data sets that are used to train, and are thus incorporated into, AI systems, as well as the personal information that is gathered by or fed into AI systems and used to generate outputs. The Attorney-General’s Department (AGD) submission explained:
Incorporating AI technologies into products and services can amplify privacy risks through increases in scale, scope, frequency or intensity of personal information handling.
2.17AGD observed that the ‘unique capabilities of AI present opportunities but also additional privacy risks’. For example:
…AI may more readily identify an individual from disparate sources of information, infer sensitive attributes about individuals from information, use personal information to influence consumer behaviour (for example, through content recommendations), and automate decisions that have a legal (or substantially similar) effect on individuals.
2.18The risks associated with the collection and use of personal information by AI also include the ‘inappropriate collection and use of personal information’, as well as the leakage, unauthorised disclosure or de-anonymisation of personal information. In this regard, the Accenture submission observed:
The adoption of AI systems requires organisations to have robust data security policies and practices in place, both to protect data from external threats, as well as internal employees who should not have access to the data stored and generated by these systems.
If not managed correctly, there is an increased risk associated with data breaches and privacy violations, data manipulation…and regulatory compliance risks.
2.19The DHA submission provided a national security perspective on the issue of data security:
AI will amplify the amount and type of data being collected as commercial incentives drive AI developers to collect more data to support the development of more mature language models. Hostile actors will be motivated to seek and aggregate data they steal or obtain from data breaches to enhance models they develop. AI capabilities trained on personal and sensitive data have potential to accelerate adversaries’ efforts to erode our technological advantage and to target our networks, systems and people.
2.20The committee raised the issue of privacy with the large multinational technology companies developing general purpose AI models. In response to questions asking how they scrape and curate data for their training sets, Meta, Amazon and Google each said they use publicly available information to train their products, and pointed to the robots.txt exclusion protocol as a way for web domain holders to block access to the data scraping process on an opt-out basis.
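As a simple illustration of how the robots.txt exclusion protocol operates on an opt-out basis, the sketch below uses Python's standard urllib.robotparser module. The crawler name 'ExampleAICrawler' and the rules shown are hypothetical: a site operator must add rules of this kind for the opt-out to apply, and the exclusion only takes effect for crawlers that choose to honour the protocol.

```python
# Hypothetical robots.txt rules: the site opts out of scraping by a particular
# AI crawler while allowing all other agents. Honouring these rules is
# voluntary on the crawler's part.
from urllib.robotparser import RobotFileParser

hypothetical_rules = """
User-agent: ExampleAICrawler
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(hypothetical_rules)

# A compliant crawler checks permission before fetching each page.
print(parser.can_fetch("ExampleAICrawler", "https://example.com/articles/1"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/articles/1"))      # True
```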
2.21The three platforms also collect and store extensive caches of personal or private information from the users of their other services. It is these massive stores of privately held data that provide Meta, Amazon, Google and other large technology companies with a competitive advantage in the development of large language models (LLMs). When asked how they use this data to train their AI products, the platforms gave largely opaque responses.
2.22Amazon Australia and New Zealand Head of Public Policy, Mr Matt Levey, when asked whether audio captured by Amazon’s Alexa devices in people’s homes has been used to train Amazon’s AI products, confirmed that this occurs for a ‘limited number of voice recordings’ in order to ‘improve the service’. When further asked what proportion of recordings are used in this way, Mr Levey said he would provide that data at a later date. He did not do so; in response to a list of questions about Amazon’s use of Alexa-captured data, Amazon instead pointed to its Terms of Use and Privacy Policy, which did not provide answers to the majority of the questions.
2.23This refusal to directly answer questions was an ongoing theme in the responses received from Amazon, Meta and Google. For example, when asked whether Amazon uses content published or stored on Prime Video, Kindle, Audible and other Amazon platforms to train its model, Amazon said ‘we don’t disclose specific sources of our training data.’
2.24When asked about the use of user data from Google’s suite of products to train the company’s AI products, Ms Tulsee Doshi, Google Product Director, Responsible AI, said that ‘in the context of Google Cloud and Workspace…we promised that by default Google does not use customer data for model-training purposes unless a customer has provided written permission to do so or has opted in.’
2.25With respect to other Google services, the responses provided were less clear. At the hearing, Ms Doshi was asked about user data from Gmail and Google Books, and responded ‘our models are trained on publicly available information from the web’. In follow up questions in writing, Google was provided with a list of 28 Google products and services and asked from which of those Google has taken user data for the purposes of training AI products, to which Google again responded that it only trains its AI models on publicly available data, and referred to its privacy policy.
2.26Meta provided similarly opaque responses to a detailed list of questions provided on notice. While Meta confirmed that it has used user content from Facebook and Instagram published since 2007 to train its AI models, provided the content had a privacy setting of ‘public’, it did not answer questions about whether it also used user content from the private messaging applications Messenger and WhatsApp.
2.27There were numerous other important questions about Meta’s use of user data to train its AI products that the company chose not to answer. For example, Meta highlighted the fact that it does not use data from the accounts of users under 18 to train its models as an example of its responsible approach to AI. However, it did not answer questions about whether that exclusion extends to photos of children posted by account holders who are over 18 (for example, parents posting pictures of their children) and, further, did not confirm or deny whether it includes photos posted by users who were children at the time of posting but have since turned 18.
2.28Meta was also asked about whether a user of its social platforms in 2007 could have knowingly consented to their content being used to train AI technology that would not exist for over a decade, to which Meta’s Director of Global Privacy Policy, Ms Melinda Claybaugh, responded: ‘I can’t speak to what people did or did not know.’ A follow up question on notice asking how this could be possible was not answered.
2.29All three companies repeatedly referred the committee to their privacy policies and terms of use as justification for the use of some user data to train their AI products. With the exception of Google Cloud and Workspace, the use of user data to train AI products was conducted on an opt-out rather than opt-in basis, although in order for a user to be aware of their right to opt out they would need to read the privacy policies. A recent study found that it would take an Australian 46 hours a month on average to read every privacy policy they encounter, based on the average length of each policy and the number of policies Australians are confronted with.
High risk and problematic uses of AI
2.30AI technology brings with it significant risks arising from the potential for it to be used for high-risk or otherwise highly problematic purposes.
2.31The Law Council of Australia, for example, identified a number of high-risk uses that are listed as banned uses under the European Union (EU) Artificial Intelligence Act:
social scoring;
assessing the risk of an individual committing criminal offences solely based on profiling or personality traits;
‘real-time’ remote biometric identification in publicly accessible spaces for law enforcement;
biometric categorisation systems inferring sensitive attributes; and
compiling facial recognition databases.
2.32Generative AI also creates a significant risk insofar as it can be used to produce deep fakes and other material able to be used for nefarious purposes such as perpetrating frauds and scams; sowing dissent; and influencing election outcomes. The Law Council of Australia submission noted:
AI systems can produce light, sound, images, video, text and other phenomena (AI artefacts) which makes it very difficult if not impossible to distinguish AI artefacts from human artefacts. In some contexts, this will create a serious risk of humans relying on an AI artefact as if it was a human artefact, and acting to their detriment. In such contexts, there will be a strong incentive for deception and scamming using AI artefacts.
2.33Deep fakes are AI generated images, videos or audio that realistically depict actual or synthetic people. The RMIT Enterprise AI and Data Analytics Hub identified the significant potential for deep fakes to be used in harmful ways:
An individual may suffer cyberbullying through deep fakes on social media; a business’s service or products may be flooded with false negative reviews; and ultimately, the biggest threat that it can bring is a broad decline in social trust as a result of misinformation and propaganda that undermines the trust in government and democratic institutions.
2.34The use of deep fakes and other AI-generated material or AI tools to harm democracy, sow dissent and erode trust in public institutions is perhaps one of the most significant risks of AI.
2.35The DHA submission stated that, while the ‘assessment of the practical impacts of AI on democracy and trust in institutions is still foundational’, AI could challenge traditional areas of strength of Australian democracy including ‘strong institutions, information integrity and social inclusion’. It noted the ability of generative AI in particular to facilitate malicious actors and threaten democratic representation, accountability and trust:
Threat to representation: Generative AI allows anyone – from passionate citizens to malicious actors – to create unique letters, emails and social media posts that skew elected officials’ perceptions of constituent sentiment, undermining genuine representation.
Threat to accountability: AI-generated information operations and smear campaigns could unfairly influence perceptions of elected representatives, undermining elections as a mechanism of accountability since the basis for people’s vote is factually dubious.
Threat to trust: A proliferation of false and misleading information may make people sceptical of the entire information ecosystem, in turn eroding the trust that fuels civic engagement, political participation and confidence in institutions, and potentially exacerbating polarisation.
Frontier AI models and catastrophic risks of AI
2.36Frontier AI models are general purpose AI systems with capabilities that could severely threaten public safety and global security—for example, AI systems that could be used for designing chemical weapons, exploiting vulnerabilities in safety-critical software systems, synthesising persuasive disinformation at scale, or evading human control.
2.37In November 2023, Australia was one of 28 signatories to the Bletchley Declaration (the declaration) at the first AI Safety Summit, held at Bletchley Park in the United Kingdom. As noted in the Good Ancestors Policy submission, the declaration identified the potential for frontier AI to pose ‘serious, even catastrophic, harm’ due to its capabilities being ‘not fully understood and therefore hard to predict’.
2.38Signatories to the declaration affirmed their responsibility for ensuring the safety of AI systems, and encouraged:
…all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emerge, in particular to prevent misuse and issues of control, and the amplification of other risks.
2.39Good Ancestors Policy noted that, by signing the declaration, Australia had committed to:
developing policies, including appropriate evaluation metrics and tools for safety research;
supporting an internationally inclusive network of scientific research on frontier AI safety; and
intensifying cooperation with other nations on risks from frontier AI.
AI policy development in Australia
2.40This section provides an overview of recent AI policy developments in Australia.
2.41Given the rapid advances and increasing use of AI technology in recent years, governments in Australia and around the world have been developing a range of policy responses seeking to address its very significant potential risks and harms. The Australian Government’s 2023 AI discussion paper noted that there is a relationship between Australia’s policy responses to AI and the policies being implemented by other countries:
While Australia already has some safeguards in place for AI and the responses to AI are at an early stage globally, it is not alone in weighing whether further regulatory and governance mechanisms are required to mitigate emerging risks. Our ability to take advantage of AI supplied globally and support the growth of AI in Australia will be impacted by the extent to which Australia’s responses are consistent with responses overseas. However, the early responses of other jurisdictions vary.
2.42The 2023 AI discussion paper observed that Australia has relatively low adoption rates of AI, due in part to low levels of public trust and confidence in AI technologies and systems; and that a considered regulatory and governance response, including consideration of the existing regulatory frameworks, is therefore required:
A starting point for considering any response is an understanding of the extent to which our existing regulatory frameworks provide these safeguards. These existing regulations include our consumer, corporate, criminal, online safety, administrative, copyright, intellectual property and privacy laws.
2.43The Department of Industry, Science and Resources submission further noted that, through its consultation on the 2023 Safe and responsible AI in Australia discussion paper, the government has acknowledged that the current regulatory framework in Australia does not sufficiently address the known risks presented by AI, and that existing laws do not adequately guard against those risks.
AI Ethics framework (2019)
2.44The AI Ethics Framework was released by the Department of Industry, Science and Resources in November 2019, with the aim of guiding businesses and government to design, develop, and implement AI responsibly.
2.45The framework includes eight voluntary AI Ethics Principles, intended to:
achieve safer, more reliable and fairer outcomes for all Australians;
reduce the risk of negative impact on those affected by AI applications; and
help businesses and governments to practise the highest ethical standards when designing, developing and implementing AI.
2.46The AI Ethics Principles are as follows:
Human, societal and environmental wellbeing: AI systems should benefit individuals, society and the environment.
Human-centred values: AI systems should respect human rights, diversity, and the autonomy of individuals.
Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
Reliability and safety: AI systems should reliably operate in accordance with their intended purpose.
Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.
Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.
Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.
2.47In 2021, the industry department conducted the AI Ethics Principles Pilot, in which it worked with Australian businesses to road test the ethics principles to identify challenges to their implementation.
National Artificial Intelligence Centre (2021)
2.48The National Artificial Intelligence Centre (NAIC) was established in 2021 to support and accelerate Australia’s AI industry. The role of the NAIC is to:
support AI adoption by small and medium businesses by addressing barriers and challenges;
grow an Australian AI industry;
convene the AI ecosystem; and
uplift safe and responsible AI practice.
2.49The NAIC has established a number of AI initiatives such as:
the Responsible AI Network (RAIN), which brings together experts, regulatory bodies, training organisations and practitioners to focus on AI solutions for Australian industry; and
the AI Sprint, a three-month competitive program that aims to help startups and entrepreneurs quickly create AI solutions addressing issues such as cost of living, governance, supply chain resilience, human and environmental wellbeing, and workforce transformation. A second stage of the program provides participants with support and resources to develop and showcase their AI prototypes.
AI Adopt Program (2023)
2.50On 8 December 2023, the industry minister announced the AI Adopt Program, designed to provide $17 million to establish up to five new centres to support Small to Medium Enterprises (SMEs) to make informed decisions about using AI to improve their business.
2.51The purpose of the AI Adopt centres is to showcase the capabilities of AI; provide guidance on responsible and efficient adoption of AI; and provide specialist skills training to help SMEs effectively manage AI.
2.52Grant applications of between $3 million and $5 million for businesses, organisations and research organisations to create the AI Adopt centres closed on 29 January 2024. Grant recipients were announced in May 2024.
Consultation on safe and responsible AI in Australia (2023)
2.53In mid-2023, the Australian Government conducted a consultation on safe and responsible AI in Australia (the consultation), which sought advice on steps Australia could take to mitigate the potential risks of AI. The purpose of the consultation was to identify:
‘potential gaps in the existing domestic governance landscape and [identify] any possible additional AI governance mechanisms to support the development and adoption of AI’; and
governance mechanisms to ensure AI is used safely and responsibly, including regulations, standards, tools, frameworks, principles and business practices.
2.54The consultation received 447 public submissions (of a total of 510 submissions), which can be accessed on the consultation website.
2.55In January 2024, the government released its interim response to the consultation. The interim response made the following observations on the status of AI in Australia:
AI can create new jobs, power new industries, boost productivity and benefit consumers. Highlighting the benefits presented by AI will boost community confidence
many applications of AI do not present risks that require a regulatory response, and there is a need to ensure the use of low-risk AI is largely unimpeded
our current regulatory framework does not sufficiently address risks presented by AI, particularly the high-risk applications of AI in legitimate settings, and frontier models
existing laws do not adequately prevent AI-facilitated harms before they occur, and more work is needed to ensure there is an adequate response to harms after they occur
the speed and scale that defines AI systems uniquely exacerbates harms, and in some instances makes them irreversible, such that an AI-specific response may be needed
consideration needs to be given to introducing mandatory obligations on those who develop or use AI systems that present a high risk, to ensure their AI systems are safe
the government needs to work closely with international partners to establish safety mechanisms and testing of these systems, noting that models developed overseas can be built into applications in Australia.
2.56In light of these observations, the interim response indicated that the government’s regulatory approach would be to:
…ensure the development and deployment of AI systems in Australia in legitimate, but high-risk settings, is safe and can be relied upon, while ensuring the use of AI in low-risk settings can continue to flourish largely unimpeded.
2.57The interim response indicated that the government’s immediate focus would be on considering the implementation of any necessary mandatory legal or other safeguards around AI, which it would undertake in close consultation with industry, academia and the community. The following principles were set out as guiding the government’s approach to supporting safe and responsible AI:
Risk-based approach
The Australian Government will use a risk-based framework to support the safe use of AI and prevent harms occurring from AI. This includes considering obligations on developers and deployers of AI based on the level of risk posed by the use, deployment or development of AI.
Balanced and proportionate
The Australian Government will avoid unnecessary or disproportionate burdens for businesses, the community and regulators. It will balance the need for innovation and competition with the need to protect community interests including privacy, security and public and online safety.
Collaborative and transparent
The Australian Government will be open in its engagement and work with experts from across Australia in developing its approach to the safe and responsible use of AI. It will ensure there are opportunities for public involvement and draw on technical expertise. Government actions will be clear and make it easy for those developing, implementing or using AI to know their rights and protections.
A trusted international partner
Australia will be consistent with the Bletchley Declaration and leverage its strong foundations and domestic capabilities to support global action to address AI risks. This includes substantial risks to humanity from frontier AI, addressing the high-risk applications of AI, as well as near-term risks to individuals, our institutions and our most vulnerable populations.
Community first
The Australian Government will place people and communities at the centre when developing and implementing its regulatory approaches. This means helping to ensure AI is designed, developed and deployed to consider the needs, abilities and social context of all people.
2.58The interim response indicated that, to further its ‘overall objective to maximise the opportunities that AI presents for our economy and society’, the government’s next steps would relate to:
preventing harms from occurring through testing, transparency and accountability;
clarifying and strengthening laws to safeguard citizens;
working internationally to support the safe development and deployment of AI; and
maximising the benefits of AI.
Voluntary AI Safety Standard (2024)
2.59At the request of the government, the NAIC is developing a Voluntary AI Safety Standard, which will help organisations using AI to achieve best practice for safe use of AI.
2.60The NAIC convened a meeting of leading AI specialists in February 2024 to develop the scope, design principles and core content of the voluntary standard. Roundtables hosted by Responsible AI Network partners were held in March 2024, with key insights and early content from the roundtables tested with a cross section of stakeholders.
Temporary AI Expert Group (2024)
2.61In February 2024, the industry minister announced the establishment of a Temporary AI Expert Group, which was to operate until 30 June 2024. This action arose from the government’s interim response to its 2023 consultation.
2.62The group included experts from a range of areas including law, ethics and technology, and its purpose was to advise government on testing, transparency, and accountability measures for AI in legitimate but high-risk settings to ensure the safety of AI systems.
2.63The group was to consider a definition of ‘high risk’ in relation to AI technologies and uses; options for mandatory guardrail measures for high-risk systems, with a focus on testing, transparency and accountability; and options for regulatory mechanisms.
Budget 2024-25 AI related measures
2.64The Department of Industry, Science and Resources submission states that the government provided $39.9 million over five years for development of policy and capability to support the adoption and use of AI technology. The related Budget measures include:
establishment of a permanent AI Advisory Body (effectively to continue the role carried out by the temporary AI Expert Group) to advise on AI capability development and regulatory settings to support the design, development and deployment of AI systems in high-risk settings.
repurposing $21.6 million to bring the NAIC into the industry department in support of its role enabling industry engagement and driving collaboration on AI.
providing $11.5 million over 2024-25 and 2025-26 to the industry department to support its role of analysing industry capability and coordinating the government’s safe and responsible AI agenda.
Proposals paper on guardrails for AI in high-risk settings (2024)
2.65In September 2024, the government released a proposals paper titled Introducing mandatory guardrails for AI in high-risk settings: proposals paper (the proposals paper). The proposals paper followed and built upon the government’s 2023 consultation on safe and responsible AI in Australia, as well as the government’s interim response to that consultation, released in January 2024, which expressed its commitment to a risk-based approach to regulating AI and to ‘develop a regulatory environment that builds community trust and promotes AI adoption’.
2.66The proposals paper noted that the safe and responsible AI consultation had shown that Australia’s ‘current regulatory system is not fit for purpose to respond to the distinct risks that AI poses’, and that overseas governments are:
…reforming existing regulations and introducing new regulations to address the risks of AI, with a focus on creating preventative, risk-based guardrails that apply across the AI supply chain and throughout the AI lifecycle.
2.67In this context, the purpose of the proposals paper is to seek views on:
Defining high-risk AI: the proposed principles for determining high-risk AI settings and their potential application to general-purpose AI models;
Mandatory guardrails: 10 guardrails proposed for AI systems in high-risk settings to reduce the likelihood of harms occurring from the development and deployment of AI systems. These preventative measures would require developers and deployers of AI in high-risk settings to take steps to ensure their products are safe, including in relation to:
testing during development and in deployment to ensure systems perform as intended and meet appropriate performance metrics;
transparency about how AI products are developed and used with end-users, other actors in the AI supply chain and relevant authorities; and
accountability for governing and managing the risks associated with AI systems.
Regulatory options to mandate guardrails: 3 options for implementing the proposed mandatory guardrails:
Option 1: Domain specific approach – adapting existing regulatory frameworks to include the proposed mandatory guardrails;
Option 2: Framework approach – introducing framework legislation, with associated amendments to existing legislation; or
Option 3: Whole of economy approach – introducing a new cross-economy AI Act.
Other AI policy guidance, initiatives and inquiries
2.68A range of other recent policy guidance and initiatives are relevant to the development and use of AI technology in Australia. These include:
Automated Decision-Making Better Practice Guide (2019)
2.69The Office of the Commonwealth Ombudsman’s Automated Decision-Making Better Practice Guide was originally published in 2007 and updated in 2019. The guide assists agencies to comply with administrative law and privacy principles, and to follow best practice, when implementing AI and ADM systems.
Data and Digital Government Strategy (2023)
2.70In December 2023, the government released the Data and Digital Government Strategy, setting out its intent to harness analytical tools and techniques, including AI and machine learning, to predict service needs, gain efficiencies in agency operations, support evidence-based decisions and improve user experience.
Copyright and Artificial Intelligence Reference Group (CAIRG) (2023)
2.71Also in December 2023, the Attorney-General announced the establishment of a Copyright and Artificial Intelligence Reference Group (CAIRG) to better prepare for future copyright challenges emerging from AI, including the use of copyright material as inputs for AI systems, potential copyright infringements in AI outputs, and the copyright status of AI outputs.
Using AI to deliver public services briefing (2023)
2.72In October 2023, the Department of the Prime Minister and Cabinet published a briefing paper titled How might artificial intelligence affect the trustworthiness of public service delivery?, which explores how the use of AI to deliver public services might affect the trustworthiness of that delivery.
AI in Government Taskforce (2023)
2.73In September 2023, the government announced the establishment of the AI in Government Taskforce, jointly led by the Digital Transformation Agency (DTA) and the industry department. The purpose of the taskforce is to help the Australian Public Service (APS) to engage with and deploy AI in a way that is safe, ethical and responsible.
Government response to review of the Privacy Act 1988 (2023)
2.74As noted above, AI technologies often involve the use of significant amounts of personal data, and thus give rise to significant privacy risks. The review of the Privacy Act 1988 commenced in October 2020 with the release of an issues paper followed by a discussion paper in 2021, which put forward proposals for reforming the Act.
2.75The government response to the review, published on 28 September 2023, indicated that the reforms to the Act would include consideration of their 'interaction with related but separate work on strengthening cyber security, the use of…[AI] including automated decision making and digital identity’.
Interim guidance on government use of public generative AI tools (2023)
2.76In July 2023, the DTA and industry department issued initial interim guidance on government use of publicly available generative AI platforms.
List of Critical Technologies in the National Interest (2023)
2.77In May 2023, the government issued the List of Critical Technologies in the National Interest, which was developed through a public consultation process. The purpose of the list is to ‘align Australia’s critical technologies ecosystem [and] support consistency and coordination across related government activity’.
2.78The list identifies critical technology fields for which Australia:
has research and other relevant capabilities;
needs uninterrupted access through trusted supply chains; and
must retain strategic capability or maintain awareness.
2.79The list includes AI technologies, which are defined to include:
machine learning, including neural networks and deep learning;
AI algorithms and hardware accelerators; and
natural language processing, including speech and text recognition, analysis and generation.
Productivity Commission AI research papers (2024)
2.80In February 2024, the Productivity Commission published three research papers around the theme of Making the most of the AI opportunity: productivity, regulation and data access. The three papers are:
AI uptake, productivity, and the role of government: outlining how Australia stands to benefit most from AI technology and, consequently, where governments should focus their policy efforts;
The challenges of regulating AI: considering government regulation of AI, including international approaches; and
AI raises the stakes for data policy: considering how AI raises the stakes for data policy, and how Australian policymakers can address the questions about data rights and incentives that AI presents.
Next Generation Graduates Program
2.81The Next Generation Graduates Program is delivered by the Commonwealth Scientific and Industrial Research Organisation (CSIRO). The program aims to attract and train AI and emerging technology specialists to drive growth of the Australian technology sector.
2.82The Department of Industry, Science and Resources submission noted that the 2023-24 round of the program would fund around 160 postgraduate students, including a regional stream.
Federal parliamentary inquiries
2.83A number of federal parliamentary inquiries over recent years have considered issues relating to AI policy development. These include:
Inquiry into civics education, engagement, and participation in Australia
2.84In March 2024, the Joint Standing Committee on Electoral Matters commenced an inquiry into civics education, engagement and participation in Australia.
2.85The terms of reference for the inquiry included:
…the mechanisms available to assist voters in understanding the legitimacy of information about electoral matters; the impact of artificial intelligence, foreign interference, social media mis- and disinformation; and how governments and the community can prevent or limit inaccurate or false information influencing electoral outcomes.
Inquiry into the digital transformation of workplaces
2.86In April 2024, the House of Representatives Standing Committee on Employment, Education and Training commenced an inquiry into the digital transformation of workplaces. The inquiry is considering the rapid development and uptake of automated decision making and machine learning techniques in the workplace.
Inquiry into the use of generative artificial intelligence in the Australian education system
2.87In May 2023, the House of Representatives Standing Committee on Employment, Education and Training commenced an inquiry into the use of generative artificial intelligence in the Australian education system.
2.88The inquiry considered issues and opportunities presented by generative AI and explored current and future impacts on Australia’s early childhood education, schools, and higher education sectors.
Inquiry into promoting economic dynamism, competition and business formation
2.89In January 2023, the House of Representatives Standing Committee on Economics commenced an inquiry into promoting economic dynamism, competition and business formation.
2.90The inquiry included consideration of the potential impacts and risks of AI in relation to competition.
Inquiry into the influence of international digital platforms
2.91In September 2022, the Senate Standing Committee on Economics commenced an inquiry into the influence of international digital platforms.
2.92The committee’s report, tabled in November 2023, addressed a number of the risks and challenges of generative AI such as enabling profiling on digital platforms and the generation of deep fake materials.
AI policy development in other countries
2.93This section provides an overview of significant policy initiatives being considered and implemented in overseas jurisdictions.
2.94As noted above, the challenges for Australian policymakers in relation to the development and regulation of AI technologies are global, as countries around the world seek to implement policies that realise the benefits of AI while mitigating its significant risks.
2.95As well as offering guidance on the available policy choices for policymakers, a number of submissions pointed to the need for Australia to ensure that its regulation of AI takes into account, and maintains some degree of consistency with, the approaches being implemented in other countries.
2.96The ANU Tech Policy Design Centre, for example, commented:
Tech policy and regulation does not exist in a vacuum. Policy makers should be aware of international approaches to tech policy when determining the appropriate approach for Australia, a digitally advanced liberal democratic country.
A small group of countries - often the European Union, United Kingdom, and United States of America - are the most common reference points for contextualising how countries are grappling with emerging issues. While these countries remain essential points of reference for Australia given their profile as similar digitally advanced liberal democracies, it is useful to be aware of the diverse AI governance approaches being explored by countries around the world for broader context.
2.97The submission of Deloitte Australia emphasised the importance of ensuring Australia’s approach to regulating AI is compatible with international regimes:
Implementing a flexible and globally compatible regulatory framework to facilitate cross border collaboration on AI development through shared standards and global management of challenges associated with AI will be important. Jurisdictional compatibility also reduces regulatory complexities for organisations operating in multiple countries, enabling smoother cross-border AI deployments and reduces the risk of actors gaming jurisdictional regulation.
2.98Similarly, the Australian Institute of Company Directors noted that ‘to maintain competitiveness, the Productivity Commission has recommended against Australia adopting an AI regulatory approach that is inconsistent with or more stringent than that of overseas [countries]’.
2.99The CSIRO submitted that the AI policies of other advanced-economy countries of relevance to Australia are those of Canada, the European Union, Singapore, the United Kingdom and the United States. Some key measures from those countries highlighted by submitters are set out below.
2.100Other submitters pointed to contrasting approaches in countries such as China and South Korea. Some key measures from those countries highlighted by submitters are also set out below.
Canada
2.101AI measures package: in April 2024, Canadian Prime Minister Justin Trudeau announced a package of measures to upgrade the country’s AI capabilities. The measures will make high-performance AI computing capabilities available to Canada’s AI ecosystem to build new AI models and tools, and include instruments designed to improve AI uptake and adoption by Canadian companies and to support AI startup companies.
2.102Artificial Intelligence and Data Act (AIDA): in June 2022, Canada introduced the AIDA as part of the Canadian Digital Charter Implementation Act, which, as at September 2024, remained under review by the Canadian House of Commons. The AIDA is similar to the European Union Artificial Intelligence Act (EU AI Act) (see below) in proposing a risk-based approach to AI that requires ‘high-impact AI systems to meet safety and human rights standards’. The AIDA would also introduce criminal provisions ‘to prevent reckless AI use and ensures accountability for AI systems in international and interprovincial trade’.
2.103ADM impacts assessment: The Australian Council of Social Service noted that Canada requires impact assessment of ADM systems to assess the risk of impacts on the rights, health or wellbeing of people; and requires certain safeguards such as human intervention where risks are high.
European Union
2.104EU General Data Protection Regulation (GDPR): the GDPR is an EU regulation relating to information privacy that took effect in May 2018. The GDPR includes an individual right to opt out of automated decision-making processes, and a right to request human intervention in order to contest ADM decisions.
2.105AI start-up measures: announced in January 2024, these measures are intended to support European start-ups and SMEs to build and adopt AI models. They include a modification to the regulation of the European High-Performance Computing Joint Undertaking that grants startups and the wider innovation community access to supercomputers optimised for AI.
2.106EU Artificial Intelligence Act (EU AI Act): the EU AI Act commenced on 1 August 2024 and establishes a regulatory and legal framework for AI within the EU. Universities Australia described the AI Act as:
…the first comprehensive regulation on AI…[that] will likely serve as a global standard for regulations. The Act will split [AI] applications into three categories of risk: First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool ranking applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or “high-risk” are largely unregulated.
2.107The Australian Chamber of Commerce and Industry (ACCI) submission pointed to considerations around the interaction of the AI Act with the GDPR:
One notable concern with the introduction of…[the AI Act] is the close interaction with other general regulations, e.g. General Data Protection Regulation (GDPR): although each law has a different focus, since AI relies on data (in many cases personal data), the overlap is significant and could lead to overly complex and duplicated requirements for business.
Singapore
2.108Model AI Governance Framework: introduced in 2019 and updated in 2020, the framework provides guidance to private sector organisations on addressing key ethical and governance issues when deploying AI solutions. It details measures to enhance transparency in AI models such as user notices and disclosures, and recommended practices for explainability and transparency including model documentation.
2.109Draft Model AI Governance Framework for Generative AI: introduced in 2024, the generative AI framework expands on the 2020 framework to address issues emerging from generative AI, including a statement on content provenance and on the importance of ‘transparency about where content comes from as useful signals for end-users’.
2.110AI Verify: an AI governance testing framework and toolkit designed to help organisations validate the performance of their AI systems against AI ethics principles through standardised tests.
2.111AI Verify Foundation (AIVF): a not-for-profit foundation to concentrate expertise from private sector organisations including Adobe, Amazon, Google, IBM and Microsoft to develop AI testing frameworks, standards and best practices.
United Kingdom
2.112National AI Strategy: published in September 2021, the aim of the strategy is to provide for long-term investment and planning in relation to AI; support the transition to an AI-enabled economy; encourage innovation and investment; and establish appropriate governance arrangements.
2.113AI policy white paper: in March 2022, the UK government published a white paper titled Establishing a pro-innovation approach to regulating AI, setting out the government’s proposals for implementing a proportionate, future-proof and pro-innovation framework for regulating AI. The Law Council of Australia noted that the key recommendation of the white paper was that:
…the UK Government should introduce principle-based regulation, with implementation to occur through existing regulators, but with central coordination to ensure proper oversight and to address cross-cutting risks.
2.114A UK government response to the white paper was published in February 2024, which continued the emphasis on ‘voluntary measures directed to AI developers’ and implementation of AI measures through existing regulators.
2.115AI Research Resource (AIRR): announced in November 2023, the AIRR is a cluster of UK-based advanced, high-performance computers that can be used by UK researchers for AI research and development, and to create foundation/frontier AI models.
2.116UK AI bill: in July 2024, the newly elected Starmer government announced its intention to introduce an AI bill.
United States
2.117Executive Order on the Safe, Secure and Trustworthy Development and Use of AI: in October 2023, President Joe Biden issued an executive order to regulate AI by mandating:
2.118…the adoption of technical standards for AI covering safety and security concerns; the passage of data privacy legislation; measures to support workers, consumers, patients and students; and measures to promote innovation and competition.
2.119The focus of the order is on guidelines and regulation as opposed to an EU AI Act-style regulatory enforcement scheme. However, some similarities with the EU approach include ‘testing and monitoring across the lifecycle of the AI system, an emphasis on post-market/post-deployment monitoring, privacy law, and adherence to cybersecurity standards’.
2.120National Artificial Intelligence Research Resource (NAIRR) pilot: launched in January 2024, the pilot is intended to connect researchers and educators with the AI computational resources, datasets and training resources needed to advance AI development. A bipartisan bill, the CREATE AI Act, to fund the NAIRR was introduced into the US Senate in July 2023.
2.121Artificial Intelligence Risk Management Framework: in July 2024, the US National Institute of Standards and Technology (NIST) released a generative AI profile of its Artificial Intelligence Risk Management Framework, developed in part in response to the 2023 executive order. The profile is intended to assist organisations to identify risks posed by generative AI and proposes actions for generative AI risk management.
China
2.122New Generation Artificial Intelligence Development Plan: introduced in 2017, the plan outlines China’s goals for AI development and covers ethical norms and regulations in relation to safe development of AI technology.
2.123Governance Principles for a New Generation Artificial Intelligence: introduced in 2019, the document sets out eight principles for the responsible development of AI.
2.124AI data regulations: China has introduced a range of specific regulations relating to security and management of data critical for AI development, including the Cybersecurity Law (2017), Data Security Law (2021) and Personal Information Protection Law (2021). Collectively, these instruments regulate how data is collected, stored, and used by imposing very strict requirements on data transfer, government access to data and data localisation.
2.125AI regulations: since 2021, China has produced a series of binding regulations described by the ANU Tech Policy Design Centre as ‘some of the most significant approaches by a major power to govern AI’. The regulations relate to ‘recommendation algorithms, deep synthesis, generative AI, and facial recognition’.
2.126Notwithstanding the suite of AI regulations in place, the ANU Tech Policy Design Centre submission noted that ‘China is now examining the creation of an overarching national AI law’.
2.127Interim regulations on generative AI: in 2022, China established a set of provisions regulating the impacts of algorithmically generated and recommended content. These have since been replaced with interim regulations on generative AI, intended to guide the AI industry while more comprehensive legislation is drafted.
South Korea
2.128National Strategy on AI: introduced in December 2019, the strategy sets out South Korea’s goals for AI development and includes substantial investment in AI technologies, fostering talent and creating an AI research and development ecosystem.
2.129Ethical Principles in Human-Centric AI: released by the government in 2020, the framework sets out five principles: safety; fairness; transparency and accountability; cooperation; and privacy and autonomy. These serve as a foundation for more detailed regulations and practices.
2.130Personal Information Protection Act (PIPA): The ANU Tech Policy Design Centre observed:
AI regulation in…[South Korea] is closely linked with their data protection laws, most notably the PIPA [that] governs the collection, use and sharing of personal data, which is crucial for operating AI systems.
2.131Sector-specific AI regulation: The ANU Tech Policy Design Centre also observed that South Korea has introduced regulations in areas where AI application is prevalent, such as autonomous vehicles, healthcare and finance.
Risk-based regulation of AI
2.132Many inquiry participants commented on the merits of pursuing a risk-based approach to regulating AI, with some drawing on examples from overseas jurisdictions to illustrate the respective advantages and disadvantages of risk-based regulation of AI.
Australia’s adoption of risk-based regulation of AI
2.133As noted above, the high-risk AI proposals paper has confirmed the government’s commitment to a risk-based approach to regulating AI in Australia, focused on high-risk settings. In confirming this approach, it noted that certain features of AI make it well suited to a ‘risk based and preventative approach to regulation’. These include AI’s potential to:
cause significant harms that could spread quickly across the economy and community;
cause harms not only to people but also to groups of people and society at large;
cause catastrophic harm, such as via weaponisation;
cause highly context-specific harms—for example, an AI system deployed for a particular purpose in one sector may present a very low risk of harm, but applied in a different sector may present a high risk of harm; and
create uncertainty about harms that might arise as AI technology evolves, requiring regulatory measures that successfully adapt to new forms of AI.
2.134The high-risk AI proposals paper continues the government’s consultation on safe and responsible AI commenced in 2023, by seeking views on the proposed principles for assessing whether AI systems should be classified as high risk, and by proposing, for further public consultation, three options for implementing mandatory guardrails for high-risk AI. It indicates that, in designing a risk-based regulatory regime for AI, the government will consider:
the levels of risk and key characteristics of known risks; and
the balance of ex ante (preventative) and ex post (remedial) regulatory measures to effectively target and mitigate known risks of AI.
Advantages and disadvantages of risk-based regulation
2.135The government’s interim response to its consultation on the 2023 Safe and responsible AI in Australia discussion paper described risk-based regulation as a framework in which AI model development and application to specific uses ‘is subject to regulatory requirements commensurate to the level of risk they pose’. It noted that a benefit of a risk-based approach is that it allows low-risk AI development and uses to proceed while AI development and applications with a higher risk of harm are targeted by regulation.
2.136The interim response listed the benefits of a risk-based regulatory approach that were identified in submissions to the consultation as including:
providing regulatory certainty through categorising risks and obligations;
minimising compliance costs for businesses that do not develop or use high-risk AI;
balancing the costs of regulation against the value of risk mitigation; and
allowing for flexibility and responsiveness as AI technology develops.
2.137The limitations of a risk-based approach identified in submissions to the consultation included:
risks not being accurately and reliably predicted and quantified;
specific risks not being well captured by general categories of risk;
unpredictable risks not being considered, particularly for frontier models designed for general-purpose application;
risk being underestimated where assessment is voluntary or carried out via self-assessment;
categorisation of risk being reductive and ineffective;
lack of an appropriate legislative foundation or regulator to administer the risk-based framework; and
diverse views on what defines high-risk AI.
Approaches to risk-based regulation
2.138The government’s interim response to its consultation on the 2023 Safe and responsible AI in Australia discussion paper indicated that, while submitters to the consultation broadly favoured a (non-voluntary) risk-based approach to AI, there were mixed views on what form such regulation should take. It observed:
Industry groups preferred an approach that focused on strengthening existing laws, through amendments or providing regulatory guidance. On the other hand, consumers and academic groups were more likely to call for new laws or a specific AI Act like those being pursued in the EU, Canada and Korea.
2.139A number of submitters indicated their support for stand-alone legislation to provide an overarching risk-based legislative scheme for regulating AI, similar to the approach taken by the EU AI Act. The Regional Universities Network, for example, observed:
[The EU AI Act] risk-based approach to regulation appears compatible with the Commonwealth Government’s interim response to the Safe and Responsible AI in Australia consultation process, which identifies the need for regulatory requirements commensurate to the level of risk they (specific AI systems) pose. A risk based approach allows low-risk AI development and application to operate freely while targeting regulatory requirements for AI development and application with a higher risk of harm.
2.140The Human Rights Law Centre also recommended the adoption of a regulatory model similar to the EU AI Act, noting in particular the benefits of its focus on transparency obligations:
The EU mandates that AI developers and deployers maintain detailed documentation of their processes and products. The EU also requires that AI-generated content is identifiable, and provides clear information about the system’s purpose and operations. Such transparency is essential for safeguarding human rights and ensuring public oversight and accountability.
2.141However, some inquiry participants questioned whether Australia should adopt a comprehensive legislative scheme in the manner of the EU AI Act. SBS, for example, submitted:
Standalone AI legislation or regulations (similar to those in the European Union) are not necessarily required in Australia, as we already have relevant regulatory frameworks…thus avoiding duplication and unnecessary layers of regulation.
2.142A number of submitters and witnesses suggested that existing laws and regulatory schemes should be reviewed and adapted to the regulation of AI, with new AI-specific legislation enacted if required to address any regulatory gaps that cannot be addressed by existing laws and regulations.
2.143The Financial Services Council, for example, suggested that the AI industry should not be ‘unduly burdened with red tape, particularly where industry-specific regulation already exists to mitigate the risks’.
2.144The Governance Institute of Australia (GIA) drew attention to existing statutory frameworks that could be adapted to regulate AI, including the Corporations Act 2001, the Privacy Act 1988 and the Australian Consumer Law within the Competition and Consumer Act 2010. The GIA recommended that the government review the effectiveness of these existing schemes for regulating AI.
2.145The Digital Industry Group Incorporated (DIGI) noted that ‘many uses of AI systems in Australia are already subject to regulatory frameworks’. Before any AI-specific laws are enacted, DIGI urged that consideration be given to clarifying and strengthening the adequacy of existing regulatory frameworks for the regulation of AI.
2.146Similarly, Mrs Lorraine Finlay, Human Rights Commissioner, Australian Human Rights Commission, observed:
…there are laws already in place when it comes to, for example, discrimination…[which] already encompass technology and advances in technology, and the general principles don't change. That's where we say there is a need to use existing laws and adapt them to changing circumstances where appropriate but then identify…[any gaps or new issues] that have emerged because of AI [which] need specific regulation.
2.147However, the government’s high-risk AI proposals paper identified a number of gaps and uncertainties in relation to the capacity of existing laws and regulatory schemes to address AI risks, including that:
many existing laws were originally drafted on the presumption that humans are taking actions and making decisions, and are unclear in respect of providing accountability and ensuring legal responsibility of AI developers and deployers;
the ability of individuals to rely on existing laws to seek redress for potential harms caused by AI is unclear and dependent on the transparency of the development and operation of AI models;
there is regulatory uncertainty about the policies needed to address the risks of AI due to gaps in knowledge at the development phase of training AI models; and
there is uncertainty about the ability of individuals to enforce their rights and the availability of appropriate remedies under existing laws, leading to enforcement gaps.
Auditing and assurance of AI systems
2.148A number of inquiry participants identified the need for systems of AI audit and assurance to support the development of a responsible AI industry in Australia.
2.149Good Ancestors Policy noted in its submission that a University of Queensland survey of public views on AI found a strong public interest in auditing of AI systems, with a requirement for mandatory pre-release auditing of AI being the second most selected priority of respondents.
2.150Infosys suggested that the development of a safe and responsible AI industry requires a new field of auditing: algorithmic auditing and assurance. The purpose of this field would be:
…to provide standards, practical codes, and regulations to assure users of the safety and legality of their algorithmic system, producing a sustainable ecosystem of trustworthy and responsible AI.
2.151The Kingston AI Group (KAI) called more specifically for the implementation of an AI auditing body and framework in Australia to ‘help build a brand in trustworthy AI for Australia’. KAI observed that, like food safety, promoting trust in AI in Australia is ‘based on holding AI companies to the statements they make about their products, not about attempting to regulate an industry that is largely based overseas’. The KAI submission explained:
Rather than relying on a static legislated set of requirements, we advocate for a dynamic approach: the creation of an AI audit body that serves as a central authority for both the private and public sectors. This body would oversee AI applications across industries and operate in an agile and time-sensitive manner, ensuring that the system is able to evolve alongside the rapid advancements in high-performance AI models.
2.152The KAI submission cited examples of overseas jurisdictions that have already implemented audit-style schemes for AI, including in the US, UK and France. A number of other inquiry participants also supported calls for the establishment of a body tasked with oversight of AI, such as an AI safety institute, with various functions including the evaluation of AI models and applications.
Committee view
Risks of AI
2.153AI technologies, and specifically the generative AI systems that have become prevalent in recent years, present a number of recognised risks that have the potential to cause significant harms.
2.154The committee heard that the problem of bias in AI arises due to embedded biases within datasets used to train AI models, such as under- or over-representation of certain social groups, or due to biases in the design or application of the algorithms used by AI systems.
2.155Depending on the context in which AI is deployed, bias in AI systems can lead to unfair, unsafe and discriminatory outcomes, particularly where the outputs of AI systems are relied on to support human decision-making as part of ADM processes. The potential for such systems to be applied to decision-making and other purposes en masse can amplify the scale of any harms flowing from AI bias.
2.156Further, the committee notes that generative AI presents particular challenges due to the creative and predictive character of that particular AI technology, which can produce inaccurate, misleading or simply untrue outputs, sometimes referred to as ‘hallucinations’. In high-risk settings such as healthcare and legal services, where individual safety or rights may be impacted, the potential for error in generative AI outputs poses significant policy and regulatory challenges.
2.157More generally, the committee heard concerns about the potential for AI systems to be applied to improper and inappropriate uses, such as social scoring, compiling facial recognition databases and biometric categorisation of individuals. Further, ‘frontier’ AI models may possess capabilities that pose severe threats to public safety and global security, such as the ability to design chemical weapons. The committee notes that, as the current generation of AI systems is powerful, widely accessible and capable of being applied to myriad new uses, preventing the use of AI for problematic and catastrophic purposes is a key consideration in the regulation of AI.
Transparency
2.158The committee heard that the transparency of AI systems is a key requirement in addressing the problem of AI bias. Transparency of AI systems allows those using or assessing the operation of an AI system to understand how that system produces its outputs. Understanding and correcting AI system bias requires transparency of all the elements that have a bearing on the way a particular AI system output is produced. This commonly includes visibility of the data on which, and the method by which, the AI system was developed, and an understanding of the algorithm or ‘logic’ by which the system generates outputs in response to user inputs. Given the technical and complex nature of these aspects of AI systems, a key consideration for the regulation of AI is to ensure that AI systems are meaningfully transparent to the users of, and those impacted by, such systems.
2.159Further, as noted above, generative AI models are predictive systems that are inherently creative and therefore resistant to definitive understanding of how a particular output is produced, which may mean that generative AI is unsuitable for use in certain high-risk settings.
2.160The glaring absence of transparency from the developers of general-purpose AI models, including from those operating in Australia such as Meta, Google and Amazon, was highlighted by submitters to this inquiry, and is a matter of record in Stanford University’s Foundation Model Transparency Index, among other sources. This issue is particularly acute when it comes to the data inputs to these models, and these companies resisted the committee’s efforts to inquire about what data is used in training datasets, including evading questions about copyrighted data, personal information, and data from the users of their ubiquitous social media platforms and other digital services.
Risk-based regulation of AI
2.161The committee notes that the key challenge for Australia and governments worldwide is to introduce policies and regulatory arrangements that effectively mitigate these risks of AI while fostering its vast potential economic and social benefits. This challenge has been compounded by the advent of generative AI, which has been and continues to be rapidly adopted into commercial products and services. In this regard, policy- and law-makers worldwide are grappling with not only the as-yet-unknown potential risks of AI but also the applications and impacts of AI technologies that are already manifest.
2.162The committee notes that, while some major jurisdictions overseas are further progressed than Australia in implementing schemes for the regulation of AI, over recent years Australia has implemented a range of policy initiatives intended to guide and foster the ethical and responsible use of AI, including the AI Ethics Framework, the National Artificial Intelligence Centre and the Voluntary AI Standard. These initiatives have served to engage and build capacity in Australian businesses and industry in relation to the development and use of AI, and they will continue to play an important role in Australia’s AI ecosystem.
2.163The committee notes that, throughout the course of the inquiry, the government has continued to progress its extensive and comprehensive consultation process on the development of safe and responsible AI in Australia, which commenced in June 2023. In January 2024, the government’s interim response to the consultation acknowledged that Australia’s legal and regulatory environment is currently insufficient to address the risks of AI and indicated that it would pursue a risk-based approach to the regulation of AI, focusing on the introduction of guardrails around the use of AI in high-risk settings.
2.164In September 2024, the government’s interim response was followed by the release of a proposals paper on the introduction of mandatory guardrails for the use of AI in high-risk settings, which seeks further consultation on the proposed principles for determining high-risk uses of AI, and the mandatory guardrails that will apply to the development and deployment of high-risk AI to reduce the risks of potential harm.
2.165The committee notes that there is broad support for the government’s commitment to pursuing a risk-based approach to the regulation of AI, with inquiry participants recognising that this approach will most efficiently address the significant risks of AI while allowing for the development and deployment of low-risk uses without undue regulatory burden. A risk-based approach to regulating AI is also consistent with approaches being implemented in significant overseas jurisdictions, which is important to ensuring that Australia’s AI industry can develop in parallel to the major AI industries in those countries.
2.166In addition, the proposals paper poses three possible approaches to mandating the guardrails for high-risk AI: adapting existing regulatory frameworks to include the proposed mandatory guardrails; introducing framework legislation, with associated amendments to existing legislation; or pursuing a whole-of-economy approach via the introduction of new, cross-economy and AI-specific legislation.
2.167The committee notes that in this regard there was a range of views presented in the evidence received by the inquiry. While there was significant support expressed for broad framework or EU AI Act-style legislation to provide a comprehensive scheme for AI regulation, there were also arguments, particularly from the big technology companies developing general-purpose AI models, in favour of reviewing and, if possible, adapting existing relevant laws and regulatory schemes to provide coverage of AI.
2.168Ultimately, the committee believes the breadth and scale of the threats posed by the use of AI in high-risk settings warrants a comprehensive, whole-of-economy approach. For that reason, and for the reasons set out in the proposals paper and by many submitters to the consultation process—including that it would result in siloed, inconsistent regulation exacerbating gaps and inconsistencies in existing regulation—the committee does not support the first option, which would merely adapt existing frameworks to the proposed guardrails.
2.169The committee sees the merits in both the second and third options put forward for implementing guardrails for AI in high-risk settings. There are specific areas of regulation where existing legislation will need to be amended to maintain and strengthen the rights and protections Australians currently enjoy, as AI becomes more ubiquitous, particularly in areas that are inherently higher risk. An example of this is provided in Chapter 4, in the context of industrial relations and work health and safety laws. Similarly, the interim report highlighted the need for reforms specific to the use of AI in political and electoral contexts.
2.170However, without a whole-of-economy approach to AI regulation there is a risk of fragmentation and, as specific areas of law or uses of AI are prioritised for reform, there is a risk that certain rights and protections fall through the cracks. Given the rapidly developing nature of AI technology, there would also be logistical challenges associated with potentially needing to frequently refresh reforms across so many different pieces of legislation.
2.171A whole-of-economy approach, such as a standalone AI Act, would address these issues and would not preclude targeted reforms to existing legislation where it is particularly warranted. The committee acknowledges that this approach could potentially introduce undesirable duplication, and the specific implementation of this approach should seek to minimise this risk. However, the committee believes the benefits of whole-of-economy coordination and coverage; regulatory efficiency; and cohesion with the approaches or intended approaches of other jurisdictions, including the EU, Canada, the UK and the US (including Colorado), outweigh these limitations.
2.172That the Australian Government introduce new, whole-of-economy, dedicated legislation to regulate high-risk uses of AI, in line with Option 3 presented in the government’s Introducing mandatory guardrails for AI in high-risk settings: proposals paper.
2.173The proposals paper also asks whether a principles-based or list-based approach should be adopted to defining high-risk uses of AI. The committee believes a purely list-based approach may be overly prescriptive and risk unintentionally omitting high-risk uses, particularly given the fast-moving nature of AI technology and its applications. On the other hand, a purely principles-based approach may create uncertainty. Accordingly, the committee supports a principles-based approach with a non-exhaustive list of examples of high-risk uses. This approach was supported in the consultation on the proposals paper by a number of submitters, ranging from the Law Council of Australia to the Tech Council of Australia.
2.174That, as part of the dedicated AI legislation, the Australian Government adopt a principles-based approach to defining high-risk AI uses, supplemented by a non-exhaustive list of explicitly defined high-risk AI uses.
2.175A significant amount of the inquiry’s time was dedicated to discussion of the structure, growth and impact of general-purpose AI models, including the LLMs produced by large multinational technology companies. Some of these firms appeared before the committee, such as Amazon, Meta and Google.
2.176There are unique risks and concerns associated with the operation of these models, which have only intensified through the committee’s direct interaction with the developers. These include the lack of transparency around the models, the massive market power these companies already enjoy in their respective fields, their record of aversion to accountability and regulatory compliance, the overt and explicit theft of copyrighted information from Australian copyright holders, the non-consensual scraping of personal and private information, the potential breadth and scale of the models’ applications, and the disappointing avoidance of this committee’s questions on these topics.
2.177The committee believes these issues warrant a regulatory response that explicitly defines general purpose AI models as high-risk. In doing so, these developers will be held to higher testing, transparency and accountability requirements than many lower-risk, lower-impact uses of AI. While some of these firms have opposed this proposition in their submissions on the proposals paper, including on the basis of compliance being burdensome, the firms with the resources to develop these models have the resources to, at the very least, comply with such requirements.
2.178That the Australian Government ensure the non-exhaustive list of high-risk AI uses explicitly includes general-purpose AI models, such as large language models (LLMs).